
/sci/ - Science & Math



File: bellu.jpg
No.6938421

There is the "Fermat observation" that you can algebraically differentiate polynomials, in the sense that if p is the polynomial and you define

p(x;h) := (p(x+h)-p(x))/h

then

p'(x) = p(x;0)


e.g. if
p(x) := 2 x^2 - x + 3
then
p(x+h)-p(x) = h ((4x-1)+2h)
so that
p(x;h) = (4x-1)+2h

We don't need to take the limit, we can just set h=0.
Now it also works on
f(x) = 1 / (a+bx)
but I couldn't find many more examples.
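
To make the prescription concrete, here is a minimal sympy sketch of both examples above; the variable names are just for illustration, and cancel() performs the "divide out the common factor of h, only then set h = 0" step.

    # OP's algebraic derivative for p(x) = 2x^2 - x + 3 and f(x) = 1/(a + b*x):
    # form (g(x+h) - g(x))/h, cancel the factor of h, then substitute h = 0.
    from sympy import symbols, together, cancel, simplify

    x, h, a, b = symbols('x h a b')

    p = 2*x**2 - x + 3
    dq_p = cancel((p.subs(x, x + h) - p) / h)    # 4*x + 2*h - 1
    print(dq_p.subs(h, 0))                       # 4*x - 1, i.e. p'(x)

    f = 1 / (a + b*x)
    num = together(f.subs(x, x + h) - f)         # -b*h / ((a + b*x)*(a + b*(x + h)))
    dq_f = cancel(num / h)                       # the h cancels here as well
    print(simplify(dq_f.subs(h, 0)))             # equivalent to -b/(a + b*x)**2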

In particular, higher powers à la 1/x^n spoil it, and functions like sin or exp are not approachable, as I don't have Taylor expansions available.

Can anyone characterize the expressions for which this works?

This relates to Hadamard's lemma.

>> No.6938426

>>6938421
>We don't need to take the limit, we can just set h=0.
Not unless you define division by zero first.

>> No.6938445
File: wootmollyava.jpg

>>6938426
I guess I made the prescription clear enough.

>> No.6938446

>>6938421
It works for any differentiable f: define F(x,h) = (f(x+h)-f(x))/h when h is not 0, and F(x,0) = f'(x).

Then *by definition* F(x,0)=f'(x). F will be continuous if f is C^1.
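
As a hedged illustration of this definition (f = sin chosen arbitrarily, not from the thread), the extended quotient F and its continuity at h = 0 can be checked with sympy:

    # F is the ordinary difference quotient for h != 0, and is *defined* to be
    # f'(x) at h = 0; continuity at h = 0 then holds because sin is C^1.
    from sympy import symbols, sin, diff, limit

    x, h = symbols('x h')
    f = sin(x)

    def F(h_value):
        if h_value == 0:
            return diff(f, x)                          # F(x, 0) := f'(x) by definition
        return (f.subs(x, x + h_value) - f) / h_value  # difference quotient otherwise

    print(F(0))               # cos(x)
    print(limit(F(h), h, 0))  # cos(x): the quotient tends to the defined value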

>> No.6938452

>>6938445
You didn't. You can't set h=0 because you're dividing by h in your definition and division by zero is not generally defined. Even in places where it is defined it won't give you the answer you seek. This is why people invented limits and infinitesimals.

>> No.6938472

>>6938446
Okay, but how would you go about doing that for
1 / (a^2+x^2)^3 defined on R\{-a,a}?

>> No.6940521
File: lady_tagalot_by_drunken_novice-d52py4v.jpg

>>6938452
Incorrect, OP is not dividing by zero. You can divide by h in the ring of polynomials in two variables because the numerator is a multiple of h. And then you can apply the evaluate-h-at-0 ring homomorphism from [the ring of polynomials in x and h] to [the ring of polynomials in x]. This is perfectly valid; at no point are we dividing by zero.
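
A sketch of that argument in sympy, using OP's polynomial as the example: the division by h happens inside the polynomial ring in x and h (exact, remainder zero), and only afterwards is the evaluate-h-at-0 map applied.

    from sympy import symbols, expand, div

    x, h = symbols('x h')

    p = 2*x**2 - x + 3
    numerator = expand(p.subs(x, x + h) - p)   # 2*h**2 + 4*h*x - h, a multiple of h
    q, r = div(numerator, h, x, h)             # polynomial division in the ring of
                                               # polynomials in x and h
    assert r == 0                              # the division is exact
    print(q)                                   # 2*h + 4*x - 1 (up to term order)
    print(q.subs(h, 0))                        # 4*x - 1, via the h -> 0 evaluation map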

>>6938446
Using continuity is cheating. The point of this is to do everything algebraically. Indeed, the definition OP gave is one I really like (for polynomials) since it doesn't depend on the scalar field or any notion of continuity, and it abstracts the notion of "cancel and set h equal to 0" into a logically valid form. Moreover, your F cannot serve as a definition of the derivative, since you defined F(x,0) in terms of the derivative.

So here's OP's question: given any field K, for which rational functions f in K(x) does the quotient [f(x+h)-f(x)]/h (a priori as an element of K(x,h)) exist in K(x)[h]? Given such an f, computing the quotient is just a matter of expanding out and then factoring out the h to cancel.

>> No.6940527

>>6940521
>you don't have to define the value of sin(x)/x at 0 because you can just divide the formal power series blah blah
yeah, nah, ur a cunt

>> No.6940806

>>6940521
>for which rational functions
all of them?

>> No.6941502

>>6940527
Not sure if trolling / flamebaiting. Fact One: given any polynomial f(x) in the polynomial ring K[x], there will exist a polynomial Q(x,h) such that f(x+h)-f(x)=h*Q(x,h) within the ring K[x,h]. Fact Two: given any polynomial Q(x,h), it can be evaluated at h=0, yielding Q(x,0). We define f'(x) to be Q(x,0). If you agree with Facts One and Two, then you have no choice but to accept that this is sensible. Note that sin() is not a polynomial, although incidentally this argument does work with formal power series (not Laurent series) too, which relates to local ring theory and the (h)-adic topology.
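
A hedged sketch of that last remark about formal power series: for sin, every term of sin(x+h) - sin(x), viewed as a series in h, carries a factor of h, so the same cancel-then-set-h-to-0 step goes through. The truncation order below is arbitrary, purely for display.

    from sympy import symbols, sin, cancel

    x, h = symbols('x h')

    num = (sin(x + h) - sin(x)).series(h, 0, 4).removeO()
    q = cancel(num / h)     # cos(x) - h*sin(x)/2 - h**2*cos(x)/6
    print(q.subs(h, 0))     # cos(x)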

If you have never heard the words "ring" or "homomorphism" before, then you are unlikely to understand my exposition here. In that case the takeaway should be this: the division (f(x+h)-f(x))/h is necessarily performed in a certain algebraic structure (the so-called ring of polynomials) **before** h is set equal to zero, so no division by zero ever occurs.

>> No.6941515

If anybody's familiar with integers mod p, here's a way to see why it works. One cannot, of course, define 0/0 in any ring. However, if phi: Z -> Z/pZ is the projection map (a ring homomorphism), then while phi(p)/phi(p) does not make sense, phi(p/p) certainly does: it's just 1 mod p. Note how the division is performed in the integers before the result is interpreted as a residue mod p. This fact is easy to miss when we do not explicitly write down the homomorphism involved.
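
The analogy in a couple of lines of code (p = 7 chosen arbitrarily): divide in the integers first, then reduce mod p.

    p = 7
    print((p // p) % p)    # phi(p/p): division done in Z, then reduced mod p -> 1
    # (p % p) / (p % p)    # phi(p)/phi(p): this would be 0/0, which is undefined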

>> No.6941573

The other explanations given are also correct, but in terms of infinitesimals the observation is this: for an infinitesimal increment h, write f(x+h) = f(x) + h*g(x,h) for some function g; taking the noninfinitesimal part of g(x,h) gives us a function of x only, and that function is the derivative.

This is the traditional, intuitive way of viewing the derivative which was formalized only somewhat recently.
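
One concrete toy model of this picture (not necessarily the formalization this poster has in mind) is the dual numbers, where the infinitesimal squares to zero; a small sketch, with the class written purely for illustration:

    # Dual numbers a + b*eps with eps**2 = 0.  Evaluating a polynomial at x + eps
    # gives p(x) + p'(x)*eps, so the derivative is the coefficient of eps
    # (the "infinitesimal part").
    class Dual:
        def __init__(self, real, infinitesimal=0.0):
            self.re = real             # ordinary part
            self.inf = infinitesimal   # coefficient of eps

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.re + other.re, self.inf + other.inf)

        __radd__ = __add__

        def __sub__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.re - other.re, self.inf - other.inf)

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # (a + b*eps)(c + d*eps) = a*c + (a*d + b*c)*eps, since eps**2 = 0
            return Dual(self.re * other.re,
                        self.re * other.inf + self.inf * other.re)

        __rmul__ = __mul__

    def p(x):
        return 2 * x * x - 1 * x + 3   # OP's example polynomial

    val = p(Dual(5.0, 1.0))            # evaluate at x = 5 + eps
    print(val.re)                      # p(5)  = 48.0
    print(val.inf)                     # p'(5) = 19.0, the infinitesimal part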

>> No.6941605

>>6941502
>If you have never heard the words "ring" or "homomorphism" before
cute! :)

>> No.6941621

>>6941502
>Note that sin() is not a polynomial, although incidentally this argument does work with formal power series
Yes, I know that the argument works for formal power series, which is why I brought it up, and yet it is exceedingly curious that no one uses it for defining the sinc function and instead defines the sinc function to have a particular value at 0. Don't you find that interesting?

>> No.6941766

>>6941621
Are you suggesting that I am arguing this algebraic method is how we should define the derivative for real-valued functions, or how we should define values of functions at removable discontinuities? I have not argued that. I've said: (1) this algebraic definition of the derivative does not actually divide by zero (nobody seems to be contesting that anymore), and (2) it is a sensible definition of the derivative for polynomials over scalar fields that have no topology or notion of convergence or limits. If that was you arguing with me on both of these points, the first now abandoned and the second a strawman, I have to wonder: if we do not actually disagree about anything, why are you arguing with me?

(Indeed, I do not necessarily always think of polynomials as functions; they are elements of a ring, so they can also be thought of as roughly analogous to numbers.)

The standard definition of the derivative of polynomials in nontopological contexts is to simply define (x^n)' = n x^(n-1) and extend linearly. It's fine to transport the power rule over to other settings and use that as a definition, but I feel this algebraic definition is superior, or at least interesting to know about, because it abstracts the notion of "dividing by h and setting h equal to 0" into a logically valid form. A sketch comparing the two definitions is below.
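
As a hedged sketch of that comparison (the helper names and the coefficient-list convention are made up for illustration), both definitions give the same answer on OP's example:

    from sympy import symbols, Poly, cancel

    x, h = symbols('x h')

    def power_rule(coeffs):
        # Derivative of sum(c_n * x**n) via (x**n)' = n*x**(n-1), extended linearly.
        # coeffs[n] is the coefficient of x**n.
        return [n * c for n, c in enumerate(coeffs)][1:]

    def algebraic(coeffs):
        # Derivative via the quotient (p(x+h) - p(x))/h evaluated at h = 0.
        p = sum(c * x**n for n, c in enumerate(coeffs))
        q = cancel((p.subs(x, x + h) - p) / h).subs(h, 0)
        return Poly(q, x).all_coeffs()[::-1]

    print(power_rule([3, -1, 2]))   # p(x) = 2x^2 - x + 3  ->  [-1, 4], i.e. 4x - 1
    print(algebraic([3, -1, 2]))    # same answer from the algebraic definition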