
/sci/ - Science & Math



File: 27 KB, 539x450, solveit.png
No.10513165

Well

>> No.10513178

How is x^inf even defined? Sorry, I don't know if this exists as a formal definition. I am somewhat skeptical of the notation used.

>> No.10513309

>>10513165
Do you mean the limit of the derivatives? I mean sure it's e^x under any sort of convergence.
You need to define the infinite derivative though, since nth derivatives are defined inductively.

>> No.10513509

>>10513165
>>10513178
[math]\infty[/math] is not a real (or complex, or quaternionic, or octonionic) number, so [math]\displaystyle \frac{d^{\infty}y}{dx^{\infty}}[/math] by itself like that is about as meaningful as [math]\displaystyle \frac{d^{\textrm{apples}}y}{dx^{\textrm{apples}}}[/math] if we're working in any of the number systems that actually matter.
That said, even though [math]\infty[/math] is not a number itself, it is still useful in describing the behavior of numbers, so it works in the context of limits.
Instead of taking [math]\displaystyle \frac{d^{\infty}y}{dx^{\infty}}[/math], you could take [math]\displaystyle \lim_{n\to\infty} \frac{d^{n}}{dx^{n}} \left(e^{x}\right)[/math], which is [math]e^{x}[/math] (think: the derivative of [math]e^{x}[/math] is itself).
All's good, right? But wait! You could also interpret "[math]\displaystyle \frac{d^{\infty}y}{dx^{\infty}}[/math]" as [math]\displaystyle\lim_{\left(n,a\right)\to\left(\infty,e\right)^{-}} \frac{d^{n}}{dx^{n}} \left(a^{x}\right)[/math], which is [math]0[/math] (think: [math]\displaystyle \frac{d^{n}}{dx^{n}} \left(a^{x}\right) = a^{x}\ln{\left(a\right)}[/math], and [math]0 < \ln{\left(a\right)} < 1[/math] for [math]1 < a < e[/math]).
Lastly, you could interpret it as [math]\displaystyle\lim_{\left(n,a\right)\to\left(\infty,e\right)^{+}} \frac{d^{n}}{dx^{n}} \left(a^{x}\right)[/math], which would be [math]\infty[/math] (think: [math]\ln{\left(a\right)} > 1[/math] for [math]a > e[/math]).
Long story short, the moment you put [math]\infty[/math] into an equation outside of a limit is the moment you invalidate the whole thing. It's not an actual number, folks!
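If you want to poke at those three interpretations numerically, here's a rough sympy sketch of my own; the sample bases 2.6 and 2.8 and the cutoff of 10 derivatives are arbitrary picks, nothing canonical.
[code]
import sympy as sp

x = sp.symbols('x')

# The n-th derivative of a**x is a**x * ln(a)**n, so everything hinges on
# whether ln(a) sits below, at, or above 1 (i.e. a below, at, or above e).
a = sp.Symbol('a', positive=True)
print(sp.simplify(sp.diff(a**x, x, 10)))        # a**x * log(a)**10

for base in (2.6, float(sp.E), 2.8):
    # a < e: powers of ln(a) die off toward 0; a = e: they stay at 1; a > e: they blow up.
    print(base, [sp.log(base).evalf()**n for n in (10, 100, 1000)])
[/code]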

>> No.10513535

>>10513509
While everything you say is true, if you asked 100 mathematicians what they would consider the most reasonable formalization of an infinite derivative, I would bet that all of them would give your first definition.

>> No.10513594

[math]
1^\infty = \color{crimson}{\text{undefined}} \\
\displaystyle
\lim_{x \to \infty} 1^x=1
[/math]
OP's stuff goes the same way

>> No.10513699

>>10513535
That's fair, but do consider the following: they'd also probably say the most reasonable formalization of [math]\displaystyle x^{\infty}[/math] is [math]\displaystyle \lim_{n\to\infty} x^{n} = x \cdot x \cdot x \cdot \ldots[/math], but they still wouldn't say [math]1^{\infty}[/math] is [math]1 \cdot 1 \cdot 1 \cdot \ldots = 1[/math], since it could also be taken as, say, [math]\displaystyle \lim_{n\to\infty} \left(1 + \frac{z}{n}\right)^{n} = e^{z}[/math] (which could give you any complex number that isn't [math]0[/math]), or literally anything else. That's why [math]\displaystyle 1^{\infty}[/math] is an indeterminate form in limits: the expression alone doesn't give you enough information to assign it a value, even an infinite one.
(Not saying that [math]\displaystyle \frac{d^{\infty}}{dx^{\infty}} \left(e^{x}\right)[/math] is an indeterminate form, since, to my knowledge, it can't be transformed into [math]\displaystyle \frac{0}{0}[/math] or [math]\displaystyle \frac{\infty}{\infty}[/math].)
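Here's a quick numerical illustration of why [math]1^{\infty}[/math] carries no information on its own; both sequences below are "something tending to 1, raised to a power tending to infinity", yet they converge to different values (plain Python, the exponent 2 is just my choice).
[code]
import math

# Base -> 1 and exponent -> infinity in both cases, but the limits differ:
# the first sequence stays at 1, the second tends to e**2.
for n in (10, 1_000, 100_000):
    print(n, 1.0**n, (1 + 2.0 / n)**n)
print(math.e**2)   # ~7.389, the limit of the second sequence
[/code]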

>> No.10513714

>>10513535
There is no reason at all to believe that is the case. Look at what happens before you exponentiate the term: it depends on infinity. For finite derivatives it exists, but once you get into hyperreal and surreal numbers, such as infinity, you end up with an indeterminate form, namely (ln e) raised to infinity. The issue is that you may get 0, e^x, or infinity. This runs deep, because say you have some function f. You may be able to apply the Laplace or Fourier transform a finite number of times, but can you apply it an infinite number of times? Both have well-established conditions under which you can apply the transform an infinite number of times, but they don't specify what that limit must be. Fourier requires damping to zero rapidly; how rapidly is the question. Suppose you have an nth-order partial DE of a function f, and assume that f has e^x somewhere in it. If you let n go to infinity, then the derivatives only make sense for the first finitely many; the rest aren't exactly clear, due to the indeterminate form of 1 raised to infinity. But this goes beyond Fourier, Laplace, and any other transform. It also intertwines with Euler's identity relating the exponential to imaginary numbers. What happens to Euler's identity if you take the infinity-th derivative? Many things can happen; in fact it could possibly damp to zero, which would mean that when working in the complex field you are actually working in the real space, and that every increment made (derivative = slope) means you are adding hyperreals to get to the complex plane from the real plane. This slight connection would make hyperreals more important, because you are adding a hyperreal to a real value to get the derivative, but typically this effect is ignored. It cannot be ignored if you get to the complex plane via the real.

>> No.10513744

>>10513165
not following the logic, how could it be 0?

>> No.10513780

>>10513744
What is ln(1+1/n) as n approaches infinity? It is 0. The derivative comes from a clever selection of where to move exponents. Because of that careful selection, it is possible that near infinity n becomes severely damped before reaching the infinity-th derivative. This happens when e is actually slightly less than the true value of e, roughly 2.718. It means that e becomes critically damped at infinity, leading to a solution of zero. This is totally plausible.

>> No.10513795

>>10513780
i thought the derivative of f is a function describing the slope of f at every point.
not sure what that has to do with the limit of the function as n approaches infinity

>> No.10513809

>>10513165
Let D be the operator d/dx. Then you're asking for lim D^n (e^x), which is just e^x

>> No.10513819

>>10513795
You have to look at the polynomial 1/n. All that happens is that instead of converging to e, the estimate for e converges to something below e. Taking the natural log of a number slightly less than e gives something like 0.9999, and when compounded an infinite number of times this value vanishes to zero. That's all the argument is, nothing more and nothing less, but it requires looking at the bigger picture. When you evaluate e by the definition given above, you cannot do one step and then the other, because they occur simultaneously; otherwise you could evaluate it to 1^infinity. There really is no trick. However, when dealing with an infinite derivative, you have to consider that 1/n damps out to zero very slowly, while applying a linear operator an infinite number of times converges pretty rapidly. So now you have two infinite quantities to look at. You might say: hey, I can take the derivative a discrete number of times, while the same does not have to be true for evaluating e, so I might let the approximation converge prematurely before taking the next derivative. That leads to a value of e that is smaller than e. No real magic here, although you have to bring out the hyperreals to see the bigger picture.

>> No.10513871

>>10513809
Okay, forget about operators. You are dealing with two distinct infinities: one is discrete and the other is over the real values. You can only take the derivative a discrete number of times, whereas in [math] e = \lim_{n \to \infty} \left(1+ \frac{1}{n}\right)^n [/math] the infinity does not necessarily have to be natural. One thing to note about real numbers versus natural numbers: if you set your line to [0,1], then with real numbers you can set that line to [0, 1.99999...] and still be within a similar arena. Another thing you might notice is that if you want both intervals to be the same, then you have [0,1] for natural numbers and [0, 0.999...] for real numbers. So now imagine you get to evaluate the natural-number e: you have pretty much reached infinity. However, you have not yet reached the infinity-th derivative, but rather infinity minus one. Because they are both infinities, you have to adjust accordingly and make the infinity-th derivative the true infinity, which makes the approximation of e slightly smaller than e. Every time you take the derivative of the exponential, you are actually multiplying by ln(e), which is of course 1. However, it has now been stated that e has been adjusted so that it is infinitesimally smaller than e. So when you take the derivative an infinite number of times, you end up with ln(e) raised to infinity. If your e is less than the actual e, then ln(e) is less than 1, and raising any value less than 1 to infinity gives approximately 0.
All this result says is that all functions vanish near infinity. It is very intuitive, because you should expect all functions to stop growing at "infinity". Whether infinity exists or not is not the point. Any function at infinity reaches a limit and should not grow anymore, as there is nothing larger than infinity, especially if you have an absolute limit.

>> No.10514230

>>10513165
Can u solve this in laplace domain

>> No.10514243

is this a carefully veiled schizothread?

>> No.10514245

>>10513871
Shouldn't you be in another thread DABing on people?

>> No.10514302

>>10513165
Let f(x) = exp(x). For the base case n = 0 or 1, the derivative is exp(x). Now for the inductive step: assume the nth derivative of f(x) is exp(x), differentiate both sides with respect to x, and you get that the (n+1)th derivative is exp(x). There's literally no way for it to be zero.
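For what it's worth, sympy agrees if you just grind out the derivatives; a minimal sketch (the cutoff of 5 orders is arbitrary, the inductive step is the single diff inside the loop):
[code]
import sympy as sp

x = sp.symbols('x')

f = sp.exp(x)
for k in range(1, 6):
    assert sp.diff(sp.exp(x), x, k) == sp.exp(x)   # k-th derivative is still exp(x)
    f = sp.diff(f, x)                              # inductive step: differentiate once more
    assert f == sp.exp(x)
print("still exp(x) after repeated differentiation")
[/code]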

>> No.10514519

>>10513165
induction says its e**x

the proof is trivial

>> No.10514538

>>10514519
But it's really 0
You have to think in geometric terms and what infinity really means

>> No.10514550
File: 434 KB, 565x289, 1536183547742.png

>>10514538
>You have to think in geometric terms and what infinity really means

>> No.10514555

>>10514538
well why not just write the binomial formula as a series and differentiate it? your assumption is that because there are only finitely many terms in a binomial expansion, they will eventually all be reduced to zero, but if the number of terms itself becomes infinite then you will always have the terms needed to generate e**x. see the sketch below
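that intuition is easy to check with a truncated series: keep far more terms than the number of derivatives you take and the result still tracks e**x. rough sympy sketch, the cutoffs 50 and 5 are arbitrary picks of mine:
[code]
import sympy as sp

x = sp.symbols('x')

N, k = 50, 5                                        # 50 series terms, 5 derivatives
partial = sum(x**j / sp.factorial(j) for j in range(N + 1))
deriv = sp.diff(partial, x, k)                      # still a long truncation of exp(x)
print(sp.N(deriv.subs(x, 1)), sp.N(sp.exp(1)))      # nearly identical at x = 1
[/code]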

>> No.10514563

>>10513871
Holy hell you sure use a lot of words to say that some limits don't commute

>> No.10514581
File: 15 KB, 208x326, 700[1].jpg

>>10513509
>[math]\frac{d^n}{dx^n} a^x = a^x\operatorname{ln}(a)[/math]

>> No.10514709

>>10513780
Except e equals the limit of (1+1/n)^n as n goes to infinity. In which case you could take ln of that, which would be n*ln(1+1/n), and you would have to prove that goes to zero. You can't take the product of two functions that results in an indeterminate form (infinity * 0) and expect that to equal the product of their limits. This is calculus 2 material.

My guess is that it equals 1, but I haven't made a proof.
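Actually, sympy settles that guess directly; the limit is indeed 1, not 0 (a minimal check I ran, not a proof on paper):
[code]
import sympy as sp

n = sp.symbols('n', positive=True)

# The 0 * infinity form in question: n * ln(1 + 1/n) -> 1 as n -> oo.
print(sp.limit(n * sp.log(1 + 1/n), n, sp.oo))   # 1
# Exponentiating recovers e, consistent with e = lim (1 + 1/n)**n.
print(sp.limit((1 + 1/n)**n, n, sp.oo))          # E
[/code]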

>> No.10514873
File: 10 KB, 509x40, exponential.png

>>10514302
Look at it this way, bud: there is such a thing as a series expansion of e**x. This series expansion is a polynomial, which you can think of as a set of linearly independent vectors. Basically, at every point on the curve there is a line. If you take a few derivatives of e**x nothing happens, but what if you take the derivative an infinite number of times? Only one term doesn't drop out. For finite derivatives you might not care about the last term, because you just assume it is so negligible that you essentially say infinity. But at that scope it becomes really important, and you should expect it to drop out as well, eventually also reaching zero.

>> No.10514924

>>10514581
You're right; meant to write [math]\displaystyle \frac{d^{n}}{dx^{n}} \left(a^{x}\right) = a^{x}\ln^{n}{\left(a\right)}[/math]. Good catch, my friend!

>> No.10514925

>>10514709
It could be any range of material; that is not the point. Consider this: infinity is neither odd nor even. Now look at sine. Sine is an odd function, sin(-x) = -sin(x). If you take an even number of derivatives, you end up with (-1)^n sin(x). If you do it an odd number of times, you end up with cosine, plus or minus of course. So taking the derivative an infinite number of times and expecting either sine or cosine is equivalent to saying that infinity will be either odd or even. That is obviously not the case. Sine and cosine are related by the complex exponential. As you can probably now see, the derivatives of sine and cosine at infinity are undefined, but they are probably equal to zero. That is not hard to see. If you look at the series expansion, you can take a few derivatives and not do much damage, but an infinite number of derivatives gets rid of the symmetry, essentially vanishing to zero.

>> No.10514962

>>10513165
Let's assume y =/= e^x....

>> No.10514969

>>10514873
yeah but look at
>>10514555

>> No.10514983

>>10513509
You can take quaternion-fold derivatives? What the fuck are you on about, anon?

>> No.10515060

>>10514969
You have to realize that these infinities are the same. They aren't playing catch-up. When doing the series expansion, you have a discrete number of terms; when taking the derivative, you are doing it a discrete number of times. This means that the two infinities are the same. You won't always have the terms necessary; in fact, why should that be the case? You have to ensure that when dealing with infinities. Now if you had twice as many terms as the number of times you take the derivative, you could surely make that argument, but that argument no longer holds. When you look at the series expansion, you essentially say that whatever was infinity is replaced by something that will be infinity. If you apply the infinity-th derivative to each term in the series expansion, they all vanish except for the last term, and with that term you end up with a much greater infinity in the denominator that essentially reduces it to 0.

>> No.10515116

>>10515060
its actually two different limits for the two definitions of e and the infinite derivative, so which one you resolve first determines whether it's e**x or zero (i think).

>> No.10515127

>>10514983
Haha, sorry if that part was confusing. I personally have no idea how quaternion-fold differentiation or integration would work.
My point was basically just that, since [math]\infty[/math] is not in [math]\mathbb{R}[/math], [math]\mathbb{C}[/math], [math]\mathbb{H}[/math], or [math]\mathbb{O}[/math], you can't exactly do any familiar math with it, i.e. just stick it into any old equation and expect things to work out.
>>10515060
Fascinating. I've been keeping up with the conversation for a bit, and you're essentially saying that, when you ask for [math]\displaystyle \lim_{n\to\infty} \frac{d^{n}}{dx^{n}} \left(e^{x}\right)[/math], you're actually asking for [math]\displaystyle \lim_{n\to\infty} \frac{d^{n}}{dx^{n}} \left(\left(1 + \frac{x}{n}\right)^{n}\right)[/math], right? Wolfram|Alpha tells me that [math]\displaystyle \frac{d^{n}}{dx^{n}} \left(1 + \frac{x}{n}\right)^{n} = \frac{n!}{n^{n}}[/math], and it also tells me that [math]\displaystyle \lim_{n\to\infty} \frac{n!}{n^{n}}= 0[/math], so that makes sense.
Alternatively, you are asking for [math]\displaystyle \lim_{n\to\infty} \frac{d^{n}}{dx^{n}} \left(\sum_{k=0}^{n} \frac{x^{k}}{k!}\right)[/math]. Wolfram|Alpha tells me that [math]\displaystyle \frac{d^{n}}{dx^{n}} \left(\sum_{k=0}^{n} \frac{x^{k}}{k!}\right) = 1[/math], so naturally [math]\displaystyle \lim_{n\to\infty} \frac{d^{n}}{dx^{n}} \left(\sum_{k=0}^{n} \frac{x^{k}}{k!}\right) = 1[/math].
This all makes sense, actually! Not how I originally envisioned the answer as to why the OP question lacks an answer, but it makes sense.
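Those two identities are easy to spot-check for concrete [math]n[/math], by the way; here is a little sympy sketch of my own (the sample values of [math]n[/math] mean nothing special).
[code]
import sympy as sp

x = sp.symbols('x')

for n in (3, 6, 10):
    poly = (1 + x / n)**n                                    # degree-n polynomial in x
    assert sp.simplify(sp.diff(poly, x, n) - sp.factorial(n) / n**n) == 0
    partial = sum(x**k / sp.factorial(k) for k in range(n + 1))
    assert sp.simplify(sp.diff(partial, x, n) - 1) == 0      # only the x**n/n! term survives

# n!/n**n shrinks toward 0 as n grows.
print([sp.N(sp.factorial(m) / m**m, 5) for m in (5, 10, 20)])
[/code]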

>> No.10515141

heres the expression defined properly:
>theres no reason to adjoin the two limits to one variable

lim{k->inf} D**k lim{n -> inf} (1 + x/n)**n

>> No.10515153

for k = n always we have a finite number of derivatives and a finite number of terms. the final term being D**n (x/n)**n = x * (n-1)!

>> No.10515176

>>10514873
>>10514925
This is dumb as shit.
Infinity is not a number, negative and positive infinity are certainly not equal even if they were numbers, and if a limit oscillates like the infinite derivatives of sin and cos, then the limit doesn't exist. The power series of exp(x) never goes to zero, because every time you differentiate the expansion you get an infinite number of terms that is again exactly the power series of exp(x). This is basic calculus 2 that OP is fucking up.

>> No.10515180

>>10515153
oops. just (n-1)!

>> No.10515228

>>10515180
double oops. the answer is 1 i think

>> No.10515235

>>10515127
your definition of e**x for the binomial form is incorrect. is it not lim{n->inf}(1+1/n)**(nx) ??

>> No.10515240

>>10513165
freshman discovers his first limits, tries to fiddle around with notation, finds out you can't just do that. posts on 4chan. gets shut down.

>> No.10515242

>>10514873
>So for finite derivatives you might not care about the last term
there isn't a last term you middling retard

>> No.10515253

>>10515116
Partially. What you must understand is that infinity from the real numbers is larger than infinity from the natural numbers. Basically, the natural numbers are a subset of the real numbers. They don't include infinity of course, but following this assumption, a discrete number will reach its infinity before a real number reaches its infinity. So when you take the infinite derivative, you are not evaluating e necessarily, but instead evaluating a case where e is less than the actual value of e.

>> No.10515262

>>10515253
none of what you posted is true

>> No.10515266

>>10515242
Well that's obvious to me but was that so obvious to you?

>> No.10515268

>>10515176
Nope don't think so.
By the linearity of derivatives,
[math] \lim_{n \to \infty} e^x = \lim_{n \to \infty} (1) + \lim_{n \to \infty} (x) + \lim_{n \to \infty} \frac{x^2}{2} + \ldots + \lim_{n \to \infty} \frac{x^n}{n!} [/math]
Obviously all the other terms vanish do they not?

>> No.10515278 [DELETED] 

>>10515176
Sorry I mean

[math]
\lim_{n \to \infty} \frac{d^n}{dx^n} e^x = \lim_{n \to \infty} \frac{d^n}{dx^n} (1) + \lim_{n \to \infty} \frac{d^n}{dx^n} (x) + \lim_{n \to \infty} \frac{d^n}{dx^n} \frac{x^2}{2} + \ldots + \lim_{n \to \infty} \frac{d^n}{dx^n} \frac{x^n}{n!}
[/math]
By the linearity of the derivative; and likewise, both n's here, the one from the series and the one from the derivative, are natural numbers, so there really is no discrepancy.

>> No.10515282

>>10515278
whats it like being retarded?

>> No.10515291

>>10515262
How come? Okay, say you have two sets of numbers, one from the real number field and the other from the set of natural numbers. Say you have an interval, [0,2). The real numbers will reach 1.999..., whereas the set of natural numbers will only reach 1. So you have to cap your infinity at the discrete numbers.

>> No.10515297 [DELETED] 

>>10515176
Sorry I mean

[math] \lim_{n \to \infty} \frac{d^n}{dx^n} e^x = \lim_{n \to \infty} \frac{d^n}{dx^n} (1) + \lim_{n \to \infty} \frac{d^n}{dx^n} (x) + \lim_{n \to \infty} \frac{d^n}{dx^n} \frac{x^2}{2} + \ldots + \lim_{n \to \infty} \frac{d^n}{dx^n} \frac{x^n}{n!} [/math]
By the linearity of the derivative; and likewise, both n's here, the one from the series and the one from the derivative, are natural numbers, so there really is no discrepancy. You could make the argument for finite derivatives, but not for infinite derivatives.

>> No.10515302

>>10515291
for every real number there is a natural larger than it
dont use finite examples to show something is true for infinite examples

>> No.10515304

>>10515176

[math] \lim_{n \to \infty} \frac{d^n}{dx^n} e^x = \lim_{n \to \infty} \frac{d^n}{dx^n} (1) + \lim_{n \to \infty} \frac{d^n}{dx^n} (x) + \lim_{n \to \infty} \frac{d^n}{dx^n} \frac{x^2}{2} + \ldots + \lim_{n \to \infty} \frac{d^n}{dx^n} \frac{x^n}{n!} [/math]

>> No.10515314
File: 250 KB, 800x613, subsets.png

>>10515302
Wrong. Natural numbers are a subset of Real numbers.
That is easy to see with real numbers: when you have exhausted all your natural numbers, you have not yet exhausted all your real numbers.

>> No.10515321

>>10515314
you clearly don't understand how subsets work, so ill make it easy
give me a real number that is larger than every natural

>> No.10515349

>>10515321
Okay, let INF1 be infinity due to the natural numbers and INF2 be infinity due to the real numbers.

Heck, I don't even need to use any numbers. If something is a subset of something else, then all the values of that subset are in the other set by default. No matter how large the natural numbers get, they will always be a subset of the real numbers, and hence they will all be inside the real numbers. I don't even need a definition, just a bit of intuition.

>> No.10515382

>>10515349
holy balls what the hell are you talking about? both types of infinity present in both defined limits use only natural numbers. the compactness of the reals/hyperreals has absolutely nothing to do with this.

>> No.10515389

>>10515349
look here

5 < 5.5 < 6
give me any real
a
then i can find a natural just below it
b < a
and then a natural above it, b + 1 is larger
b < a < b + 1

we are talking about the size of the numbers, not the size of the set
i dont know how you can confuse the two

>> No.10515397

>>10515389
he is fucktarded basically and the answer is either 1 or e**x

>> No.10515408

>>10515389
Sure, and I can do the same thing with real numbers. Have you ever encountered set theory? The real numbers contain natural numbers such as 1, 2, 3, 4, 5, ... as well as reals like 1.2, 3.455, etc. Now say you have the interval [0,3). Note the parenthesis: it means do not include 3. Consequently the greatest natural number in this interval is 2, and the greatest real number is 2.9999... I hope things are becoming clearer.

>> No.10515411

>>10515235
My friend, you'll find that they're the same thing!
[eqn]
\begin{align*}
L &= \lim_{n\to\infty} \left(1 + \frac{x}{n}\right)^{n} \\
\ln{\left(L\right)} &= \ln{\left(\lim_{n\to\infty} \left(1 + \frac{x}{n}\right)^{n}\right)} \\
&= \lim_{n\to\infty} \ln{\left(\left(1 + \frac{x}{n}\right)^{n}\right)} \tag*{This is allowed because the logarithm is continuous on all the positives.} \\
&= \lim_{n\to\infty} n\ln{\left(1 + \frac{x}{n}\right)} \\
&= \lim_{n\to\infty} \frac{\ln{\left(1 + \frac{x}{n}\right)}}{\frac{1}{n}} \\
&= \lim_{n\to\infty} \frac{\frac{d}{dn} \left(\ln{\left(1 + \frac{x}{n}\right)}\right)}{\frac{d}{dn} \left(\frac{1}{n}\right)} \\
& \tag*{This is allowed because L'Hospital's Rule says that $\lim_{n\to c} \frac{f{\left(n\right)}}{g{\left(n\right)}} = \lim_{n\to c} \frac{f'{\left(n\right)}}{g'{\left(n\right)}}$ if $\lim_{n\to c} f{\left(n\right)} = \lim_{n\to c} g{\left(n\right)} = 0$, $\infty$, or $-\infty$.} \\
& \tag*{For the denominator, recall that $\frac{d}{dn} \left(n^{c}\right) = cn^{c-1}$, so $\frac{d}{dn} \left(\frac{1}{n}\right) = \frac{d}{dn} \left(n^{-1}\right) = -n^{-2} = -\frac{1}{n^{2}}$.} \\
& \tag*{For the numerator, recall that $\ln'{\left(u\right)} = \frac{u'}{u}$, so $\frac{d}{dn} \left(\ln{\left(1+\frac{x}{n}\right)}\right) = \frac{\frac{d}{dn}\left(1+\frac{x}{n}\right)}{1+\frac{x}{n}} = \frac{-\frac{x}{n^{2}}}{1+\frac{x}{n}}$.} \\
&= \lim_{n\to\infty} \frac{\frac{-\frac{x}{n^{2}}}{1+\frac{x}{n}}}{-\frac{1}{n^{2}}} \\
&= \lim_{n\to\infty} \frac{-\frac{x}{n^{2}}}{1+\frac{x}{n}}\cdot\frac{1}{-\frac{1}{n^{2}}} \\
&= \lim_{n\to\infty} \frac{x}{1+\frac{x}{n}} \\
&= \frac{x}{1+0}\\
\ln{\left(L\right)} &= x \\
e^{\ln{\left(L\right)}} &= e^{x} \\
L &= e^{x} \\
\lim_{n\to\infty} \left(1 + \frac{x}{n}\right)^{n} &= e^{x} && \blacksquare
\end{align*}
[/eqn]
(1/?)
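(And if the L'Hospital steps feel too slick, a blunt numerical check tells the same story; plain Python, sample point [math]x=2[/math] chosen arbitrarily.)
[code]
import math

x = 2.0
for n in (10, 1_000, 100_000):
    print(n, (1 + x / n)**n)   # creeps up toward e**2
print(math.exp(x))             # 7.38905609893065
[/code]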

>> No.10515446

>>10515408
stop using finite intervals to try and talk about infinite intervals you idiot
they dont have the same properties

show me a real number larger than every natural
read what i just wrote
show me a real number larger than every natural
every, do not give me a finite interval, they are irrelevant and have no impact on your position

>> No.10515486

>>10515446
Don't need to, it's by the definition of a subset. You can't say there is a number T in the set N that is greater than any number in the set R, because that would imply T is not in the set R, which would make N not a subset of R.

>> No.10515499

>>10515486
holy christ, learn how to read you inane shit
for any real number, there is a natural number larger than that real number
is not the same fucking statement as
there is a natural number, such that for any real number, that natural is larger

>> No.10515504

>>10515446
Take a look at this

[math] \lim_{n \to \infty} \frac{d^n}{dx^n} e^x = \lim_{n \to \infty} \frac{d^n}{dx^n} (1) + \lim_{n \to \infty} \frac{d^n}{dx^n} (x) + \lim_{n \to \infty} \frac{d^n}{dx^n} \frac{x^2}{2} + \ldots + \lim_{n \to \infty} \frac{d^n}{dx^n} \frac{x^n}{n!} [/math]

The first term drops out to zero, obviously, and the same goes for all the other terms until you reach the nth derivative of the last one. Think about it this way: when you apply the first derivative to a quadratic, it becomes a linear term; apply the second derivative and you end up with a constant; the third derivative and you get 0. The same thing happens here: just as with the n = 2 case, the nth term gets turned into a constant as well. It should be plainly obvious now that continuous infinity and discrete infinity are two entirely different concepts.

>> No.10515508

>>10515504
honestly kill yourself
you've written e^x (an infinite sum) as a finite sum, and are acting like your "manipulation" is relevant
if you have only n terms in your series you do not have e^x, you have an approximation of it

>> No.10515509

>>10513165
re

>> No.10515635

>>10515504
>It should be plainly obvious
lol
This is kind of a thing in atomic physics with the states of electrons. In the hydrogen atom, the electron has an infinite number of energy levels squeezed within about 13.6 eV of the ground state. Somehow, at the ionizing energy, the electron gains access to infinitely many more states with even higher energy than the infinite number of bound states. This new infinity of states is made of free-particle states, and the wavefunctions become Dirac delta functions in this region of the electronic phase space. The Hamiltonian matrix of the energy operator of the electron is m by n, with m = n = infinity. Somehow, even after the infinity-th diagonal position in the matrix, there is a further infinite sea of these delta-function states with energy higher than any bound state.

There's a countably infinite (discrete) number of bound states, but there are these other free-particle states which form an unquantized energy continuum. These uncountably infinite free-particle states are in some sense "denser" than the discrete (quantized) bound states.

>> No.10515669

>>10515504
you are not allowed to interchange the infinite sum and the limit

>> No.10515706

>>10515235
>>10515411
It is from here that we get [math]\displaystyle e = e^{1} = \lim_{n\to\infty} \left(1 + \frac{1}{n}\right)^{n}[/math].
You'll actually find that [math]e[/math] isn't very important. The function [math]\exp[/math] such that [math]\exp = \exp'[/math] and [math]\exp{\left(0\right)} = 1[/math], on the other hand, is very much so!
One of the properties this function has is [math]\exp{\left(x+y\right)} = \exp{\left(x\right)}\exp{\left(y\right)}[/math].
[eqn]
\begin{align*}
F{\left(x,y\right)} &= \frac{\exp{\left(x\right)}\exp{\left(y\right)}}{\exp{\left(x+y\right)}} \\
\frac{\partial F}{\partial x} &= \frac{\left(\frac{\partial}{\partial x}\left(\exp{\left(x\right)}\exp{\left(y\right)}\right)\right)\left(\exp{\left(x+y\right)}\right)-\left(\exp{\left(x\right)}\exp{\left(y\right)}\right)\left(\frac{\partial}{\partial x}\left(\exp{\left(x+y\right)}\right)\right)}{\left(\exp{\left(x+y\right)}\right)^{2}} \tag*{This is allowed because of the Quotient Rule.} \\
&= \frac{\exp{\left(x\right)}\exp{\left(y\right)}\exp{\left(x+y\right)}-\exp{\left(x\right)}\exp{\left(y\right)}\left(\exp{\left(x+y\right)}\right)}{\left(\exp{\left(x+y\right)}\right)^2} \\
&= 0 \tag*{By the same process, we find that $\frac{\partial F}{\partial y} = 0$ as well.} \\
& \tag*{Since $F'{\left(x,y\right)}$ is equal to $0$ with respect to both $x$ and $y$, it must be a constant function. What is the constant here?} \\
F\left(0,0\right) &= \frac{\exp{\left(0\right)}\exp{\left(0\right)}}{\exp{\left(0+0\right)}} \\
&= 1 \\
F{\left(x,y\right)} &= \frac{\exp{\left(x\right)}\exp{\left(y\right)}}{\exp{\left(x+y\right)}} \\
&= 1 \\
\exp{\left(x\right)}\exp{\left(y\right)} &= \exp{\left(x+y\right)} && \blacksquare
\end{align*}
[/eqn]
(2/?)
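(The same fact falls out of sympy in a couple of lines, and the partial derivatives of [math]F[/math] really do vanish; just a sanity-check sketch of mine, not part of the proof.)
[code]
import sympy as sp

x, y = sp.symbols('x y')

F = sp.exp(x) * sp.exp(y) / sp.exp(x + y)
print(sp.simplify(F))                                           # 1
print(sp.simplify(sp.diff(F, x)), sp.simplify(sp.diff(F, y)))   # 0 0
[/code]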

>> No.10515788

>>10515669
>>10515508
You gotta realize this is taking the derivative an infinite number of times. Obviously this is not a discrete version; it is just taking the nth derivative of the nth term and letting n approach infinity. There is no finiteness here.

>> No.10515820

>>10515235
>>10515411
>>10515706
From here, it is trivial to prove that [math]\exp{\left(xy\right)} = \left(\exp{\left(x\right)}\right)^{y}[/math], but I'll do it anyway.
[eqn]
\begin{align*}
\exp{\left(xy\right)} &= \exp{\left(\underbrace{x+x+x+\ldots+x}_{y\ \textrm{times}}\right)} \\
&= \underbrace{\exp{\left(x\right)}\cdot\exp{\left(x\right)}\cdot\exp{\left(x\right)}\cdot\ldots\cdot\exp{\left(x\right)}}_{y\ \textrm{times}} \\
\exp{\left(xy\right)} &= \left(\exp{\left(x\right)}\right)^{y} && \blacksquare
\end{align*}
[/eqn]
You could therefore write any [math]\exp{\left(x\right)}[/math] as [math]\left(\exp{\left(1\right)}\right)^{x}[/math]. So what is [math]\exp{\left(1\right)}[/math]?
Well, in the early 18th century, Brook Taylor reasoned that for any real or complex-valued function [math]f[/math] which is infinitely differentiable at a real or complex number [math]a[/math], you could express [math]f{\left(x\right)}[/math] as the power series [math]\displaystyle \sum_{k=0}^{\infty} \frac{f^{\left(k\right)}{\left(a\right)}}{k!}\left(x-a\right)^k=f{\left(a\right)}+f'{\left(a\right)}\left(x-a\right)+\frac{f''{\left(a\right)}}{2!}\left(x-a\right)^{2}+\frac{f'''{\left(a\right)}}{3!}\left(x-a\right)^{3}+\ldots[/math].
So let's set out to find [math]\exp{\left(x\right)}[/math] as one of these Taylor series, and from there find [math]\exp{\left(1\right)}[/math]. Since we only know [math]\exp{\left(0\right)}[/math], we'll have [math]a=0[/math] (which makes this a Maclaurin series as well!).
[eqn]
\begin{align*}
\exp{\left(x\right)} &:= \sum_{k=0}^{\infty} \frac{\exp^{\left(k\right)}{\left(0\right)}}{k!}\left(x-0\right)^k \\
&:= \sum_{k=0}^{\infty} \frac{x^{k}}{k!} \\
\exp{\left(1\right)} &= \sum_{k=0}^{\infty} \frac{1}{k!} \\
&=\ \textrm{a transcendental number about equal to 2.71828459045; let's call this number, I don't know, $e$} \\
\exp{\left(x\right)} &= \left(\exp{\left(1\right)}\right)^{x} \\
&= e^{x}
\end{align*}
[/eqn]
(3/?)
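(The partial sums of that series close in on the value very quickly; here is a tiny numeric sketch of mine using exact fractions.)
[code]
import math
from fractions import Fraction

s = Fraction(0)
for k in range(13):
    s += Fraction(1, math.factorial(k))
    if k in (2, 5, 12):
        print(k, float(s))      # 2.5, 2.71666..., 2.7182818284...
print(math.e)                   # 2.718281828459045
[/code]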

>> No.10515919

>>10515235
>>10515411
>>10515706
>>10515820
Fuck, I meant [math]2.718281828459045[/math]. Whatever.
So, really, there's nothing special to [math]e[/math]. All the definitions you'll see of it, like [math]\displaystyle e = \lim_{n\to\infty} \left(1 + \frac{1}{n}\right)^{n}[/math] or [math]\displaystyle e = \sum_{k=0}^{\infty} \frac{1}{k!}[/math], are just the cases of [math]\exp{\left(x\right)}[/math] where [math]x=1[/math].
Yes, it's the base of the natural logarithm, but that just means [math]\ln{\left(e\right)} = 1[/math], which just means [math]\ln{\left(\exp{\left(1\right)}\right)} = 1[/math], which is just the case of [math]\ln{\left(\exp{\left(x\right)}\right)} = x[/math] where [math]x=1[/math].

That last one can be proven without ever bringing up [math]e[/math].
[eqn]
\begin{align*}
\ln{\left(x\right)} &:= \int_{1}^{x} \frac{1}{t}\,dt
\tag*{We can define the natural logarithm of a real positive number $x$ as the area under the curve $y=\frac{1}{t}$ from $1$ to $x$, and then prove that it is the inverse of the exponential function of $x$.} \\
\ln'{\left(x\right)} &= \frac{1}{x} \\
\frac{d}{dx} \left(\ln{\left(\exp{\left(x\right)}\right)}\right) &= \ln'{\left(\exp{\left(x\right)}\right)}\cdot\exp'{\left(x\right)} \\
&= \frac{\exp{\left(x\right)}}{\exp{\left(x\right)}} \\
&= 1 \\
\ln{\left(\exp{\left(x\right)}\right)} &= \int{1\ dx} \\
&= x + C \\
\ln{\left(\exp{\left(0\right)}\right)} &= \ln{\left(1\right)} \\
&= \int_{1}^{1} \frac{1}{t}\,dt \\
0 + C &= 0 \\
\ln{\left(\exp{\left(x\right)}\right)} &= x && \blacksquare
\end{align*}
[/eqn]
I will be completely honest: I kind of lost track of my point here. I legitimately spent my entire day on this.
[math]\LaTeX[/math] is a potent drug.
((4/4)(?))
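(One more sanity check before I log off: the integral definition can be fed to sympy directly, with [math]e[/math] never mentioned; variable names are mine.)
[code]
import sympy as sp

t = sp.symbols('t', positive=True)
x = sp.symbols('x', real=True)

# ln defined as the area under 1/t from 1 to its argument, evaluated at exp(x).
ln_of_exp = sp.integrate(1 / t, (t, 1, sp.exp(x)))
print(sp.simplify(ln_of_exp))   # x
[/code]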

>> No.10515961

>>10515706
Your use of the multivariable version of "if the derivative is zero then the function is constant" should be pointed out, as this only holds on convex open subsets of product spaces; in this case it does hold, since the domain is the whole space.

>> No.10517199
File: 75 KB, 500x572, 121112-knot9.jpg

Leads to another maybe more interesting question:

Let

[math] f(x, n) := \left( 1 + \frac {x} {n} \right)^n [/math]

and note that

[math] \dfrac {d} {dx} f(x, n) = f(x,n) / \left( 1 + \frac {x} {n} \right)^1 [/math]

Now what is

[math] \dfrac {d^k} {dx^k} f(x, n) [/math]

?
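For concrete k and n, sympy suggests the pattern [math]\displaystyle \frac{d^k}{dx^k} f(x,n) = \frac{n!}{(n-k)!\, n^k} \left(1 + \frac{x}{n}\right)^{n-k}[/math]; here is the sketch I used (small sample values, and this is my reading of the output rather than a proof):
[code]
import sympy as sp

x = sp.symbols('x')
n, k = 7, 3                                   # small concrete values, nothing special about them

f = (1 + sp.Rational(1, n) * x)**n
dk = sp.diff(f, x, k)
expected = sp.factorial(n) / sp.factorial(n - k) / sp.Integer(n)**k * (1 + sp.Rational(1, n) * x)**(n - k)
print(sp.factor(dk))
print(sp.simplify(dk - expected))             # 0
[/code]
Note that k = 1 reproduces the derivative quoted above, since n!/((n-1)! n) = 1.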