/sci/ - Science & Math
File: 5 KB, 371x91, c81cc27d65651218b55941eebd72ae135bd2bb3c_png.png
No.8904025

Post nice limits.

>> No.8904027
File: 2 KB, 235x45, MSP23511i8i4efd9d31196600003e8gb54e6868f8ie.gif

e

>> No.8904287

These are my favourites

[eqn]\lim_{n\rightarrow \infty} 0 = 0[/eqn]
[eqn]\lim_{n\rightarrow \infty} m= m[/eqn]

[eqn]\lim_{n\rightarrow \infty} \pi = \pi[/eqn]
[eqn]\lim_{n\rightarrow \infty} e = e [/eqn]

>> No.8904340
File: 12 KB, 180x180, 516570_2.2.jpg

[math]\int x^{-1}{\mathrm d}x = \log(x)[/math]

and for non-zero z

[math]\int x^{z-1}{\mathrm d}x=\dfrac{x^{z}}{z}[/math]

and so

[math]\lim_{z\to 0}\left(\int x^{-1}x^{z}{\mathrm d}x\right)[/math]

isn't the logarithm - it actually doesn't even exist.

But we have

[math]x^z = {\mathrm e}^{z\,\ln(x)} = 1 + z\,\ln(x) + {\mathcal O}(z^2)[/math]

which we can write as

[math]\dfrac{x^z - 1}{z} = \ln(x) + {\mathcal O}(z)[/math]

or, in integral form,

[math]\int x^{-1+z}{\mathrm d}x - \dfrac{1}{z} = \int x^{-1} {\mathrm d}x + {\mathcal O}(z)[/math]

or

[math]\lim_{z\to 0}\left(\int x^{-1}x^{z}{\mathrm d}x - \dfrac{1}{z}\right) = \int x^{-1}x^0 {\mathrm d}x = \ln(x)[/math]

I.e. the discrepancy from switching the limit and the integral is exactly a simple counterterm.
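The counterterm claim is easy to sanity-check numerically. A small Python sketch (the name finite_part is just illustrative): the finite part [math](x^z-1)/z[/math] indeed approaches [math]\ln(x)[/math] as [math]z\to 0[/math].

```python
import math

def finite_part(x, z):
    # the z-dependent antiderivative value minus the 1/z counterterm
    return (x**z - 1.0) / z

x = 5.0
for z in (1e-2, 1e-4, 1e-6):
    print(z, finite_part(x, z))   # approaches log(5) = 1.60944...
```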

Similarly

[math]\lim_{z\to 0} \left( \sum_{n=1}^\infty n^1 (1+z)^n - (-1)^{1+1} \dfrac{1!}{\log(1+z)^{1+1}} \right) = -\dfrac{1}{1+1} \dfrac{1}{6}[/math]

[math]\lim_{z\to 0} \left( \sum_{n=1}^\infty n^3 (1+z)^n - (-1)^{3+1} \dfrac{3!}{\log(1+z)^{3+1}} \right) = -\dfrac{1}{3+1} \left(-\dfrac{1}{30}\right)[/math]

and so on...

>> No.8904347

in general
[math] \lim_{z\to 0} \left(\sum_{n=1}^\infty n^m (1+z)^n - (-1)^{m+1} \dfrac{m!}{\log(1+z)^{m+1}} \right) = -\dfrac{1}{m+1} B_{m+1} [/math]
with B_k the Bernoulli numbers
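The m = 1 case can be checked numerically for small negative z, where the sum actually converges (a truncated-sum sketch; the cutoff and function names are arbitrary choices):

```python
import math

def truncated_sum(m, z, N=200_000):
    # partial sum of n^m (1+z)^n; convergent for -1 < z < 0
    x = 1.0 + z
    return sum(n**m * x**n for n in range(1, N + 1))

def regularized_limit(m, z):
    # subtract the divergent counterterm (-1)^(m+1) m! / log(1+z)^(m+1)
    counterterm = (-1)**(m + 1) * math.factorial(m) / math.log(1.0 + z)**(m + 1)
    return truncated_sum(m, z) - counterterm

print(regularized_limit(1, -1e-3))   # close to -B_2/2 = -1/12 ≈ -0.0833
```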

>> No.8904580

[eqn]\lim_{n\to\infty} f\bigg(x_{n} - \frac{f\big(x_{n-1} - \frac{f(x_{n-2} - \frac{f(...)}{f'(...)})}{f'(x_{n-2} - \frac{f(...)}{f'(...)})}\big)}{f'\big(x_{n-1} - \frac{f(x_{n-2} - \frac{f(...)}{f'(...)})}{f'(x_{n-2} - \frac{f(...)}{f'(...)})}\big)}\bigg)=0[/eqn]

>> No.8905200

How nice that n divided by the n-th root of its factorial approaches e. A satisfying limit to look at.
It might go unnoticed that (n!)^(1/n) is actually the geometric mean of 1, ..., n.

>> No.8905932

>>8905200

[math] m(n):=(n!)^{1/n} [/math]

[math] \lim_{n\to \infty}\frac{n}{m(n)}={\rm e} [/math]

Plot[n/(n!)^(1/n), {n, -2, 5}]

neat
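A quick check of that ratio in Python, using lgamma to evaluate (n!)^(1/n) without huge integers (ratio is just an illustrative name):

```python
import math

def ratio(n):
    # (n!)^(1/n) via exp(lgamma(n+1)/n) -- the geometric mean of 1..n
    geo_mean = math.exp(math.lgamma(n + 1) / n)
    return n / geo_mean

for n in (10, 1000, 100_000):
    print(n, ratio(n))   # climbs slowly toward e = 2.71828...
```

The convergence is slow, roughly like log(n)/n, which the printed values show.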

>> No.8906085

>>8904287
Woah, that's crazy how constants like [math]e[/math] and [math]\pi[/math] just randomly pop up in math like that. This is one of the reasons why I'm so interested in STEM. I hope numberphile does a video on the equations you posted.

>> No.8906093
File: 10 KB, 300x168, IMG_1001.jpg

>>8906085
>numberphile

>> No.8906098

>>8904025
(sorry for being a brainlet in advance, i know i don't know much)

I'm having trouble proving that limit. My first idea was to use l'Hôpital's rule, but to do that I need a function I can take the derivative of, and I can't figure out how to describe the square root stuff as a function. Is this even the right approach?

>> No.8906107

Literally how?
[eqn] \lim_{n\to\infty}\left( \left(\sum_{k=1}^{n} \frac{1}{k} \right) - log(n) \right) = 1 - \int_{1}^{\infty} \frac{ t - \lfloor{t}\rfloor }{t^2}dt[/eqn]
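The two sides can be compared numerically (the integral is evaluated here with the elementary antiderivative of (t-k)/t² on each unit interval; a sanity check, not a proof):

```python
import math

n = 10**5
harmonic = sum(1.0 / k for k in range(1, n + 1))
lhs = harmonic - math.log(n)

# right-hand side: 1 minus the integral, built from the exact per-interval piece
# int_k^{k+1} (t - k)/t^2 dt = log((k+1)/k) - 1/(k+1)
integral = sum(math.log((k + 1) / k) - 1.0 / (k + 1) for k in range(1, n))
rhs = 1.0 - integral

print(lhs, rhs)   # both ≈ 0.57722 (the Euler–Mascheroni constant)
```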

>> No.8906121

>>8906098
You definitely, absolutely, cannot use l'Hôpital's rule here.

>> No.8906156

>>8906121
how do i become unretarded

>> No.8906182

>>8904340
bu what about -1/z going to minus infinity?

>> No.8906190

>>8906156
Probably study sequences. The first thing you should learn is that most sequences need to be tackled case by case by studying their intrinsic properties. A good analysis textbook will have a couple of nice examples of various proofs for really specific formulas.

After that you should research Viete's formula, as the formula in the OP is simply a reformulation of it where you have to use a lot of trig identities. I tried to look for a proof to link but all of them are locked behind journal paywalls.

>> No.8906448
File: 74 KB, 1180x708, 0359214d013a253f92def8984de49.jpg

>>8906182
At what step? And why are you asking? Note, in any case, that the Taylor expansion I use is around z=0.

>>8906107
[math] \int_{n-1}^m f(\lfloor{t}\rfloor) \,dt = \sum_{k=n}^m f(k) [/math]
and the log is defined as the int over 1/t.

>>8906098
may be a re-writing of Vieta's formula
https://proofwiki.org/wiki/Vieta's_Formula_for_Pi

It's funky, though. The expression with the positive 2's can be read as iteration of
[math] x \mapsto \sqrt{ 2+ x } [/math]
applied to [math] x_0=0 [/math]:
[math] x_0 \mapsto I(1) =\sqrt{ 2+x_0} [/math]
[math] \sqrt{ 2+x_0} \mapsto I(2) = \sqrt{ 2+ \sqrt{ 2+x_0} } [/math]
[math] \sqrt{ 2+ \sqrt{ 2+x_0} } \mapsto I(3) = \sqrt{ 2+ \sqrt{ 2+ \sqrt{ 2+x_0} } } [/math]
...
and after k applications you might wanna call this I(k).
Fixed points often swallow the initial condition. Here the limit corresponds to the solution of
[math] x = \sqrt{ 2+ x } [/math]
which is [math]I(\infty) = 2[/math].

The OP's claim then is
[math] I(k) - I(\infty) \sim \left( \dfrac{\pi}{2^k} \right)^2 [/math]
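Since [math]I(k) = 2\cos(\pi/2^{k+1})[/math] with this seed, one expects [math]2 - I(k) \approx \pi^2/4^{k+1}[/math]; a quick numerical check of that [math]4^{-k}[/math] rate (the constant comes out as [math]\pi^2/4[/math] with this indexing):

```python
import math

def I(k):
    # k-fold application of x -> sqrt(2 + x), seeded with x_0 = 0
    x = 0.0
    for _ in range(k):
        x = math.sqrt(2.0 + x)
    return x

for k in (5, 10, 15):
    print(k, (2.0 - I(k)) * 4.0**k)   # settles near pi^2/4 = 2.4674...
```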

>>8904025
Here some more methods to compute limits via Cesaro and Abel. Those are methods saying
>If the limit exists, then it's Y.

So if a sequence [math](x_n)[/math] converges, then, by definition, [math]|x_k - x_{k-1}|[/math] tends to zero with growing k, i.e. the approximation [math]x_k \approx x_{k-1}[/math] gets more accurate. This implies [math]x_k\approx\tfrac{1}{2}(x_{k-1}+x_k)[/math] gets more accurate with k, and so does [math]x_k\approx\tfrac{1}{3}(x_{k-2}+x_{k-1}+x_k)[/math]. In fact

[math]x_k \approx \dfrac{1}{d_k}\sum_{j=k-(d_k-1)}^kx_j[/math]

gets better with growing [math]k[/math] for any suitable window size [math]d_k[/math]:
on the right hand side you take the average of more and more similar terms. The limits [math]\lim_{k\to\infty}[/math] of both sides are the same.

>> No.8906457

[math]I(\infty)=2[/math], I meant

>>8906448
Adding upon the summation method... the right-hand side may exist even if the left-hand side doesn't. For example, [math]x_k=\frac{1}{2}(1+(-1)^k)[/math], giving the sequence [math]0,1,0,1,0,1,0,\dots[/math], has no limit, but

[math] \sum_{j=1}^{m}1=m[/math]

so that

[math]\dfrac{1}{2m}\sum_{j=1}^{2m}x_j=\dfrac{1}{2m}\sum_{j=1}^{m}(0+1)=\dfrac{1}{2}[/math]

and

[math]\dfrac{1}{2m+1}\sum_{j=1}^{2m+1}x_j=\dfrac{m}{2m+1}=\dfrac{1}{2}\left(1-\dfrac{1}{2m+1}\right)[/math]

and so

[math]\lim_{k\to\infty}^\text{Cesaro}x_k = \lim_{k\to\infty}\dfrac{1}{k}\sum_{j=1}^{k}x_j = \dfrac{1}{2}[/math]

The point is that if the standard [math]\lim[/math] exists, then this Cesaro limit agrees AND it also has a finite result for some sequences for which the standard [math]\lim[/math] does not.

A similar argument, with telescoping differences instead of an arithmetic mean, starts from

[math]\lim_{k\to\infty}x_k = x_a + (x_{a+1}-x_{a}) + (x_{a+2}-x_{a+1}) + (x_{a+3}-x_{a+2}) + \dots[/math]

and leads to

[math]\lim_{k\to\infty}^\text{Abel}x_k = \lim_{t\to{}1^-}(1-t)\sum_{k=0}^\infty x_k t^k[/math]

And e.g. our above example

[math]x_k=\frac{1}{2}(1+(-1)^k)[/math]

leads to

[math](1-t)\sum_{k=0}^\infty x_k t^k = \dfrac{1}{1+t}[/math]

and thus the Abel limit [math]\dfrac{1}{2}[/math], as in the Cesaro case.
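Both summation methods are easy to try numerically on that sequence (a rough sketch; the truncation points are arbitrary):

```python
# Cesaro and Abel evaluation of x_k = (1 + (-1)^k)/2, i.e. 0,1,0,1,...
seq = [(1 + (-1)**k) / 2 for k in range(1, 100_001)]

cesaro = sum(seq) / len(seq)   # arithmetic mean of the first 10^5 terms
t = 0.9999                     # Abel: evaluate (1-t) * sum x_k t^k near t = 1
abel = (1 - t) * sum(x * t**k for k, x in enumerate(seq, 1))
print(cesaro, abel)   # both ≈ 0.5
```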

>> No.8906460

>>8906448
(the floor equation is for f's that vary appropriately in an interval [k-1, k])

>> No.8906470

>>8906448
>[math] \int_{n-1}^m f(\lfloor{t}\rfloor) \,dt = \sum_{k=n}^m f(k) [/math]

I'm the guy you're replying to there. I'm very sorry, but could you elaborate on how I'd use that to sketch a proof of the identity I posted?

Just for background, I found that identity some weeks ago while reading Apostol's analytic number theory. It is presented without proof and used to derive asymptotic formulas for the Riemann zeta function.

>> No.8906526
File: 48 KB, 823x558, Screen Shot 2017-05-13 at 19.28.33.png

>>8906470
The numerical value of the left hand side is the definition of the Euler–Mascheroni constant
https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant

With [math]f(t) := \lfloor t\rfloor[/math] we have

[math] -\frac{t-f(t)}{t^2} = \frac{f(t)}{t^2} - \frac{1}{t}[/math]

That the [math]\frac{1}{t}[/math] terms on both sides agree is clear from the very definition of the logarithm
[math] \log(k) := \int_{1}^k \frac{1}{t}\, dt [/math]

Remains to show the identity
[math] \int_{1}^\infty \left( \dfrac{t}{f(t)} - \dfrac{f(t)}{t} \right) \dfrac{dt}{t} = 1 [/math]
pic related
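That last integral can also be checked by brute-force quadrature (midpoint rule; truncating at t = 1000 loses roughly 1/1000 of the value):

```python
import math

def integrand(t):
    # (t/floor(t) - floor(t)/t) / t
    f = math.floor(t)
    return (t / f - f / t) / t

h = 0.001
n = 999_000   # (1000 - 1)/h midpoint steps on [1, 1000]
total = sum(integrand(1.0 + (i + 0.5) * h) * h for i in range(n))
print(total)   # about 0.999 = 1 - 1/1000, consistent with the identity
```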

>> No.8906558
File: 27 KB, 568x191, Screen Shot 2017-05-13 at 19.46.31.png

>>8906526
It's btw interesting to note that

[math] \lim_{n\to\infty} \left( \sum_{k=1}^n f(k) - \int_1^n f(k)\,dk \right) [/math]

is finite whenever f is decreasing.
So e.g. the expression for [math] n^{-s} [/math] is well behaved also for s<1

>> No.8906583

[eqn]\lim_{n\to\infty}\sum_{k=0}^n k = \frac{-1}{12}[/eqn]

>> No.8906587

>>8906583
deleter this

>> No.8906646
File: 52 KB, 766x706, Screen Shot 2017-05-13 at 20.25.17.png

>>8906583
I posted that one above, pic related.

Also related is

[math] \left(\dfrac{z}{\ln(1+z)}\right)^n=1+\dfrac{n}{2}z+(3n-5)\dfrac{n}{2}\dfrac{1}{12}z^2+\dfrac{n}{2}\dfrac{(n-2)(n-3)}{24}z^3+\dots [/math]

which you can use to generate a whole bunch of limits of the form

[math] c_n = \lim_{z\to 1} \left( \dfrac{p_n(z)}{q_n(z)} - \dfrac{1}{\ln(z)^n} \right) [/math]

>> No.8907099
File: 56 KB, 225x534, 1493591290467.gif

>>8904025
These are not limits, but still very nice.

[eqn]2^2+3^2+5^2+7^2+11^2+13^2+17^2=666[/eqn]
[eqn]\sum\limits_{n=1}^{666}2n\left(-1\right)^{n} = 666[/eqn]
[eqn]666\prod\limits_{p|666}\left(1-\frac{1}{p}\right)=6\times6\times6[/eqn]
[eqn]\phi = -2\sin\left(666^{\circ}\right)\quad \text{ Where }\phi \text{ is the golden ratio}[/eqn]

>> No.8907118
File: 175 KB, 1234x791, Screen Shot 2017-05-14 at 00.18.16.png

I just worked out the ones marked in green in pic related, but maybe I'm just adding opaque information

>> No.8907132

This one is comfy.
[eqn]\forall \theta \,\in\, \left]0,\, 2\,\pi\right[,\, \sum_{n \,=\, 1}^\infty \frac{\sin \left(n\,\theta\right)}{n} \,=\, \frac{\pi \,-\, \theta}{2}[/eqn]
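A partial-sum check at, say, [math]\theta = 1[/math] (convergence is only O(1/N), so many terms are needed):

```python
import math

theta = 1.0
N = 200_000
partial = sum(math.sin(n * theta) / n for n in range(1, N + 1))
print(partial, (math.pi - theta) / 2)   # both ≈ 1.0708
```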

>> No.8907160
File: 8 KB, 258x196, stressed_dyke.jpg

>>8907132
semi-related

[math] \int_{0}^\infty \dfrac{\sin(s\,x)}{x} \dfrac{\sin(t\,x)}{x} {\mathrm d}x = \dfrac{\pi}{4}(|s+t|-|s-t|) [/math]

also

[math] \sum_{n=a}^{b}\sin(2kn)=\dfrac{\sin (k (b-a+1)) \sin (k (a+b))} {\sin(k)} [/math]

Unrelated:

I conjecture

[math] \Gamma(s)\sum_{n=0}^{M-1}\, \dfrac{Z^n}{(n+A)^s} = \int_0^\infty x^{s-1}e^{-A\,x}\dfrac{1-(Z\,e^{-x})^M}{1-(Z\,e^{-x})}{\mathrm d}x [/math]

>> No.8907276

[eqn]\lim_{n\rightarrow\infty} \prod_{k=1} ^n \left(1+\frac{k}{n}\right)^n=\lim_{n\rightarrow\infty}\left(1-\frac{1}{12n}\right)^n[/eqn]

>> No.8907288

>>8907276
Things at infinity are really weird

>> No.8907294

>>8907276
>[math] -\frac{1}{12} [/math]

I call bullshit.

>> No.8907304

>>8907276

memes aside, you got identities like

[math] \lim_{n\rightarrow\infty} \prod_{k=1} ^n \left(1+\frac{1}{2^{k}n}\right)^n=\lim_{n\rightarrow\infty}\left(1+\frac{1}{n}\right)^n [/math]
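A numerical check of that product identity (log1p keeps precision for the tiny factors; n is both the exponent and the number of factors):

```python
import math

def log_product(n):
    # log of prod_{k=1}^{n} (1 + 1/(2^k n))^n; log1p is accurate for tiny arguments
    return n * sum(math.log1p(1.0 / (2**k * n)) for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, math.exp(log_product(n)))   # approaches e = 2.71828...
```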

>> No.8907329

>>8906156
You can only use l'hopital if you have a function divided by another function.

[eqn]\lim_{x\to anything} \frac{f(x)}{g(x)}[/eqn]

And it also must be in indeterminate form when you take the limits separately, so equal to 0/0 or [math]\frac{\infty}{\infty}[/math].

>> No.8907371

>>8907329
Yea, I was thinking of making f(x) = 2^x and defining a clever function g(x) equal to the reciprocal of the square root expression, so I could apply l'Hôpital's to f(x)/g(x), but I see now that was way off.

>> No.8908110

>>8907099
this is some good stuff

>> No.8908125

Not a limit, but proved using limits

[eqn] \frac{n}{2^{n-1}} = \sin\Big(\frac{\pi}{n}\Big)\sin\Big(\frac{2\pi}{n}\Big)\sin\Big(\frac{3\pi}{n}\Big)\cdots\sin\Big(\frac{(n-1)\pi}{n}\Big) [/eqn]
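Easy to confirm numerically:

```python
import math

def sine_product(n):
    # prod_{k=1}^{n-1} sin(k pi / n)
    p = 1.0
    for k in range(1, n):
        p *= math.sin(k * math.pi / n)
    return p

for n in (5, 10, 20):
    print(n, sine_product(n), n / 2**(n - 1))   # the two columns agree
```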

>> No.8908337

>>8908110
though stuff like

[math] \sum\limits_{n=1}^{m}2n\left(-1\right)^{n} = m [/math]

holds for every even number m anyway

>> No.8908604

>>8908125
How would this be proved using limits? It looks more like induction to me but what would I know.

>> No.8908756
File: 7 KB, 254x198, 271.jpg

>>8908337
yea that one isn't too special, I admit.

>> No.8908768

[math]\displaystyle{\lim_{x \rightarrow \infty} \frac{\pi(x) \cdot \log(x)}{x} = 1} [/math]
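A sieve makes this easy to watch numerically; the convergence is notoriously slow (the ratio is still around 1.08 at 10^6):

```python
import math

def prime_pi(n):
    # sieve of Eratosthenes: count primes <= n
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0] = is_prime[1] = 0
    for p in range(2, int(n**0.5) + 1):
        if is_prime[p]:
            is_prime[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(is_prime)

for x in (10**4, 10**5, 10**6):
    print(x, prime_pi(x) * math.log(x) / x)   # drifts slowly toward 1
```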

>> No.8908783

>>8908768
what

>> No.8908788

>>8908783
Prime number theorem motherfucker.

Complex analysis was invented for this shit

>> No.8908833
File: 981 KB, 2420x3270, november-2016-vogue-cover-07.jpg

>>8908756
I didn't want to be an asshole...

Take those as an apology:

[math] \prod_{n=0}^{\infty}\left(1 + x^{2^n}\right) = \frac{1}{1-x}[/math]

[math] \prod_{m=1}^\infty \left( 1 - q^{2m}\right)\left( 1 + w^{2}q^{2m-1}\right)\left( 1 + w^{-2}q^{2m-1}\right)= \sum_{n=-\infty}^\infty w^{2n}q^{n^2} [/math]

>>8908788
Riemann's paper came completely out of the blue and it's really his only paper on number theory - and given that he was already an expert in complex integral transforms, the claim is questionable.

There is an English translation of the paper online btw., and to me it's the greatest of all
http://www.claymath.org/sites/default/files/ezeta.pdf

>> No.8908868
File: 20 KB, 748x640, Checked.png

>>8908833
Oh very kind of you

>> No.8908910

>>8904580
For what sequence [math]\left(x_n\right)[/math]?

>> No.8908977
File: 299 KB, 641x667, a_vest.png

>>8908910
If I had to guess, I think he posted Newton's method for finding zeroes of a function, which has some continuity requirements.
https://en.wikipedia.org/wiki/Newton%27s_method
Meaning the sequence consists of the iterates of the recursion, and the starting point is arbitrary.

>>8908868
Here some more

[math] \sin(z) = z\prod_{n=1}^\infty \left(1-\left(\frac{z}{n\,\pi}\right)^2\right) [/math]

it's awesome, I agree

[math] \dfrac{1}{\sin(z)} = \dfrac{1}{z} + 2z\sum_{n=1}^\infty (-1)^n \dfrac{1}{z^2 - (n\,\pi)^2} [/math]

>> No.8909635
File: 6 KB, 225x226, 225px-Categorical_pullback_(expanded).png

while we're at sums, I think in Set

[math] |X\times_{f,g}Y|=\sum_{z\in{}Z}|f^{-1}(z)|\cdot{}|g^{-1}(z)| [/math]

implying any such sum

[math] \sum_{k\in{}K} n_k\cdot{}m_k [/math]

can be understood as the result of a cardinality computation

>> No.8909640

[math] \sum_{k\in{}K} n_k\cdot{}m_k [/math]

>> No.8910297

Here are ones that are cool, the nice thing is that you can prove some of these with elementary mathematics (with a tiny bit of formalising help)

[math]\sum_{n=0}^{\infty} \frac{1}{(2n+1)\binom{2n}{n}} = \frac{2\pi}{3\sqrt{3}}[/math]

[math]\sum_{n=0}^{\infty} \frac{1}{\binom{2n}{n}} = \frac{2}{27}(18 + \pi \sqrt{3})[/math]

Many other similar results pop up, for a start, I'd say to have a look at the following integrals:

[math]\int_0^1 x^n(1-x)^n \ dx [/math] (known as the Beta(n+1,n+1) function)

[math]\int_0^{\pi/2}\sin^{2n+1}x \ dx [/math]

[math]\int_0^{\pi/2}\sin^{2n}x \ dx [/math]
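Both sums converge like 4^{-n}, so a few dozen terms already match the closed forms to machine precision (math.comb evaluates the binomials exactly):

```python
import math

s1 = sum(1 / ((2*n + 1) * math.comb(2*n, n)) for n in range(60))
s2 = sum(1 / math.comb(2*n, n) for n in range(60))
print(s1, 2 * math.pi / (3 * math.sqrt(3)))           # both ≈ 1.2092
print(s2, (2 / 27) * (18 + math.pi * math.sqrt(3)))   # both ≈ 1.7364
```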

>> No.8910301

Here is another that can be worked out by playing with geometric series and integrals:

[math]\sum_{n=1}^{\infty} \frac{\sin nx}{n} = \frac{\pi}{2} - \frac{x}{2} \qquad (0 < x < 2\pi)[/math]

>> No.8911331
File: 49 KB, 426x635, Screen Shot 2017-05-15 at 14.16.57.png

>>8910301
That was actually posted here
>>8907132

>>8910297
How did you prove em?

When I see something like sqrt(k) in a result, I'm immediately tempted to consider the summands as coefficients in a series expansion and check what the "origin function" is, pic related.