===== Minus twelve . Note =====

==== Note ====

https://upload.wikimedia.org/wikipedia/commons/thumb/4/4b/Geometric_progression_convergence_diagram.svg/350px-Geometric_progression_convergence_diagram.svg.png

There are theories in math that give meaning to infinite sums, and the standard one, analysis (or, to some extent, calculus), has a million practical applications, in particular in physics and engineering. The picture above demonstrates the claim

$$1+\dfrac{1}{2}+\dfrac{1}{4}+\dfrac{1}{8}+\dots = 2$$

which can be proven in analysis. Here's another claim

$$1+\dfrac{1}{2^2}+\dfrac{1}{3^2}+\dfrac{1}{4^2}+\dots = \dfrac{\pi^2}{6} \approx 1.64493$$

This goes by the name //Basel problem// and was proven, in a context similar to what is analysis today, by Leonhard Euler in 1735. It is interesting in various respects. In particular, it shows how an infinite collection of rational numbers can sum up to an irrational number!

Well, then. You may or may not have heard of the claim

$$"1+2+3+4+\dots = - \dfrac{1}{12}"$$

An infinite sum of natural numbers is supposed to be a negative rational number?

Let's take a step back.

$1=1$

$1+2=3$

$1+2+3=6$

$1+2+3+4=10$

$1+2+3+4+5=\dots$

Here we take a finite sum of numbers and, in each step, add another positive number, resulting in a longer sum. In the roughly 150 year old theory of Peano arithmetic (to which I'll soon come in my series), we can prove that each such sum is bigger than the previous one. In particular, if we add only positive numbers, we can prove that the sum will always be positive.

However, we can't use Peano arithmetic to prove a statement about infinite sums.
An infinite sum is a creature of, for example, the theory of analysis. And indeed, in calculus the claim with $-\frac{1}{12}$ is false! In particular, it's easy to show that

$$\dfrac{1}{1+2+3+4+\dots} = 0$$

If you add more and more numbers in the denominator, the resulting sequence of rational numbers will come closer and closer to zero, which is exactly what this says. In particular, this reciprocal of the sum is definitely not $-12$!

Why would it be $-\dfrac{1}{12}$, of all numbers? Well, the fact is that this number isn't random.

Consider this little gimmick: The difference between the integral and the sum of a smooth function is given by a very particular sum that involves $-\dfrac{1}{12}$ at the second place. It starts out as

$$\int_a^b f(n)\,{\mathrm d}n = \sum_{n=a}^{b-1} f(n) + \left(\lim_{x\to b}-\lim_{x\to a}\right)\left(\dfrac{1}{2}-\dfrac{1}{12}\dfrac{d}{dx}+\dots\right)f(x)$$

e.g.

$$\int _m^n f(x)~{\rm d}x=\sum _{i=m}^n f(i)-\frac 1 2 \left( f(m)+f(n) \right) -\frac 1{12}\left( f'(n)-f'(m)\right) + \frac 1{720}\left( f'''(n)-f'''(m)\right) + \cdots.$$

and if you want to see a full version, check out the 300 year old //Euler–Maclaurin formula//. The building blocks of many functions are monomials $f(n)=n^{k-1}$, and for those the formula is particularly simple, because all higher derivatives vanish. The formula then tells us that

$$\int_a^b n^{k-1}\,{\mathrm d}n=\frac{b^k}{k}-\frac{a^k}{k}$$

equals $\sum_{n=a}^{b-1} n^{k-1}$ minus some lower order corrections.

So as an example, with $k=3,a=2,b=4$ you get the identity

$$\frac{1}{3}(4^3-2^3)=(2^2+3^2)+\frac{1}{2}(4^2-2^2)-\frac{1}{12}\,2\,(4^1-2^1)$$

and there are literally infinitely many identities involving $-\frac{1}{12}$ because of this formula. In case you're wondering, both sides of the equation above simplify to $\tfrac{56}{3}$.
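The $k=3$ example is exact, so it can be checked with rational arithmetic. A quick sketch (in Python rather than the Mathematica used elsewhere in this note):

```python
from fractions import Fraction

# left side: the integral, (4^3 - 2^3)/3
lhs = Fraction(4**3 - 2**3, 3)

# right side: the sum over the base points n = 2, 3 plus the two boundary corrections
rhs = (2**2 + 3**2) + Fraction(1, 2) * (4**2 - 2**2) - Fraction(1, 12) * 2 * (4**1 - 2**1)

print(lhs, rhs)  # 56/3 56/3
```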
Okay, but why $-\frac{1}{12}$? I'd like to tell you something about the theory of combinatorics at this point, but we'll only get there much later in my series.
So besides just making you curious, the goal of this post is to show an innocent way in which you can still tie $-\frac{1}{12}$ to a very related sum. Sadly, I've already written a whole lot and I don't want to make this boring either. So I'm gonna assume you're familiar with the logarithmic function and go from there. As a side note, tell me below which parts of this are tough and which work well.

So the geometric series for a real number $z$ in the interval $(0,1)$ is

$$\sum_{n=0}^\infty z^n = \dfrac{1}{1-z}$$

We've seen this above, actually, in the special case of $z=\frac{1}{2}$. Indeed, $\frac{1}{1-1/2}=\frac{1}{1/2}=2$ and the first formula in this post was

$$1+\dfrac{1}{2}+\dfrac{1}{4}+\dfrac{1}{8}+\dots = 2$$

The smooth analogue of the sum with $z$ is

$$\int_0^\infty z^t\,{\mathrm d}t=\int_0^\infty {\mathrm e}^{t\log(z)}\,{\mathrm d}t=\frac{1}{-\log(z)}$$

where I used $\int_0^y {\mathrm e}^{-a\cdot{}x}\,{\mathrm d}x = \frac{1}{-a}({\mathrm e}^{-a\,y}-1)$. Note that the logarithm is negative for $z$ in the interval $(0,1)$.

Applying the derivative $z\frac{d}{dz}$ to the first yields

$$z\dfrac{d}{dz}\dfrac{1}{1-z} = \sum_{n=0}^\infty n \, z^n$$

and applying it to the second yields

$$z\dfrac{d}{dz}\dfrac{1}{-\log(z)} = \dfrac{1}{\log(z)^2}$$

How do those two differ?
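Before expanding, both closed forms can be sanity-checked numerically. A small sketch (Python, not part of the original note), truncating the sum and using a plain Riemann sum for the integral:

```python
import math

z = 0.5

# discrete: sum_{n>=0} n z^n, truncated, vs. z d/dz 1/(1-z) = z/(1-z)^2
s = sum(n * z**n for n in range(400))
print(s, z / (1 - z)**2)             # both ~ 2.0

# smooth: int_0^oo t z^t dt, via a crude Riemann sum, vs. 1/log(z)^2
h = 0.001
integral = h * sum(k * h * z**(k * h) for k in range(1, 200_000))
print(integral, 1 / math.log(z)**2)  # both ~ 2.081
```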
We have the Taylor expansion for the logarithm

$$\log(1+r) = -\sum_{k=1}^{\infty}\dfrac{1}{k}(-r)^k = r - \dfrac{1}{2}r^2 + \dfrac{1}{3}r^3 + {\mathcal O}(r^4)=$$
$$= r \left(1 - \dfrac{1}{2}r + \dfrac{1}{3}r^2 + {\mathcal O}(r^3) \right)$$

and using the geometric series expansion, we get

$$\dfrac {1} { \log(1+r)} = \dfrac {1} {r} \dfrac {1} {1 - \dfrac{1}{2}r + \dfrac{1}{3}r^2 + {\mathcal O}(r^3) } =$$
$$= \dfrac {1} {r} \left( 1 + \dfrac{1}{2} r - \dfrac{1}{2\cdot 2\cdot 3} r^2 + {\mathcal O}(r^3) \right) = \dfrac {1} {r} + \dfrac{1}{2} - \dfrac{1}{12} r + {\mathcal O}(r^2)$$

With $r=z-1$ we see

$$\dfrac {1} { \log(z)} = - \dfrac {1} {1-z} + \dfrac{1}{2} - \dfrac{1}{12} (z-1) + {\mathcal O}((z-1)^2)$$

and taking the derivative, we get

$$\sum_{n=0}^\infty n \, z^n - \dfrac {1} { \log(z)^2} = - \dfrac{1}{12} + {\mathcal O}((z-1))$$

For $z$ approaching $1$, which is the upper bound of the interval $(0,1)$ that we considered, this says that $1+2+3+4+\cdots$ minus a $\dfrac {1} { \log(z)^2}$ divergence is exactly $- \dfrac{1}{12}$. That's a number symptomatic of the difference between discrete and smooth expressions. We can visualize this:

Here the blue and the red functions are $\sum_{n=0}^\infty n \, z^n$ and $\frac {1} { \log(z)^2}$, and both diverge at $z=1$. However, they diverge in a way that their difference converges exactly to $-\frac{1}{12}$.

You can use the computer or a site like WolframAlpha to verify this result.

Tell me if that was interesting and if you'd like to hear more about this.

-----

>>7926966
Well a=1 and b=p(1).
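The claimed limit above can also be probed numerically; a minimal sketch (Python, separate from the quoted posts), using the closed forms $\sum_{n} n\,z^n = z/(1-z)^2$ and $1/\log(z)^2$:

```python
import math

def discrete(z):
    # sum_{n>=0} n z^n in closed form: z d/dz 1/(1-z) = z/(1-z)^2
    return z / (1 - z)**2

def smooth(z):
    return 1 / math.log(z)**2

for z in (0.9, 0.99, 0.999):
    print(z, discrete(z) - smooth(z))  # approaches -1/12 = -0.0833... as z -> 1
```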
>>7927364
The geometric series for $z$ in $(0,1)$ is

$\sum_{n=0}^\infty z^n = \dfrac {1} {1-z}$

The smooth analogue is

$\int_0^\infty z^n \, dn = \int_0^\infty e^{ n \, \log (z) } \, dn = \dfrac {1} {-\log(z)}$

Applying $z \dfrac {d} {dz}$ to the first yields

$z \dfrac {d} {dz} \dfrac {1} {1-z} = \sum_{n=0}^\infty n \, z^n$

and applying it to the second yields

$z \dfrac {d} {dz} \dfrac{1} {-\log(z)} = \dfrac {1} { \log(z)^2 }$

How do those two differ?

We have the Taylor expansion

$\log(1+r) = -\sum_{k=1}^{\infty} \dfrac {1} {k} (-r)^k = r - \dfrac {1} {2} r^2 + \dfrac {1} {3} r^3 + O (r^4) = r \left( 1 - \dfrac {1} {2} r + \dfrac {1} {3} r^2 + O (r^3) \right)$

Using the geometric series expansion, we get

$\dfrac {1} { \log (1+r) } = \dfrac {1} {r} \dfrac {1} {1 - \dfrac {1} {2} r + \dfrac {1} {3} r^2 + O (r^3) } = \dfrac {1} {r} \left( 1 + \dfrac {1} {2} r - \dfrac {1} { 2 \cdot 2 \cdot 3 } r^2 + O (r^3) \right) = \dfrac {1} {r} + \dfrac {1} {2} - \dfrac {1} {12} r + O (r^2)$

With $r=z-1$ we see

$\dfrac {1} { \log(z) } = - \dfrac {1} {1-z} + \dfrac {1} {2} - \dfrac {1} {12} (z-1) + O ((z-1)^2)$

and taking the derivative, we get

$\sum_{n=0}^\infty n \, z^n - \dfrac {1} { \log(z)^2 } = - \dfrac {1} {12} + O ((z-1))$

For $z$ to $1$ this says $1+2+3+4+\dots$ minus a $1/\log^2$ divergence is $- \dfrac {1} {12}$.

For the gathered data bits $z^n$, that limit exists for neither the operation $\sum_{n=0}^\infty$ nor $\int_0^\infty dn$, but it does for $\sum_{n=0}^\infty - \int_0^\infty dn$.

The space(time)s your physical field theories are defined on generally fuck with you, but there are often such systematic renormalizations of your physical expressions, and for the addition of integers it relates to exactly that relation.
-----

We're going to investigate their divergent behaviour at $z=1$.

As a first remark, consider the reciprocal expressions

$\dfrac{1}{\sum_{n=0}^\infty z^n}=1-z$

$\dfrac{1}{\sum_{n=0}^\infty n\,z^n}=\left(\dfrac{1}{z}+z\right)-2$

At $z=1$ they both vanish, of course, but the second function also diverges at $z=0$. This is also the behaviour of $\log(z)$ or, for that matter, of any $\log(z)^k$ for nonzero $k$.
The next paragraph has the purpose of motivating why to look at the divergent behaviour of the $\log$ function.

-----

http://math.stackexchange.com/questions/1327812/limit-approach-to-finding-1234-ldots/1332159#1332159

-----

$1+1+1+1+\dots = -\tfrac{1}{2}$

$1+2+3+4+\dots = -\tfrac{1}{12}$

<code>
Sum[(1 - d)^k, {k, 0, n - 1}] - Integrate[(1 - d)^k, {k, 0, m}]

Series[1/d + 1/Log[1 - d], {d, 0, 2}]
</code>

-----

**Precursor:** I can point out the occurrence of $-1/12$ in a completely classical computation involving a smoothing process that's not analytic continuation. The only thing we'll use is expansions like $\frac{1}{1-x}=1+x+x^2+{\mathcal O}(x^3)$, where they are valid in analysis.

To do so, we first investigate the divergence behaviour of the logarithmic function. The logarithmic function is relevant because the exponential function is relevant. (The exponential function is relevant e.g. because a shift along the $x$-axis can be written as $\exp(a\frac{{\mathrm d}}{{\mathrm d}x})$.)

At $z=0$, the function $\log(z)$ has a divergence. At $z=1$ it is zero and its slope is $1$. That is, $\log(1+r)=r+{\mathcal O}(r^2)$ and thus

$\dfrac{r}{\log(1+r)}=1+a\,r+b\,r^2+{\mathcal O}(r^3)$

or

$\dfrac{1}{\log(1+r)}-\dfrac{1}{r}=a+b\,r+{\mathcal O}(r^2)$

or

$\dfrac{1}{1-x}+\dfrac{1}{\log(x)}=a+b\,(x-1)+{\mathcal O}((x-1)^2)$.

Below we show $a=\frac{1}{2}$ and $b=-\frac{1}{12}$.
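The claimed values of $a$ and $b$ can be estimated numerically before doing any derivation; a small Python sketch:

```python
import math

def g(r):
    # 1/log(1+r) - 1/r = a + b r + O(r^2)
    return 1 / math.log(1 + r) - 1 / r

print(g(1e-4))                       # ~ a = 0.5
print((g(0.01) - g(-0.01)) / 0.02)   # central difference, ~ b = -1/12 = -0.0833...
```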
Derivation: Because computing Taylor series at the origin is particularly simple, let's for now make a substitution to $r$ given by $r:=z-1$ and consider

${\log(1+r)} = -\sum_{k=1}^{\infty}\dfrac{1}{k}(-r)^k = r - \dfrac{1}{2}r^2 + \dfrac{1}{3}r^3 + {\mathcal O}(r^4) = r \left(1 - \dfrac{1}{2}r + \dfrac{1}{3}r^2 + {\mathcal O}(r^3) \right)$

CHECK

So this says that where the $\log$-function passes through zero (which is at $r=0$, i.e. $z=1$), it behaves, to first order, linearly: ${\log(1+r)} \propto r$. The number $\dfrac{1}{2\cdot 2\cdot 3}=\dfrac{1}{12}$ first pops up if we quantify the difference to the linear function in the reciprocal form, up to second order:

$\dfrac {1} { \log(1+r)} = \dfrac{1}{r} \dfrac {1} {1 - \dfrac{1}{2}r + \dfrac{1}{3}r^2 + {\mathcal O}(r^3) } = \dfrac{1}{r} \left(1 + \dfrac{1}{2} r - \dfrac{1}{12} r^2 + {\mathcal O}(r^3)\right)$

CHECK

This implies

$\dfrac {1} { \log(1+r)^2 } = \dfrac{1}{r^2} + \dfrac{1}{r} + \dfrac{1}{12} + {\mathcal O}(r) = \dfrac{r+1}{((r+1)-1)^2} + \dfrac{1}{12} + {\mathcal O}(r)$

CHECK

In terms of $z=r+1$, this reads

$\dfrac {1} { \log(z)^2 } = \dfrac{z}{(z-1)^2} + \dfrac{1}{12} + {\mathcal O}((z-1))$

or, using

$\dfrac{z}{(z-1)^2}=z\dfrac{d}{dz}\dfrac{1}{1-z}=z\dfrac{d}{dz}\sum_{n=0}^\infty z^n=\sum_{n=0}^\infty n \, z^n$

this says

$\sum_{n=0}^\infty n \, z^n - \dfrac {1} {\log(z)^2} = - \dfrac {1} {12} + {\mathcal O}( (z-1) )$

-----

== How the log pops up when passing from discrete to smooth ==

We're interested in series coefficients $a_n$ vs. inverse integral transforms $a(t)$.

We may compare the sum and the integral of the powers $z^n$, i.e. $\sum_{n=0}^\infty z^n$ and $\int_0^\infty z^t\,{\mathrm d}t$.
We find, of course, that they differ by Euler–Maclaurin sum terms

$\sum_{n=0}^\infty z^n=\dfrac{1}{1-z}$

$\int_0^\infty z^t\,{\mathrm d}t=\int_0^\infty {\mathrm e}^{t\log(z)}\,{\mathrm d}t=-\dfrac{1}{\log(z)}=\dfrac{1}{1-z}-\dfrac{1}{2}+\dfrac{1}{12}(z-1)+{\mathcal O}\left((z-1)^2\right).$

To match the integral computation to the sum, i.e. $-\log(z)\leftrightarrow 1-z$, we would first have to perform the conformal mapping $z\mapsto {\mathrm e}^{z-1}$. Indeed,

$\int_0^\infty ({\mathrm e}^{z-1})^t\,{\mathrm d}t=\int_0^\infty {\mathrm e}^{(z-1)t}\,{\mathrm d}t=\dfrac{1}{1-z}$.

In other words, the smooth version of summands given by the pairing $a_n z^n$ doesn't so much correspond to the pairing $a(t)\,z^t$ under the integral, but rather to $a(t)\,{\mathrm e}^{(z-1)t}$. Make that $a(t)\,{\mathrm e}^{-st}$ after a shift $z\mapsto 1-s$, so that $a(t)=1$ is connected to $\dfrac{1}{s}$.

If $a_0,a_1,a_2,\dots$ is a series, the function

$G[a](z)=\sum_{n=0}^\infty a_nz^n$

is called its generating function.
If $a(t)$ is a function, the corresponding function (shifted by one) is the [[Laplace transform]]

$L[a](s):=\int_0^\infty a(t)\,{\mathrm e}^{-st}\,{\mathrm d}t$.

For those two, the constant series/function $a_n=a(t)=1$ leads to $\frac{1}{1-z}$ resp. $\frac{1}{s}$.

-----

<code>
a[n_] := Product[Product[k, {k, 1, m}], {m, 1, n}]

Table[a[n], {n, 1, 5}]
</code>

-----

Now, as a precursor, consider the function $f(k):=k^2$.
Its value at the point $k=3$ is $9$ and its value at the point $k=4$ is $16$.

Now say we don't know which point $k$ we really deal with, we only know that it's in the interval $[3, 4]$.
A good estimate for the function evaluated on the unknown value should be the average

$\langle f(k) \rangle = \int_{3}^{4} \, f(k) \, dk = \dfrac{37}{3} \approx 12.333 = 9 + \frac {10} {3}$

The continuous version of the evaluation differs from the evaluation $f(3)=9$ at the left end by a term of $\frac {10} {3}$.
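The average in this precursor example is exact, so it can be confirmed with rational arithmetic; a Python sketch:

```python
from fractions import Fraction

# average of f(k) = k^2 over [3, 4]: int_3^4 k^2 dk = (4^3 - 3^3)/3
avg = Fraction(4**3 - 3**3, 3)

print(avg)           # 37/3, i.e. about 12.333
print(avg - 3**2)    # 10/3, the offset from the evaluation at k = 3
```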
Now the sum $0 + 1 + 2 + 3 + \dots$ equals the limit $\lim_{z \to 1}$ of the sum $0 + 1 \, z^1 + 2 \, z^2 + 3 \, z^3 + \dots$.

For $z$ in $(0,1)$ this can be computed:

$\sum_{k=0}^\infty k \, z^k = z \dfrac {d} {dz} \sum_{k=0}^\infty z^k = z \dfrac {d} {dz} \dfrac {1} {1-z} = \dfrac{z} { (z-1)^2 }$

As we already knew, and now can read off explicitly, this expression diverges for $z$ to $1$.
So let's consider the sum of smooth deviations around integer values. With

$\langle k\,z^k \rangle = z \dfrac {d} {dz} \langle z^k \rangle = z \dfrac {d} {dz} \langle {\mathrm e}^{k \log(z) } \rangle = z \dfrac {d} {dz} \left. \dfrac { z^{k'} } { \log(z) } \right|_{k}^{k+1}$

we find that the sum $\sum_{k=0}^n \langle k \, z^k \rangle$ involves canceling upper and lower bounds, and we're left with $\dfrac {1} {\log(z)^2}$ plus terms suppressed by $z^n$.

Using the expansion of the log above, we find the difference

$\sum_{k=0}^\infty (k \, z^k - \langle k \, z^k \rangle ) = \dfrac {z} {(z-1)^2} - \dfrac {1} {\log(z)^2} = - \dfrac {1} {12} + O( (z-1)^1 )$

We can bridge the connection to the finite sum:
Look at

$\sum_{k=0}^m \left( k \, z^k - (1-q) \langle k \, z^k \rangle \right)$

Mathematica tells us that this is $\dfrac{(1-q) (z-1)^2 \left(z^{m+1}-1\right)+z \log (z) \left(z^m \left((m+1) (q-1)(z-1)^2+(m (z-1)-1) \log(z)\right)+\log (z)\right)}{(z-1)^2 \log ^2(z)}$

Taking the limit $z$ to $1$ results in $\frac {1} {2} (m+1) (q\,m+(q-1))$, and for $q$ to $1$ (no smoothing subtracted) we find the classical result $\sum_{k=0}^m k=\frac{1}{2}m(m+1)$.

On the other hand, taking the limit $m$ to infinity before going with $z$ to $1$ gives us

$\frac {z} {(z-1)^2} - \frac {1-q} {\log(z)^2}$

=== Ramanujan summation ===

A finite difference shift $\Delta_h$ can be defined by ${\mathrm e}^{h\,D}-1$, where $D=\frac{{\mathrm d}}{{\mathrm d}x}$. The first order approximation to that would be $h\,D$.
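On a cubic the Taylor series terminates, so the operator identity $\Delta_h={\mathrm e}^{h\,D}-1$ can be checked exactly; a Python sketch (the choices $f(x)=x^3$, $x=2$, $h=0.1$ are arbitrary):

```python
# f(x) = x^3 with derivatives 3x^2, 6x, 6; all higher ones vanish
x, h = 2.0, 0.1

delta = (x + h)**3 - x**3                             # Delta_h f(x)
series = h * 3*x**2 + h**2 / 2 * 6*x + h**3 / 6 * 6   # (e^{hD} - 1) f(x), terminating
first_order = h * 3*x**2                              # the first order approximation h D f(x)

print(delta, series, first_order)
```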
Fact 1: Given a series $f(z)=\sum_{k=0}^\infty a_k z^k$, one has

$\dfrac{f(0)}{f(f(0)\,z)}=\dfrac{a_0}{\sum_{k=0}^\infty a_k\left(a_0\,z\right)^k}=\dfrac{1}{1+\sum_{k=1}^\infty\dfrac{a_k}{a_0}\left(a_0\,z\right)^k}=1-a_1\,z+\left(a_1\,a_1-a_0\,a_2\right)\,z^2-\left(a_1\,a_1\,a_1-2\,a_0\,a_1\,a_2+a_0\,a_0\,a_3\right)\,z^3+{\mathcal O}(z^4).$

TODO: write this down for $a_0=1$

<code>
Series[(E^t - 1)/t, {t, 0, 4}] (* you know this expansion if you know the expansion of Exp[t] *)

Series[t/(E^t - 1), {t, 0, 4}]

a[0] = 1;
f[t_] = Sum[t^k/a[k], {k, 0, 17}];

Series[f[t], {t, 0, 4}]

Series[1/f[t], {t, 0, 4}]

% /. {a[n_] :> (n + 1)!}
</code>

Fact 2: The Bernoulli polynomials $B_k(x)$ can be defined as the coefficients of

$\dfrac{z}{{\mathrm{e}^z}-1}{\mathrm{e}}^{x\,z}=1+\sum_{k=1}^\infty \frac{1}{k!}B_k(x) \,z^k=1+\frac{1}{1!}\left(x-\frac{1}{2}\right) z+\frac{1}{2!}\left(x^2-x+\frac{1}{6} \right) z^2+\frac{1}{3!} \left(x^3-\frac{3 x^2}{2}+\frac{x}{2}+0\right) z^3+{\mathcal O}(z^4).$

The constant coefficients $B_k:=B_k(0)$ are called Bernoulli numbers and pop up all over the place, see e.g. [[http://en.wikipedia.org/wiki/Baker%E2%80%93Campbell%E2%80%93Hausdorff_formula#Matrix_Lie_group_illustration|Baker–Campbell–Hausdorff formula]].

(They are also characterized by $\int_x^{x+1}B_k(y)\,{\mathrm d}y=x^k$ and fulfill $B_k'(x)=k\,B_{k-1}(x)$.)

-----

Consider the finite difference $\Delta_h f(x)$ of a function $f$ about $x$, when shifted by $h$.
With the Taylor expansion

$\Delta_h f(x)=f(x+h)-f(x)=f'(x)\,h+\sum_{k=2}^\infty\frac{1}{k!}f^{(k)}(x)\,h^k$

we can compute the correction factor by which the first order approximation fails to capture the finite difference:

$\dfrac{f'(x)\,h}{\Delta_h f(x)}=1-\dfrac{f''(x)}{2!}\left(\dfrac{h}{f'(x)}\right)+\left(\dfrac{f''(x)\,f''(x)}{2!\,2!}-\dfrac{f'(x)\,f'''(x)}{1!\,3!}\right)\left(\dfrac{h}{f'(x)}\right)^2+{\mathcal O}(h^3).$

Spoiler: Note that $\dfrac{1}{2!\,2!}-\dfrac{1}{1!\,3!}=\dfrac{1}{2!\,3!}(3-2)=\dfrac{1}{12}$, which is $\dfrac{1}{2!}B_2$.

The above expansion is a conglomerate of derivatives of order higher than the first, respectively weighted by $\frac{1}{k!}B_k$.

<code>
Series[-(f'[x] h)/(f[x + h] - f[x]), {h, 0, 2}]

Table[-BernoulliB[k]/k!, {k, 0, 10}]
</code>

MacLaurin was interested in the approximation of integrals $\int_a^b f(n)\,{\mathrm d}n$ by sums $f(a)+f(a+1)+f(a+2)+\dots$. You can do a similar expansion as above, use the fundamental theorem of (h-)calculus and find the [[http://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula|Euler–MacLaurin formula]]:

$\dfrac{\sum_{n=a}^{b-1} f(n)}{\int_a^b f(n)\,{\mathrm d}n} = 1+\left(\lim_{x\to b}-\lim_{x\to a}\right)\left(-\dfrac{1}{2}\dfrac{d}{dx}+\dfrac{1}{12}\dfrac{d^2}{dx^2}+\dots\right)\dfrac{\int_a^x f(n)\,{\mathrm d}n}{\int_a^b f(n)\,{\mathrm d}n}$.

or

$\int_a^b f(n)\,{\mathrm d}n = \sum_{n=a}^{b-1} f(n)+\left(\lim_{x\to b}-\lim_{x\to a}\right)\left(\dfrac{1}{2}-\dfrac{1}{12}\dfrac{d}{dx}+\dots\right)f(x)$.

It says that an integral of a function $f$ is a sum over its base points minus Bernoulli-weighted curvature corrections.

The building blocks of analytical functions are monomials $f(n)=n^{k-1}$, and for those the formula is particularly simple, because all higher derivatives vanish.
The formula immediately tells us that $\int_a^b n^{k-1}\,{\mathrm d}n=\frac{b^k}{k}-\frac{a^k}{k}$ is $\sum_{n=a}^{b-1} n^{k-1}$ minus some lower order corrections.

Example 1: With $k=3,a=2,b=4$ you get the identity

$\frac{1}{3}(4^3-2^3)=\int_2^4 n^2\, dn=(2^2+3^2)+\frac{1}{2}(4^2-2^2)-\frac{1}{12}2(4^1-2^1)$,

Example 2: For $k=2,a=0$ you can easily visualize the result: The integral $\int_0^b n\,{\mathrm d}n$ is the surface under $f(n)=n$, i.e. the triangle area $\frac{b^2}{2}$. The sum $0+1+2+\dots+(b-2)+(b-1)$ corresponds to the surface under a staircase. The staircase misses $\frac{1}{2}$ of the triangle for every one of its $b$ steps. You get $\int_0^b n\,{\mathrm d}n=\sum_{n=0}^{b-1} n+\frac{b}{2}$, which you might rewrite as $\sum_{n=0}^{b-1} n=\frac{b(b-1)}{2}$.

Since some integrals are easier to solve than sums, you can use the formula also to compute sums. For example, from the above you get the classical formula

$\sum_{n=a}^{b-1} n^{k-1} = \dfrac{1}{k}B_k(b)-\dfrac{1}{k}B_k(a)$,

where $B_k(x)=x^k+\dots$ are the Bernoulli polynomials. For example, with $k=2,a=0$, where $B_2(x)=x^2-x+\frac{1}{6}$, you find

$\sum_{n=0}^{b-1} n = \left(\frac{b^2}{2}-\frac{b}{2}+\frac{1}{12}\right)-\left(0-0+\frac{1}{12}\right)=\dfrac{b(b-1)}{2}$.

Note that $f(n)=n$ has no second order curvature, so the $\frac{1}{12}$'s cancel.

Now $\lim_{b\to\infty}$ of $\sum_{n=0}^{b-1} n$ is clearly undefined. In Ramanujan's theory of previously undefined infinite sums, you go back to the Euler–Maclaurin formula and disregard the whole upper bound $\lim_{x\to b}$ (which contains one of the $-\frac{1}{12}$'s) and also the integral if it's divergent. For $\sum_{n=0}^\infty n$, you're left with the $\lim_{x\to a}$ term with $a=0$, which is $\frac{1}{2}f(0)-\frac{1}{12}f'(0)=-\frac{1}{12}$.

=== zeta reg ===

Peano arithmetic defines addition recursively for finite sums of natural numbers. In analysis, arbitrarily small numbers are used to give the epsilon-definition of the limit of a sequence.
Let $f$ be an expression defining a sequence $f(n)$. The classical conception of an "infinite sum" is that it's the limit of its partial sums

$\sum_{n=0}^\text{Classical} f(n):=\lim_{m\to\infty}\sum_{n=0}^m f(n)$,

see [[Limit in a metric space]]. We're going to give another one.

For functions $f,g$ given as series expansions around points $p,q$ (let's fix $p=q=0$) write

$f({\frac{\partial}{\partial z}})\,g(z) := \sum_{k=0}^\text{Classical} f^{(k)}(p) \frac{1}{k!} \left( {\frac{\partial^k}{\partial z^k}} \sum_{j=0}^\text{Classical} g^{(j)}(q) \frac{1}{j!} z^j \right)$,

see [[smooth function of a linear operator]]. Since $E\,{\mathrm e}^{-\beta\,E} = -\frac{\partial}{\partial \beta}\,{\mathrm e}^{-\beta\,E}$, we have $F(E)\,{\mathrm e}^{-\beta\,E} = F(-\frac{\partial}{\partial \beta})\,{\mathrm e}^{-\beta\,E}$. Using this with $F:=f\circ H^{-1}$ and $E:=H(n)$, we can write

$f(n) = \lim_{\beta\to 0} f(n)\,{\mathrm e}^{-\beta\,H(n)} = \lim_{\beta\to 0} f(H^{-1}(-\frac{\partial}{\partial \beta}))\,{\mathrm e}^{-\beta\,H(n)}$.

Note that the argument $n$ only occurs in the exponential function, while the function $f$ is fully encoded in the linear differential operator. We can pull it through a finite sum to get a more complicated representation of the classical sum

$\sum_{n=0}^\text{Classical} f(n) := \lim_{m\to\infty} \lim_{\beta\to 0} f(H^{-1}(-\frac{\partial}{\partial \beta})) \sum_{n=0}^m {\mathrm e}^{-\beta\,H(n)}$.

Now the middle part doesn't depend on $m$. Pulling the operator out even further would be equivalent to commuting the $m$- and $\beta$-limits. This suggests an alternative definition of the infinite sum:

$\sum_{n=0}^\text{Analytical} f(n) := \lim_{\beta\to 0} f(H^{-1}(-\frac{\partial}{\partial \beta}))\, Z(\beta)$,

where $Z(\beta)$ is defined as the analytical continuation of $\sum_{n=0}^\text{Classical} {\mathrm e}^{-\beta\,H(n)}$.
From the perspective of the rewritten expression of the classical sum, we can summarize: Before, we had to take a limit of a function which had already been evaluated at $\beta=0$, and this limit might be undefined. Now, after the switching of limits, a function is defined by a sum which has to converge for //some// $\beta\in{\mathbb C}$, and only then do we investigate if the result is defined at $\beta=0$.

(I've used notation in analogy to the partition function. What's different from statistical mechanics is that the initial task, to compute $\lim_{m\to\infty}\sum_{n=0}^m f(n)$, isn't an average. That would be $\lim_{m\to\infty}\sum_{n=1}^{m-1} f(n)\frac{1}{m}$, with the factor $\frac{1}{m}$ representing a mean, and in fact the weights ${\mathrm e}^{-\beta\,H(n)}$ are often part of the average by construction. Note also that $m=\lim_{\beta\to 0}\sum_{n=1}^{m-1}{\mathrm e}^{-\beta\,H(n)}=\lim_{\beta\to 0}Z(\beta)$ and in fact the weight is usually $p(n,\beta):={\mathrm e}^{-\beta\,H(n)}/Z(\beta)$. But with $\beta$ in the denominator the derivative trick doesn't work, and so an extra exponential ${\mathrm e}^{-J\,\Pi(n)}$ is introduced.)

-----

Consider $H(n)=n$ so that ${\mathrm e}^{-\beta\,H(n)}={\mathrm e}^{-\beta\,n}=({\mathrm e}^{-\beta})^n$ and
$Z(\beta)=\dfrac{1}{1-{\mathrm e}^{-\beta}}$.
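The small-$\beta$ behaviour of this $Z$ can already be probed numerically; a Python sketch (the coefficients $\frac{1}{\beta}+\frac{1}{2}+\frac{1}{12}\beta$ are the claim being probed):

```python
import math

def Z(beta):
    return 1 / (1 - math.exp(-beta))

# Z(beta) - 1/beta tends to 1/2, and the next correction is beta/12
for beta in (0.1, 0.01, 0.001):
    print(beta, (Z(beta) - 1/beta - 0.5) / beta)   # -> 1/12 = 0.0833...
```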
This can be expanded as

$=\dfrac{1}{-\sum_{k=1}^\infty\frac{1}{k!}(-\beta)^k}$

$=\dfrac{1}{\beta}\dfrac{1}{1-\left(-\frac{1}{2}(-\beta)-\frac{1}{6}(-\beta)^2+\dots\right)}$

$=\dfrac{1}{\beta}\left(1-\frac{1}{2}(-\beta)-\frac{1}{6}(-\beta)^2+\left(-\frac{1}{2}(-\beta)\right)^2 +\dots\right)$

$=\frac{1}{\beta}+\frac{1}{2}+\frac{1}{12}\beta+\dots$

So

$\sum_{n=0}^\text{Analytical} f(n)=\lim_{\beta\to 0} f(-\frac{\partial}{\partial \beta}) \left(\frac{1}{\beta}+\frac{1}{2}+\frac{1}{12}\beta+\dots\right)$

For $f(n)=n$, our sum $\sum_{n=0}^\text{Analytical}$ is still undefined:

$\sum_{n=0}^\text{Analytical} n = -\frac{1}{12}+\lim_{\beta\to 0}\frac{1}{\beta^2}$

-----

Now consider $H(n)=\log(n)$ so that ${\mathrm e}^{-\beta\,H(n)}={\mathrm e}^{-\beta\,\log(n)}=n^{-\beta}$ and $Z(\beta)=\zeta(\beta)$, which is well defined at $\beta=0$.

Let $T_d:=\exp(-d\frac{\partial}{\partial \beta})$ be the shift operator and note that $H^{-1}(-\frac{\partial}{\partial \beta})={\mathrm e}^{-\frac{\partial}{\partial\beta}}=T_1$. If $f$ has a Taylor expansion around $0$, then

$\sum_{n=0}^\text{Analytical} f(n) =\sum_{k=0}^\text{Classical} \frac{1}{k!} f^{(k)}(0)\, \zeta(-k)$

For $f(n)=E_0+E_1\,n$ we now find a finite value

$\sum_{n=0}^\text{Analytical} (E_0+E_1\,n) =E_0\,\zeta (0)+E_1\,\zeta (-1) =-\frac{1}{2}E_0-\frac{1}{12}E_1$

We have the special cases

$1+1+1+1+\dots :=\sum_{n=0}^\text{Analytical} 1 =-\frac{1}{2}$

$1+2+3+4+\dots :=\sum_{n=0}^\text{Analytical} n =-\frac{1}{12}$

=== Reparametrization of infinite sum, Abel summation style ===

Here's one way to look at it:

Firstly, it's important to note that the rate of convergence of different series with the same limit can be totally different. E.g. we have

$\pi=4\sum_{k=0}^\infty\dfrac{(-1)^k}{2k+1}$

$\pi=2\sqrt{3}\sum_{k=0}^\infty\dfrac{(-1/3)^k}{2k+1}$

but the first series is famous for converging ridiculously slowly.
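The difference in convergence speed is easy to see by comparing partial sums; a Python sketch (the helper names are mine) with 20 terms each:

```python
import math

def leibniz(N):
    # pi = 4 * sum (-1)^k / (2k+1)
    return 4 * sum((-1)**k / (2*k + 1) for k in range(N))

def fast(N):
    # pi = 2 sqrt(3) * sum (-1/3)^k / (2k+1)
    return 2 * math.sqrt(3) * sum((-1/3)**k / (2*k + 1) for k in range(N))

print(abs(leibniz(20) - math.pi))   # still wrong in the second digit
print(abs(fast(20) - math.pi))      # already accurate to about ten digits
```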
If a sequence is steadily growing, then moving larger terms forward in the sum will make it converge faster.
There are actually methods to change a series representation to make it converge faster, e.g.
http://en.wikipedia.org/wiki/Shanks_transformation

Say you're interested in

$1+\dfrac{1}{2}+\dfrac{1}{4}+\dfrac{1}{8}+\dots$

but too drunk to realize what it converges to. You can interpret this as

$1+\dfrac{1}{2}X+\dfrac{1}{4}X^2+\dfrac{1}{8}X^3+\dots=\sum_{n=0}^\infty (X/2)^n$

which is valid around $X=1$. You might remember that this is $\dfrac{1}{1-X/2}$, which can be written as $2\dfrac{1}{2-X}$, and now it's evident that the series at $X=1$ is equal to $2$.
From this we can obtain many new sums converging against $2$. Consider a re-parametrization

$X(x):=2-\exp(1-x)$

which doesn't move the point of interest, because

$X(x=1)=2-\exp(0)=1$

Now

$1+\dfrac{1}{2}X(x)+\dfrac{1}{4}X(x)^2+\dots=2\dfrac{1}{2-X(x)}=$

$=2\dfrac{1}{2-(2-\exp(1-x))}=2\exp(x-1)=2+2\sum_{k=1}^\infty\dfrac{1}{k!}(x-1)^k$

By expressing the sum as a series in $(x-1)$, the result at $x=1$ actually already emerges from the first term.

Now to the sum of all numbers. Convince yourself that for $|X|<1$, we have the following expansion

$I(X):=\sum_{k=1}^\infty k\,X^k=X\dfrac{d}{dX}\sum_{k=0}^\infty X^k=$
$=X\dfrac{d}{dX}\dfrac{1}{1-X}=-\dfrac{1}{2}\dfrac{1}{1-\dfrac{1}{2}(X+X^{-1})}$

http://www.wolframalpha.com/input/?i=Series[1%2F%28%28-2%29*%281-%28x%2B1%2Fx%29%2F2%29%29%2C+{x%2C+0%2C+6}]

The first terms

$X+2 X^2+3 X^3+\dots$

are harmless for any values of $X$, but as we approach $X=1$, the later summands $\dots+8999+9000+9001+\dots$ get bigger and bigger and $I(X)$ diverges.

Now, to better understand the divergence of $I(X)$ at $X=1$, we are going to re-parameterize $X$ in a way so that the divergence occurs right in the first term, while the later terms instead become harmless.

We consider

$X(x):=\exp(x-1)$

(It should be noted that this function is ad hoc. The good analytical behavior of $\exp$ eventually corresponds to the analytic continuation.)

The pole is again at $x=1$, because $I(X(1))=I(\exp(0))=I(1)$.

However, since

$X^n=\exp(x-1)^n=\exp(nx-n)={\mathrm e}^{-n}\sum_{k=0}^\infty\dfrac{n^k}{k!}x^k$

for all $X^n$ in $X+2 X^2+3 X^3+\dots$, we have that even the large coefficient of $X^{9000}$ now contributes to the early coefficient of $x^3$. The divergence moves forward in the sum.

Coming back to

$I(X)=X(x)+2\,X(x)^2+3\,X(x)^3+4\,X(x)^4+\dots=-\dfrac{1}{2}\dfrac{1}{1-\dfrac{1}{2}(X+X^{-1})}$,

since by definition

$\dfrac{1}{2}(X(x)+X(x)^{-1})=\dfrac{1}{2} \left(\exp(x-1)+\exp(-(x-1))\right)=\cosh(x-1)$

and since $\cosh$ has the Taylor series

$1-\cosh(x-1)=-\sum_{k=1}^\infty \dfrac{1}{(2k)!}(x-1)^{2k}=\left(-\dfrac{1}{2}\right)(x-1)^2 \left(1+\dfrac{1}{3\cdot 4}(x-1)^2+\dots\right)$

http://www.wolframalpha.com/input/?i=Series[-2+%28Cosh[1+-+x]+-+1%29%2C+{x%2C+1%2C+6}]

we find

$I(X)=\dfrac{1}{(x-1)^2}-\dfrac{1}{12}+\dots$

(Note how the fraction $(-2)\dfrac{1}{4!}=-\dfrac{1}{12}$ arises from the $(x-1)^4$ term of the cosh expansion.)

In this representation, we see that as $X(x)$ approaches $1$, the sum grows like $\dfrac{1}{(x-1)^2}-\dfrac{1}{12}$. You cannot set $x=1$, as then the first term, and hence the whole sum, becomes undefined. Ramanujan sets the value to the remainder, $-\dfrac{1}{12}$.

=== Motivation ===

You don't need the Riemann Zeta Function, there are many ways to get that result, but that's all besides the point.

You say
>Prove to me otherwise that 1+2+3... =/= Infinity
but proving that it's infinity amounts to just as much mathematical machinery.
You might axiomatize the natural numbers via Peano's axioms and implement that on a computer, but you still cannot conclude that adding up all natural numbers gives infinity.
The machinery which assigns this "value", 1+2+3...=Infinity, does not come with addition and multiplication of numbers. The notion of the limit of partial sums, and in particular the convergence criterion by which infinite sums get mapped to a number this way, such as 1/2+1/4+1/8+...=1, is beyond the scope of the theory of addition which you can write down in a few axioms.
Most people couldn't tell what a metric space or a norm is, yet you're okay with 1/2+1/4+1/8+...=1 because the geometric picture (pic related), the model for which the metric convergence method was invented, makes sense to you. The method which assigns -1/12 to 1+2+3+... happens not to have models in the geometry you know (but in other physics, QFT etc.), and so you disregard it.

Never mind that you can't add an infinite amount of numbers anyway, just as much as you cannot add 1/2+1/4+1/8+... . The infinite is a barrier here. What you can do is capture the infinite with iteration:
"if I cut the tile in half each time and fit it into the space left, then I still have half as much space and can repeat the procedure"
At this point you do an inductive argument over the geometry which is the model of the syntactic expressions called numbers; you don't actually add an infinite amount of numbers 1/2^n.
The result 1+2+3... = -1/12 is another nice consistent assignment, just like the 1/2+1/4+1/8+... = 1 one, but you just lack knowledge of models for it. You feel it's absurd because you've come to think infinite sums derive from addition of things, while the relationship is more involved.
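The limit-of-partial-sums machinery mentioned above can be made concrete in a few lines. A minimal sketch (the names are my own): the partial sums of 1/2+1/4+1/8+... settle down near 1, while the partial sums of 1+2+3+... just keep growing.

```python
def partial_sums(term, n):
    """First n partial sums of the series term(1) + term(2) + ..."""
    total, out = 0, []
    for k in range(1, n + 1):
        total += term(k)
        out.append(total)
    return out

geometric = partial_sums(lambda k: 1 / 2**k, 50)   # 1/2 + 1/4 + 1/8 + ...
naturals  = partial_sums(lambda k: k, 50)          # 1 + 2 + 3 + ...

assert abs(geometric[-1] - 1.0) < 1e-12   # the "metric" evaluation: limit is 1
assert naturals[-1] == 50 * 51 // 2       # no limit; grows without bound
```

The point of the text stands: nothing in this code "adds infinitely many numbers". It only iterates finitely and observes where the finite sums are heading.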
##########
############
############

https://archive.moe/sci/thread/6910171/#6910259
https://archive.moe/sci/thread/6910171/#6910262
https://archive.moe/sci/thread/6910171/#6910268

>>6910230
>>6910237
I've written a longer answer in another thread some time ago, and I'm just gonna copy paste it. Maybe it helps:

(1/3)
>take one grain of wheat, add it to another grain of wheat and how many grains do you have? 2.
>take one grain of wheat, add 2 grains to that, add 3 grains etc… and you end up with -1/12 grains of wheat.
>sorry, but if that makes sense to you, you're an idiot.

What you say here isn't an argument against 1+2+3+...=-1/12.
Let me elaborate (the first part is gonna be basic):
Yes, most here will agree that 1+1=2. That's "obviously self-evident". People have learned how to add grains of wheat and written down the rules for addition accordingly.

So what is 1+3 (one grain of wheat plus three more grains), or 4+8, or 2+3? In your head, you reduce the expressions to 4, 12 and 5, respectively. How does your head do it? You could imagine making a list of all results: store "1+3 is 4" and "4+8 is 12" and so on. But you couldn't possibly store all those results in your head, as there are infinitely many such equations.

You can make computation with numbers generally possible, and also much more efficient, by setting up computational rules:
Store only specific representations of numbers, e.g.
5 is ( ( ( 1 + 1 ) + 1 ) + 1 ) + 1, and then the rule: a + ( b + 1 ) is ( a + b ) + 1.
That's arithmetic as in
http://en.wikipedia.org/wiki/Peano_axioms#Arithmetic
and it lets you -compute- the value of 1+3:
1+3
is 1 + ( ( 1 + 1 ) + 1 ) by the stored information 3 is ( 1 + 1 ) + 1, and now this
is ( 1 + ( 1 + 1 ) ) + 1 by the computational rule, and this
is ( ( 1 + 1 ) + 1 ) + 1 by the computational rule again used inside the brackets, and this
is 4 by the stored information 4 is ( ( 1 + 1 ) + 1 ) + 1.
That's all a computer can do, btw.

(2/3)

Now this naive addition will not provide you with a suggestion on how to deal with infinite sums.
So a priori, the sums
>1+2+3+4+...
>1+1/2+1/4+1/8+...
>1-1+1-1+1-1+...
are all just compactly describable expressions. No evaluation is stored or prescribed.

Here comes your "fallacy":
You have, probably through school, obtained an intuition for why the second sum should be 2.
That's pic related.
It's not like you can add 1 plus 1/2 plus 1/4 grains of wheat irl, but you have the geometric picture, and mathematicians ought to make this computable.

A couple of hundred years ago, they succeeded. We are able to speak of metrics, norms etc., and a sum might converge to the limit of its partial sums. That's a technique.

Now you say "the infinite sum 1+1/2+1/4+1/8+... is 2" while the infinite sum "1+2+3+4+..." is not defined, or infinite (which is not even part of the set of natural numbers).
In doing so, you act as if the weaker but better-known summation technique were somehow "natural" or "given".
For one, you fail to realize that e.g. Ramanujan summation, a technique which will assign -1/12 to 1+2+3+4+..., does not break any of the metric-definition-sum results. The more modern assignments merely extend the old one. AND they are taken to be "morally right", because they are actually used.
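As a brief aside, the 1+3 reduction spelled out above can be sketched as a tiny term-rewriting program. The encoding is my own (Z for zero and S for successor; the text itself starts from 1 rather than 0), not anything prescribed by the post:

```python
# Peano-style numbers as nested successor terms: 3 is S(S(S(Z))).
Z = ('Z',)

def S(n):
    """Successor: the term for n + 1."""
    return ('S', n)

def add(a, b):
    # the stored rule: a + (b + 1) is (a + b) + 1 ; base case: a + 0 is a
    if b == Z:
        return a
    return S(add(a, b[1]))

def to_peano(k):
    """Build the successor-term representation of the integer k."""
    return Z if k == 0 else S(to_peano(k - 1))

def to_int(n):
    """Read a successor term back as an ordinary integer."""
    return 0 if n == Z else 1 + to_int(n[1])

assert to_int(add(to_peano(1), to_peano(3))) == 4    # 1 + 3 is 4
assert to_int(add(to_peano(4), to_peano(8))) == 12   # 4 + 8 is 12
```

Nothing here knows about limits or infinite sums; it is exactly the finite rewriting the text describes, and no more.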
The application isn't plane geometry, but just some other mathematical description of the physical world (quantum field theory, say).
You argue: "Firstly, I will not accept any value assigned to an expression where the naive metric-limit definition doesn't succeed in finding a number value. Secondly, naive (Peano) arithmetic will never lead to a sum of positive numbers being a negative number, and neither does Euler's method! Therefore I'm right in rejecting the method!!"

(3/3)

What you want to do is find mathematics which is nice and useful. Of course, if you stumble upon a system which you find to be inconsistent, like if it demonstrates 0=1, then you will reject it. But 1+2+3+4+...=-1/12 isn't an inconsistent one, and people rightfully say
"1+2+3+4+...=-1/12"
because time has shown it's morally right.
If you don't want to confuse people, you can change your language to "informally 1+2+3+4+...=-1/12, where the equality is to be understood as...", but I'd actually refrain from doing so. Because if you do, you artificially put emphasis on the convention that "infinite sum = x" has to be read with the (weak) limit/metric evaluation as default.

Do you guys know how ln(-1) = i·\pi? Maybe that helps; it's a simple computation with a line integral whose path passes around the complex plane to -1.

############
############
############

>>6910287
>Because it just doesn't make sense with our convention of maths
Do you actually read the thread and think about what's being said? Or do you just go into it, asking a question without actually wanting to know the why?

Let me ask you like this: what is it that doesn't make sense?
Fact 1: You've never seen any things in the world being added in a way so that the whole resulting collection could be associated with a negative fraction.
Right, okay.
But that's not incompatible with 1+2+3+...=-1/12, as this is about an infinite sum.
Fact 2: You've maybe never seen anyone considering an infinite sequence of positive numbers and computing the infinite sum to be a negative fraction.
Right, okay. I've never been to Sydney, but I don't deny its existence.
Fact 3: In analysis you might have learned that to an expression
1/(a_1+a_2+a_3+...)
involving an infinite sum, where each difference a_{n+1}-a_n is bigger than one, you need to assign the number 0.
Did you conclude the last rule by adding apples in your kitchen?
No? Was this result, if at all, only used within a physical theory (-representing- the real world) involving differential calculus, formalized over the reals with their standard topology (some theory people came up with 400 years ago, once they had come up with enough math)?
What makes you think another formalization is disallowed?

############
############
############

https://archive.moe/sci/thread/6910610/

=== References ===
Wikipedia:
[[http://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula|Euler–Maclaurin formula]],
[[http://en.wikipedia.org/wiki/Baker%E2%80%93Campbell%E2%80%93Hausdorff_formula#Matrix_Lie_group_illustration|Baker–Campbell–Hausdorff formula]]

-----
[[Riemann zeta function]]