
Many of you must be aware of the works of the famous Dutch artist Maurits Cornelis Escher, who explored the notions of symmetry and infinity and depicted visual paradoxes and impossible worlds through his art. He worked extensively on regular divisions of the plane, also called tessellations, which are arrangements of closed, interlocking planar shapes that cover the whole plane without any gaps. Escher distorted basic planar figures such as triangles, squares and other polygons into organic forms to construct his tessellations. Even though he was not mathematically trained, Escher displayed a keen intuition and creativity which appeals to mathematicians and non-mathematicians alike. You can visit Escher in the Classroom, a fantastic site which walks you through some of Escher’s constructions.

Inspired by a drawing by H. S. M. Coxeter, Escher created his Circle Limit series, which uses the Poincaré disk model of hyperbolic space.

Math and the art of M. C. Escher and the mathematical art of M. C. Escher are also wonderful sites which talk in detail about the mathematical nature of Escher’s work.

One of Escher’s most fascinating works is the Print Gallery. It shows a young man standing in an exhibition gallery, viewing a print of a Mediterranean seaport. As his eyes follow the buildings shown on the print from left to right and then down, he discovers among them the very same gallery in which he is standing. A circular white patch in the middle of the lithograph contains Escher’s monogram and signature. Artists and mathematicians have wondered whether this white patch could be filled at all. It was only in 2002 that Hendrik Lenstra, a professor of mathematics, came up with a mathematical explanation of how the Print Gallery can be constructed, and his solution provided him with a way to fill the mysterious hole in the center. For more details you can refer to the excellent article titled Artful Mathematics: The Heritage of M. C. Escher in the Notices of the American Mathematical Society, and visit the website of the project titled Escher and the Droste Effect, in which a step-by-step account of how the solution was arrived at is given.

Achieving the Unachievable is an award-winning film that unravels the mystery behind the Print Gallery. It has been screened in many universities across the world and is a must-watch for everyone. You can enjoy the film at the screening organized by Singularity at 11:00 a.m. on Saturday, March 26 in L1. For now, here’s a trailer:


Relationship between real and complex differentiability

The most commonly used definition of differentiability of a function {f: \mathbb{R}\rightarrow \mathbb{R}} is as follows. The function {f} is (real) differentiable at {x_0 \in \mathbb{R}} if the limit

\displaystyle \begin{array}{rcl} \lim_{h\rightarrow 0}\frac{f(x_0+ h)- f(x_0)}{h} \end{array}

exists. Equivalently, we say that {f:\mathbb{R}\rightarrow \mathbb{R}} is differentiable at {x_0} if there exists a linear transformation {A: \mathbb{R}\rightarrow \mathbb{R}} satisfying

\displaystyle \begin{array}{rcl} \lim_{h\rightarrow 0}\frac{f(x_0+ h)- f(x_0)- Ah}{h}= 0. \end{array}

Such a linear transformation, if it exists, is unique and is called the derivative of {f} at {x_0}. We then write {A= f^{\prime}(x_0)}. More generally, a function {f: U \subset \mathbb{R}^n\rightarrow \mathbb{R}^m} is differentiable at {\mathbf{x_0} \in \mathbb{R}^n} if there exists a linear transformation {A: \mathbb{R}^n\rightarrow \mathbb{R}^m} such that

\displaystyle \begin{array}{rcl} \lim_{h\rightarrow 0}\frac{||f(x_0+ h)- f(x_0)- Ah||}{|h|}= 0, \end{array}

where the norms {||\cdot||} and {|\cdot |} are the standard norms on {\mathbb{R}^m} and {\mathbb{R}^n} respectively. The linear transformation, if it exists, is unique; it is called the Jacobian of {f} at {x_0} and is denoted by {Df(x_0)}. If we write {f= (f_1, f_2, \cdots, f_m)} and {x= (x_1, x_2, \cdots, x_n) \in \mathbb{R}^n}, then we can write the Jacobian in matrix form as

\displaystyle \begin{array}{rcl} \left[Df(x_0)\right]_{ij}= \left[\frac{\partial f_i}{\partial x_j}(x_0)\right] \end{array}

Before looking at complex differentiability, we recall the following proposition.

Proposition: Let {A: \mathbb{C}\rightarrow \mathbb{C}} be {\mathbb{R}}-linear. The following statements are equivalent:

  1. There exists {\alpha \in \mathbb{C}} such that {Az= \alpha z};
  2. {A} is {\mathbb{C}}-linear;
  3. {A(i)= iA(1)};
  4. The matrix of {A} with respect to the canonical basis {\{1+ 0i, 0+ i\}} has the form

\displaystyle \begin{array}{rcl} \left(\begin{array}{cc}a & -b\\b & a\end{array}\right), \quad a, b \in\mathbb{R}. \end{array}

Now, a function {f: V \subset \mathbb{C} \rightarrow \mathbb{C}} is complex-differentiable at {z_0 \in V} if the limit

\displaystyle \begin{array}{rcl} \lim_{h\rightarrow 0}\frac{f(z_0+ h)- f(z_0)}{h}=: f^{\prime}(z_0) \end{array}

exists. This can also be reformulated as follows. We say that {f} is complex-differentiable at {z_0} if there exists {\alpha \in \mathbb{C}} such that

\displaystyle \begin{array}{rcl} \lim_{h\rightarrow 0}\frac{|f(z_0+ h)- f(z_0)- \alpha h|}{|h|}= 0, \end{array}

and we have {\alpha= f^{\prime}(z_0)}.

From the above discussion, it is clear that if {f: V \subset \mathbb{C}\rightarrow \mathbb{C}} is complex-differentiable at {z_0= x_0+ iy_0}, then it is real-differentiable at {(x_0, y_0)} and the Jacobian {Df(x_0, y_0)} is {\mathbb{C}}-linear.
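To see this concretely, here is a small Python sketch (the helper `jacobian` and the test point are illustrative, not part of the original discussion) that approximates the real Jacobian of {f(z)= z^2} by central differences and exhibits the {\mathbb{C}}-linear form from the proposition:

```python
def jacobian(f, x0, y0, h=1e-6):
    """Approximate the 2x2 real Jacobian of f: C -> C, viewed as a
    map R^2 -> R^2, using central differences."""
    fx = (f(complex(x0 + h, y0)) - f(complex(x0 - h, y0))) / (2 * h)
    fy = (f(complex(x0, y0 + h)) - f(complex(x0, y0 - h))) / (2 * h)
    # First column: partials with respect to x; second column: with respect to y.
    return [[fx.real, fy.real],
            [fx.imag, fy.imag]]

# f(z) = z^2 is complex-differentiable with f'(1 + 2i) = 2 + 4i, so the
# Jacobian should have the C-linear form [[a, -b], [b, a]] with a = 2, b = 4.
J = jacobian(lambda z: z * z, 1.0, 2.0)
print(J)  # approximately [[2.0, -4.0], [4.0, 2.0]]
```

Here the entries {a= 2} and {b= 4} are exactly the real and imaginary parts of {f^{\prime}(1+ 2i)}.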

Typically, when students first learn about algebraic equations in one variable, the discussion starts with linear equations (i.e. equations of the form {ax= b}) and stops at quadratic equations (the general form being {ax^2+ bx+ c= 0}) and their solution via the quadratic formula {x= \frac{-b\pm \sqrt{b^2- 4ac}}{2a}}. The curious student often wonders why algebraic equations of degree three and higher are not discussed. Indeed, many questions come to mind. One may ask if there exists a solution to the general algebraic equation of degree {n}

\displaystyle \begin{array}{rcl} a_nx^n+ a_{n-1}x^{n-1}+ a_{n-2}x^{n-2}+\ldots+ a_1x+ a_0= 0, \end{array}

for every positive integer {n} and given coefficients {a_i}. If so, how many solutions are there? Furthermore, can each solution be expressed “in radicals”, that is, by a formula that involves only elementary algebraic operations (addition, subtraction, multiplication, division and extraction of roots)? In this series of posts, we will answer these questions. The historical notes are taken from the excellent books by Stillwell and Pesic respectively.

Mathematics and its history, by John Stillwell, Springer, 2010.
Abel’s Proof, by Peter Pesic, MIT Press, 2003.
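As a quick sanity check of the quadratic formula quoted above, here is a short Python sketch (the helper `quadratic_roots` is illustrative only); using `cmath.sqrt` means a negative discriminant is handled without any special-casing:

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of ax^2 + bx + c = 0 via the quadratic formula.
    cmath.sqrt returns a complex square root, so negative
    discriminants work transparently."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(quadratic_roots(1, -3, 2))  # roots 2 and 1
print(quadratic_roots(1, 0, 1))   # roots i and -i
```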

The Babylonians (2000-600 BC) were probably the first to think of and solve quadratic equations. However, they only considered positive roots, since the concept of a negative number did not exist at that time. Brahmagupta (628 AD) provided an explicit formula for the roots of a quadratic which included negative roots as well. Later on, Al-Khwarizmi (825 AD) wrote a treatise on equation-solving named Al-kitab al-mukhtasar fi hisab al-jabr w’al-muqabala (The book on restoration and balancing). It is from here that the term algebra came into being.

The cubic
After the quadratic was solved, not much progress was made in solving equations of higher degree until the sixteenth century, when some Italian mathematicians provided the breakthrough. To be precise, del Ferro came up with a solution to the cubic (an equation of the form {x^3+ ax^2+ bx+ c=0}) in 1520, but he did not publish it (although he did reveal it to a student of his). Later on, this solution was rediscovered by Niccolo Fontana, who was nicknamed Tartaglia (which means “stammerer” in Italian), and the name stuck. Tartaglia fought a scientific duel with del Ferro’s student and beat him badly. This was considered an important problem at the time, and when news of the discovery reached Cardano in 1539, he made several requests to Tartaglia to reveal his solution. The latter resisted for a while but was eventually persuaded. However, he coded the solution in the form of a poem and asked for an oath of secrecy that it never be revealed to anyone. Cardano was later able to inspect del Ferro’s papers (after his death), and he found that the solution obtained by Tartaglia was already present in del Ferro’s notes. Cardano thus did not feel bound by his oath and published the result in his famous book Ars Magna in 1545. This started a bitter feud, as Tartaglia accused Cardano of plagiarism and challenged him to a mathematical duel. Lodovico Ferrari had entered Cardano’s house as a servant, but he showed such mathematical skill that he soon became a student. It was Ferrari who accepted Tartaglia’s challenge on behalf of Cardano, and he won convincingly. As a consequence, Ferrari received a number of big offers, including a professorship at a good university. Soon after, Ferrari was able to solve the general quartic (the polynomial equation of degree four).

We now describe the del Ferro-Tartaglia-Cardano solution of the cubic. First, observe that the equation {x^3+ ax^2+ bx+ c= 0} can be reduced to the form {y^3= py+ q} via the substitution {x= y- a/3}, which eliminates the quadratic term. Now, let {y= u+ v}; since {(u+ v)^3= u^3+ v^3+ 3uv(u+ v)}, the equation {y^3= py+ q} is satisfied if we impose

\displaystyle \begin{array}{rcl} u^3+ v^3 &=& q\\ 3uv &=& p. \end{array}

Using the second equation to eliminate {v} (so that {v= p/(3u)}), the first equation becomes
\displaystyle \begin{array}{rcl} u^6- qu^3+ \left(\frac{p}{3}\right)^3= 0 \end{array}

which is a quadratic in {u^3} with roots
\displaystyle \begin{array}{rcl} \frac{q}{2}\pm\sqrt{\left(\frac{q}{2}\right)^2- \left(\frac{p}{3}\right)^3}. \end{array}

By symmetry (and the fact that the sum of the roots is {q}), the two roots are {u^3} and {v^3} and thus
\displaystyle \begin{array}{rcl} u^3 &=& \frac{q}{2}+ \sqrt{\left(\frac{q}{2}\right)^2- \left(\frac{p}{3}\right)^3}\\ v^3 &=& \frac{q}{2}- \sqrt{\left(\frac{q}{2}\right)^2- \left(\frac{p}{3}\right)^3}. \end{array}

Hence, we have
\displaystyle \begin{array}{rcl} y= u+ v= \sqrt[3]{\frac{q}{2}+\sqrt{\left(\frac{q}{2}\right)^2- \left(\frac{p}{3}\right)^3}}+ \sqrt[3]{\frac{q}{2}-\sqrt{\left(\frac{q}{2}\right)^2- \left(\frac{p}{3}\right)^3}}. \end{array}

This is the famous del Ferro-Tartaglia-Cardano solution of the cubic.
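The formula is easy to try out numerically. The following Python sketch (the helper `cardano` is illustrative, not a standard library function) computes one root of {y^3= py+ q}, using the relation {3uv= p} to pair the two cube roots consistently:

```python
import cmath

def cardano(p, q):
    """One root of y^3 = p*y + q via the del Ferro-Tartaglia-Cardano
    formula. Complex arithmetic handles a negative discriminant too."""
    disc = cmath.sqrt((q / 2) ** 2 - (p / 3) ** 3)
    u = (q / 2 + disc) ** (1 / 3)   # principal cube root of u^3
    v = p / (3 * u)                 # pair the cube roots via 3uv = p
    return u + v

print(cardano(p=6, q=9))    # y^3 = 6y + 9 has the root y = 3
print(cardano(p=15, q=4))   # Bombelli's example below: root y = 4
```

Solving for {v} from {3uv= p}, rather than taking a second independent cube root, avoids mismatching the branches of the two cube roots.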

Complex numbers

As noted in Stillwell, many books on complex numbers mention that complex numbers first came up in the context of quadratic equations, but this is false. Neither the Babylonians nor the Indians considered square roots of negative numbers. The Greeks, too, thought of solutions geometrically, and for them such solutions were “impossible.” It was in fact Cardano who considered the case when {\left(\frac{q}{2}\right)^2- \left(\frac{p}{3}\right)^3< 0}. In such a case, one is forced to consider imaginary numbers. Note that every cubic equation has at least one real solution (this follows from Bolzano’s theorem). How does one reconcile this with the solution of the cubic? This is exactly the question that Bombelli addressed (1572). He considered the equation {x^3= 15x+ 4}, which (by the above method) has the solution

\displaystyle \begin{array}{rcl} x= \sqrt[3]{2+\sqrt{-121}}+ \sqrt[3]{2- \sqrt{-121}}. \end{array}

However, it can be verified directly that {x= 4} is a solution as well. Bombelli’s idea was that the terms {\sqrt[3]{2+\sqrt{-121}}} and {\sqrt[3]{2-\sqrt{-121}}} could be written as {2+ a\sqrt{-1}} and {2- a\sqrt{-1}}, so that their sum would be {4}, and this was found to be correct. Indeed, you can verify that {(2+ \sqrt{-1})^3= 2+ 11\sqrt{-1}= 2+ \sqrt{-121}}.
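Bombelli’s reconciliation can be checked directly with Python’s built-in complex numbers. For {x^3= 15x+ 4} the formula gives {(q/2)^2- (p/3)^3= 4- 125= -121}, so the cube roots in question are those of {2\pm 11i}:

```python
# Bombelli's computation, done with Python's complex numbers.
u = (2 + 11j) ** (1 / 3)   # principal cube root, which is 2 + i
v = (2 - 11j) ** (1 / 3)   # principal cube root, which is 2 - i
print((2 + 1j) ** 3)       # (2+11j): Bombelli's key observation
print(u + v)               # approximately (4+0j): the real root x = 4
```

The imaginary parts of the two cube roots cancel, leaving the real root {x= 4}.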

In the next post we will look at Ferrari’s solution to the quartic and then discuss the case of the quintic.

Just got back from Srinagar, Garhwal, where I was invited to give a lecture at an INSPIRE Science camp. As you may know, the INSPIRE program is run by the Department of Science and Technology and is a terrific initiative of the government to popularize and promote science throughout the country. The particular component I attended is called the Scheme for Early Attraction of Talent, which provides an opportunity for the top students of each board to attend camps in which they are exposed to advanced areas of science through lectures given by researchers and experts from India.

Srinagar, Garhwal is a beautiful town located in the foothills of the Himalayas and on the banks of the Alaknanda river. Even though the surroundings are peaceful and serene, life is not easy for people living in this region. The terrain is extremely hilly and there are frequent landslides which affect life in a very direct way. Nonetheless, there’s a certain warmth and friendliness which the people here display which is sometimes hard to find in cities.

The lecture was meant for the top 11th and 12th grade students of the Uttarakhand board, most of whom come from a Hindi-medium background, which is why I had to choose a topic that was sufficiently accessible to them while still providing a glimpse of the kind of mathematics they will see at the senior undergraduate level. I talked about the solvability of algebraic equations in one variable in general, and showed why it is not possible to solve the general quintic (equation of degree five) by radicals (i.e. using the elementary algebraic operations of addition, subtraction, multiplication, division and extraction of roots). I discussed the work of Abel and the generalization by Galois. Considering the audience, I decided to give the lecture in Hindi. The response from the students was very good. Many of them were very enthusiastic and had a few questions for me after the lecture.

I will have an expanded version of the lecture here on the blog sometime next week. Watch this space for more details.

I have talked about some of the major open problems in mathematics with my MTH 101 students (either in class or outside), and these include the Riemann Hypothesis, the Goldbach Conjecture and the Collatz Conjecture. One important problem which was resolved recently is the Poincare Conjecture. As you may know, Grigory Perelman provided a sketch of the proof in his arXiv papers, which was subsequently verified by various prominent mathematicians, and at the 2006 International Congress of Mathematicians held in Madrid, Spain, he was awarded the Fields Medal (which he declined; more recently, he also declined the million-dollar prize from the Clay Mathematics Institute). I have been wanting to mention this for some time, but it was not easy to find a context to talk about this problem in a Calculus-1 class...until now.

I read the OU Math Club blog regularly, and over there I found a news item which I am sure my MTH 101 class will find interesting (particularly since many of them recently participated in a Fashion Show at the annual fest EnthuZia 2010). Now, Perelman actually proved a more general result, called Thurston’s Geometrization Conjecture, which implies the Poincare conjecture. Cornell math professor William Thurston proposed this conjecture in 1982, and it says that, roughly speaking, a three-dimensional geometric object (more precisely, a closed, oriented 3-manifold) can be cut into geometric pieces, each of which has one of the eight geometries in dimension three. These consist of spherical, hyperbolic and flat geometries and five other kinds which are somewhat difficult to explain here. For his seminal contributions, Thurston was awarded the Fields Medal in 1982.

Inspired by Thurston’s work, Dai Fujiwara, fashion designer at Issey Miyake, created his fall-winter 2010-2011 collection, which attempts to illustrate the eight geometries that are sufficient to describe a three-dimensional form. The collection was displayed in Paris in March this year, and Thurston, wearing a designer jacket, attended the show. Here’s an ABC news story on the event. You can watch the interview with Fujiwara and Thurston here:

and the fashion show can be seen here:

As we discussed in the previous post, the Bernoulli brothers Johann and Jakob were both intrigued by the divergence of the harmonic series. Jakob in particular was fascinated by infinite series in general, and he turned his attention to a problem which had been posed by Pietro Mengoli in 1644.

Find the exact sum (and not just an estimate) of the series

\displaystyle \begin{array}{rcl} \sum^{\infty}_{n=1}\frac{1}{n^2}= 1+ \frac{1}{4}+ \frac{1}{9}+ \ldots. \end{array}

In 1655, John Wallis, an English mathematician,  had communicated that he had found the sum to three decimal places but he was unable to say anything more concrete. Jakob Bernoulli wrote about this problem in 1689 and it came to be known as the Basel problem, after the hometown of the Bernoullis as well as that of the eventual solver of the problem, Leonhard Euler. There’s a lot of literature available on this problem and some of what we say below is from William Dunham’s award-winning book titled  Euler: The master of us all and the paper by Raymond Ayoub titled Euler and the zeta function.
It was known that the series converges, but finding the exact sum proved to be remarkably difficult. Many prominent mathematicians, including Leibniz, Mengoli and the Bernoulli brothers, had tried their hand at solving it but had failed. Indeed, after growing increasingly frustrated with his failure, Jakob said the following about this problem [Dunham]:

“If anyone finds and communicates to us that which thus far has eluded our efforts, great will be our gratitude.”

In 1721, a young Euler began studying mathematics under the mentorship of Jakob’s younger brother Johann Bernoulli (who was one of the most prominent mathematicians in the world at that time). In all probability, it was Johann who first told Euler about the Basel problem, but it is unclear exactly when he did so. Nonetheless, by 1728 Euler had started working on the problem. It was around the same time that Daniel Bernoulli (Johann’s son) wrote to Christian Goldbach that he had found an approximate value of the sum of the series (the value he gave was {8/5}). Goldbach replied that he had found that the sum is between {\frac{41}{25}= 1.64} and {\frac{5}{3}\approx 1.67}. One of Euler’s earliest attempts was to find numerical approximations of some of the partial sums of the series, but these were not too helpful. Indeed, the sum of the first thousand terms is {1.643935}, but this is only accurate up to the first two digits (the problem being that the series converges very slowly).
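The slow convergence is easy to see numerically; here is a quick Python sketch (the helper `partial_sum` is illustrative):

```python
import math

def partial_sum(n):
    """n-th partial sum of the Basel series sum 1/k^2."""
    return sum(1.0 / k ** 2 for k in range(1, n + 1))

print(partial_sum(1000))   # about 1.64393: only two digits agree
print(math.pi ** 2 / 6)    # 1.6449340...: the true sum, found by Euler
```

The error after {n} terms is roughly {1/n}, so each additional correct digit costs ten times as many terms.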

We now discuss Euler’s solution to the Basel problem.

Theorem (Euler, 1734) {\sum^{\infty}_{n=1}\frac{1}{n^2}= \frac{\pi^2}{6}}.

This is truly a remarkable result and anyone who sees it for the first time cannot help but be amazed. Euler’s proof is a shining example of his ingenuity and mathematical prowess. In order to prove the result he used the well-known series expansion for {\sin x}:

\displaystyle \begin{array}{rcl} \sin x= x- \frac{x^3}{3!}+ \frac{x^5}{5!}- \frac{x^7}{7!}+\ldots \end{array}

Next, he considered the “infinite polynomial”

\displaystyle \begin{array}{rcl} P(x)= 1- \frac{x^2}{3!}+ \frac{x^4}{5!}- \frac{x^6}{7!}+\ldots \end{array}

and observed that {P(x)= \frac{\sin x}{x}} for {x \neq 0}. Now, the roots of {P} are given by all {x} such that {\sin x= 0} (and {x \neq 0}). This gives us {x= \pm n\pi}, so for each {n \geq 1} there are two roots. Using this observation, Euler factored {P(x)} as follows:

\displaystyle \begin{array}{rcl} 1- \frac{x^2}{3!}+ \frac{x^4}{5!}- \frac{x^6}{7!}+\ldots &=& P(x)\\ &=& \left(1- \frac{x}{\pi}\right)\left(1- \frac{x}{-\pi}\right)\left(1- \frac{x}{2\pi}\right)\left(1- \frac{x}{-2\pi}\right)\ldots\\ &=&\left(1- \frac{x^2}{\pi^2}\right)\left(1- \frac{x^2}{4\pi^2}\right)\left(1- \frac{x^2}{9\pi^2}\right)\left(1- \frac{x^2}{16\pi^2}\right)\ldots \end{array}

The next step illustrates Euler’s foresight and his genius, for he expanded the right-hand side as follows

\displaystyle \begin{array}{rcl} 1- \frac{x^2}{3!}+ \frac{x^4}{5!}- \frac{x^6}{7!}+\ldots= 1- \left(\frac{1}{\pi^2}+ \frac{1}{4\pi^2}+ \frac{1}{9\pi^2}+ \ldots\right)x^2+ \ldots, \end{array}

where the remaining terms in the expansion on the right are not relevant to the problem at hand. Euler then compared the coefficients of {x^2} in the above equation to get

\displaystyle \begin{array}{rcl} \frac{1}{3!}= \left(\frac{1}{\pi^2}+ \frac{1}{4\pi^2}+ \frac{1}{9\pi^2}+ \ldots\right) \end{array}

which gave him the celebrated solution to the Basel problem. This was a triumphant moment for Euler, and it truly established his reputation as the foremost mathematician of that time. While the general reaction to the result was one of amazement, a few remained skeptical. In his proof Euler had performed manipulations on infinite series, treating them as polynomials, and Daniel Bernoulli objected to this approach. Euler himself was not entirely convinced of the rigor of the method, so he devised other proofs justifying the formula. He also looked at the {p}-series {\sum^{\infty}_{n=1}\frac{1}{n^p}} for {p> 2}.

These ideas were used by Bernhard Riemann in the 19th century in studying the Riemann zeta function

\displaystyle \begin{array}{rcl} \zeta(s)= \sum^{\infty}_{n=1}\frac{1}{n^s}, \end{array}

in connection with his investigation of the distribution of primes. In this notation, the solution of the Basel problem reads as {\zeta(2)= \pi^2/6}. Remarkably enough, in 1740 Euler gave a more general formula for calculating {\zeta(2k)} if {k\geq 1} is an integer:

\displaystyle \begin{array}{rcl} \zeta(2k)= \frac{(-1)^{k-1}B_{2k}(2\pi)^{2k}}{2(2k)!}, \end{array}

where {B_{2k}} are the Bernoulli numbers.
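Euler’s formula can be checked numerically. The sketch below (the helper names are mine) computes the Bernoulli numbers exactly from the standard recurrence {\sum_{j=0}^{n} \binom{n+1}{j} B_j= 0} for {n \geq 1} (with the convention {B_1= -1/2}) and then evaluates {\zeta(2)} and {\zeta(4)}:

```python
from fractions import Fraction
from math import comb, factorial, pi

def bernoulli(m):
    """Bernoulli numbers B_0, ..., B_m as exact fractions, via the
    recurrence sum_{j=0}^{n} C(n+1, j) B_j = 0 for n >= 1."""
    B = [Fraction(1)]
    for n in range(1, m + 1):
        B.append(-sum(comb(n + 1, j) * B[j] for j in range(n)) / (n + 1))
    return B

def zeta_even(k):
    """Euler's closed form zeta(2k) = (-1)^(k-1) B_{2k} (2 pi)^{2k} / (2 (2k)!)."""
    b = bernoulli(2 * k)[2 * k]
    return (-1) ** (k - 1) * float(b) * (2 * pi) ** (2 * k) / (2 * factorial(2 * k))

print(zeta_even(1))   # pi^2/6  = 1.6449...: the Basel problem
print(zeta_even(2))   # pi^4/90 = 1.0823...
```

For {k= 1} the formula reduces to {\frac{(1/6)(2\pi)^2}{2\cdot 2!}= \frac{\pi^2}{6}}, recovering the Basel sum.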

Since then a number of modern proofs have popped up. A nice summary of some of the available proofs is given by Robin Chapman in his article titled Evaluating {\zeta(2)}. Interestingly, while we know the sum of the {p}-series when {p} is an even positive integer, not a lot is known about the sum when {p} is odd. As far as I know, there is no known closed-form expression for {\zeta(3)}.

In this post we look at the harmonic series

\displaystyle \begin{array}{rcl} \sum^{\infty}_{n=1}\frac{1}{n}= 1+ \frac{1}{2}+ \frac{1}{3}+\ldots. \end{array}

Students often have difficulty coming to grips with the fact that this series diverges when they encounter it in an elementary calculus course. The result may seem somewhat counter-intuitive at first, since as {n} gets larger, only very small terms get added to the existing sum. The harmonic series serves as an example of a divergent series whose {n}th term tends to zero, thus showing that the converse of the following theorem does not hold in general.

Theorem If a series {\sum^{\infty}_{n=1} a_n} converges then {\lim_{n\rightarrow \infty}a_n= 0}.

How do we show that the harmonic series diverges? One of the standard proofs presented in many textbooks goes like this.
Let {s_n= \sum^{n}_{k=1}\frac{1}{k}} be the {n}-th partial sum. Observe that

\displaystyle \begin{array}{rcl} s_1 &=& 1+ 0\left(\frac{1}{2}\right)\\ s_2 &=& 1+ \frac{1}{2}= 1+ 1\left(\frac{1}{2}\right)\\ s_4 &=& 1+ \frac{1}{2}+ \left(\frac{1}{3}+ \frac{1}{4}\right)\\ & > & 1+ \frac{1}{2}+ \left(\frac{1}{4}+ \frac{1}{4}\right)= 1+ 2\left(\frac{1}{2}\right)\\ s_8 &=& 1+ \frac{1}{2}+ \left(\frac{1}{3}+ \frac{1}{4}\right)+ \left(\frac{1}{5}+ \frac{1}{6}+ \frac{1}{7}+ \frac{1}{8}\right)\\ & > & 1+ \frac{1}{2}+ \left(\frac{1}{4}+ \frac{1}{4}\right)+ \left(\frac{1}{8}+ \frac{1}{8}+ \frac{1}{8}+ \frac{1}{8}\right)= 1+ 3\left(\frac{1}{2}\right), \end{array}

and in general,
\displaystyle \begin{array}{rcl} s_{2^{n}} \geq 1+ \frac{n}{2} \end{array}

This implies that {\lim_{n\rightarrow \infty}s_{2^{n}}= \infty} and hence the harmonic series diverges (why?). This proof (which was given by Oresme around 1350) is fairly simple and easy for most students to understand, and it makes you wonder if there are other proofs out there which are more “hands-on” and which use only techniques that first-year calculus students can understand. It turns out that there are at least thirty-nine proofs of the divergence of the harmonic series, which can be found in the excellent articles by Kifowit and Stamps titled The Harmonic Series Diverges Again and Again and by Kifowit titled More Proofs of the Divergence of the Harmonic Series.
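Oresme’s bound is easy to verify numerically for small {n}; here is a small Python sketch (the helper `harmonic` is illustrative):

```python
def harmonic(n):
    """n-th partial sum of the harmonic series."""
    return sum(1.0 / k for k in range(1, n + 1))

# Oresme's grouping bound: s_{2^n} >= 1 + n/2.
for n in range(1, 15):
    assert harmonic(2 ** n) >= 1 + n / 2

print(harmonic(2 ** 14))   # about 10.28: the growth is only logarithmic
```

Note how slowly the partial sums grow: sixteen thousand terms barely exceed 10, yet the bound guarantees they eventually exceed any number.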

A proof by Johann Bernoulli

In this section we will discuss a remarkable proof of the divergence of the harmonic series which first appeared in the work of Jakob Bernoulli who generously attributed it to his brother Johann Bernoulli. For a fascinating account of the history behind this proof and more, please look at the following article:

William Dunham, The Bernoullis and the harmonic series, College Mathematics Journal, 18 (1987), 18-23.

Before looking at Bernoulli’s proof, let’s put things in perspective a little bit. The proof was devised around 150 years before the concept of convergence of series was made precise (by Cauchy) via the notion of convergence of partial sums. It is therefore understandable that Bernoulli looked at infinite series in a somewhat “naive” way (at least compared to the modern notion of convergence of a series). Nonetheless, the proof is quite refreshing and ingenious because it is based on, strangely enough, the convergence of the series {\sum^{\infty}_{n=1}\frac{1}{n(n+1)}}. In what follows, we present a modern version of Bernoulli’s proof given in the aforementioned article by Kifowit and Stamps.
The series

\displaystyle \begin{array}{rcl} \frac{1}{2}+ \frac{1}{6}+ \frac{1}{12}+ \frac{1}{20}+\ldots= \sum^{\infty}_{n=1}\frac{1}{n(n+1)}= \sum^{\infty}_{n=1}\left(\frac{1}{n}- \frac{1}{n+1}\right) \end{array}

is a convergent telescoping series whose sum is 1. In a similar manner, it can be shown that {\sum^{\infty}_{n=k}\frac{1}{n(n+1)}= \frac{1}{k}} for {k=1, 2, 3, \ldots}. Bernoulli’s proof is by contradiction. Suppose that the harmonic series converges, and let its sum be {S}. Then,
\displaystyle \begin{array}{rcl} S &=& 1+ \frac{1}{2}+ \frac{1}{3}+ \frac{1}{4}+ \frac{1}{5}+ \frac{1}{6}+ \frac{1}{7}+ \frac{1}{8}+ \ldots\\ &=& 1+ \frac{1}{2}+ \frac{2}{6}+ \frac{3}{12}+ \frac{4}{20}+ \frac{5}{30}+ \frac{6}{42}+ \frac{7}{56}+ \ldots\\ &=& 1+ \left(\frac{1}{2}+ \frac{1}{6}+ \frac{1}{12}+ \frac{1}{20}+ \ldots\right)+ \left(\frac{1}{6}+ \frac{1}{12}+ \frac{1}{20}+ \frac{1}{30}+ \ldots\right)\\ &+& \left(\frac{1}{12}+ \frac{1}{20}+ \frac{1}{30}+ \frac{1}{42}+\ldots\right)+ \ldots\\ &=& 1+ 1+ \frac{1}{2}+ \frac{1}{3}+ \ldots\\ &=& 1+ S. \end{array}

This final statement {S= 1+ S} is a contradiction. Thus the harmonic series diverges.
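The telescoping identity that drives the proof can be checked with exact rational arithmetic; here is a quick Python sketch (the helper `tail` is illustrative). For a finite tail, {\sum_{n=k}^{N}\frac{1}{n(n+1)}= \frac{1}{k}- \frac{1}{N+1}}, so the infinite sum from {n= k} is exactly {1/k}:

```python
from fractions import Fraction

def tail(k, N):
    """Exact partial sum of sum_{n=k}^{N} 1/(n(n+1))."""
    return sum(Fraction(1, n * (n + 1)) for n in range(k, N + 1))

# The telescoped value is 1/k - 1/(N+1), exactly.
for k in (1, 2, 3):
    assert tail(k, 1000) == Fraction(1, k) - Fraction(1, 1001)

print(tail(1, 1000) + Fraction(1, 1001))   # exactly 1
```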