
Section Power Series

From the first semester of calculus, we know that the best linear (\(1^{st}\) degree polynomial) approximation to the function \(f(x) = e^x\) at the point \(P=(2,e^2)\) is the tangent line to the function at this point. Let's call that approximation \(L\) and note that \(L(2) = f(2)\) and \(L'(2) = f'(2)\text{.}\) That is, \(L\) and \(f\) intersect at \(P\text{,}\) and \(L\) and \(f\) share the same derivative at \(P\text{.}\) What is the best quadratic (\(2^{nd}\) degree polynomial) approximation? If the best first degree approximation to the curve agrees with the function at the point and in the first derivative, then the best second degree approximation should agree with the function at the point, in the first derivative, and in the second derivative.
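In other words, if we write the best quadratic approximation at \(x=2\) as \(Q(x) = ax^2+bx+c\text{,}\) we are asking for the three conditions

\begin{equation*} Q(2) = f(2), \qquad Q'(2) = f'(2), \qquad Q''(2) = f''(2), \end{equation*}

which give three equations in the three unknowns \(a\text{,}\) \(b\text{,}\) and \(c\text{.}\)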

Example 6.73.

Compute best linear and quadratic approximations to \(f(x) = x^3-16x\) at \(x=1\text{.}\)

Problem 6.74.

Let \(f(x) = e^x.\) Find the best linear approximation, \(L(x) = mx+ b,\) to \(f\) at \((2,e^2).\) Find the best quadratic approximation, \(Q(x) = ax^2 + bx +c,\) to \(f\) at \((2,e^2).\) Graph all three functions on the same pair of coordinate axes.

Problem 6.75.

Let \(f(x) = \cos(x).\) Find the best linear (L, \(1^{st}\) degree), quadratic (Q, \(2^{nd}\) degree), and quartic (C, \(4^{th}\) degree) approximations to \(f\) at \((\pi, \cos(\pi)).\) Sketch the graph of \(f\) and all three approximations on the same pair of coordinate axes. Compute and compare \(\dsp f(\frac{3\pi}{2})\text{,}\) \(\dsp L(\frac{3\pi}{2})\text{,}\) \(\dsp Q(\frac{3\pi}{2})\text{,}\) and \(\dsp C(\frac{3\pi}{2}).\)

From our work on geometric series, we know that

\begin{equation*} \displaystyle{\sum_{n=0}^{\infty} t^n = \frac{1}{1-t}} \end{equation*}

for \(|t| \lt 1\text{.}\) Therefore the function \(\dsp g(t) = \frac{1}{1-t} \mbox{ where } |t| \lt 1\) can be written as a series,

\begin{equation*} \displaystyle{g(t) = \frac{1}{1-t} = 1 + t + t^2 + t^3 + \cdots = \sum_{n=0}^{\infty} t^n}. \end{equation*}

Replacing \(t\) by \(-x^2\text{,}\) we have

\begin{equation*} \dsp \frac{1}{1+x^2}=1-x^2+x^4- \cdots + (-1)^{n-1} x^{2n-2} + \cdots. \end{equation*}
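In sigma notation, and keeping track of where the substitution is valid (\(|-x^2| \lt 1\) exactly when \(|x| \lt 1\)), this says

\begin{equation*} \frac{1}{1+x^2} = \sum_{n=0}^{\infty} (-1)^n x^{2n} \mbox{ for } |x| \lt 1. \end{equation*}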

Based on these two observations, we see that we can write at least two rational functions as infinite series (or infinite polynomials). And we can write every polynomial as an infinite series, since

\begin{equation*} f(x) = a_0 + a_1x + a_2x^2 + \cdots + a_Nx^N = \sum_{i=0}^{\infty} a_ix^i \end{equation*}

where \(a_i = 0\) for \(i > N\text{.}\) Our goal is a systematic way to write any infinitely differentiable function (like \(\sin\) or \(\cos\) or \(\ln\)) as an infinite series.

Definition 6.76.

\(0^0 = 1\)

Notation. To keep from confusing the powers of \(f\) with the derivatives of \(f\text{,}\) we use \(f^{(n)}\) to represent \(f\) if \(n=0\) and the \(n^{th}\) derivative of \(f\) if \(n \ge 1.\) Therefore when \(n \geq 1\text{,}\) \(f^{(n)}(c)\) means the \(n^{th}\) derivative of \(f\) evaluated at the number \(c\text{.}\)
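For example, if \(f(x) = x^3\text{,}\) then \(f^{(0)}(x) = x^3\text{,}\) \(f^{(1)}(x) = 3x^2\text{,}\) \(f^{(2)}(x) = 6x\text{,}\) and \(f^{(2)}(2) = 12\text{.}\)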

Problem 6.77.

Let \(\displaystyle{f(x) = \sum_{i=0}^{\infty} a_ix^i}.\) Write out \(S_6\text{,}\) the \(6^{th}\) partial sum of this series. Compute \(S_6'\) and \(S_6''\text{.}\) Compute \(S_6'(0)\) and \(S_6''(0).\)

Problem 6.78.

Let \(\displaystyle{f(x) = \sum_{i=0}^{\infty} a_ix^i}.\) Compute \(f'\) and \(f''.\) Compute \(f(0), f'(0), f''(0), \dots\text{.}\) If \(n\) is any positive integer, what is the \(n^{th}\) derivative at \(0\text{,}\) \(f^{(n)}(0)\text{?}\)

Definition 6.79.

If \(f\) is a function with \(N\) derivatives, then the \(N^{th}\) degree Taylor polynomial of \(f\) expanded at \(c\) is defined by

\begin{equation*} T_N(x) = \sum_{n=0}^{N} \frac{f^{(n)}(c)}{n!}(x-c)^n. \end{equation*}
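Note that the case \(N=1\) recovers the tangent line approximation from the start of this section, since

\begin{equation*} T_1(x) = f(c) + f'(c)(x-c). \end{equation*}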
Example 6.80.

Compute the Taylor polynomials of degree 1, 2, 3, and 4 for \(f(x) = 1/x\) expanded at 1. Then write the \(N^{th}\) degree Taylor polynomial for the same function expanded at 1.

Problem 6.81.

Suppose that \(\dsp T(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(c)}{n!}(x-c)^n.\) Write out the first few terms of this series and compute \(T'(c)\text{,}\) \(T''(c)\text{,}\) \(T'''(c)\text{,}\) ….

Definition 6.82.

The series in the last problem is called the Taylor Series for \(f\) expanded at \(c\text{.}\) When \(c=0\text{,}\) this is called the Maclaurin Series for \(f\text{.}\)
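Written out for the case \(c=0\text{,}\) the Maclaurin series for \(f\) is

\begin{equation*} \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\,x^n = f(0) + f'(0)\,x + \frac{f''(0)}{2!}\,x^2 + \frac{f'''(0)}{3!}\,x^3 + \cdots. \end{equation*}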

Problem 6.83.

Let \(f(x) = e^x.\) Compute the first, second, and third degree Taylor polynomials for \(f\) expanded at 2. Compare to the result of problem 6.74.

Example 6.84.

Compute the Taylor series for \(f(x)=1/x\) at \(c=1\) and use the Ratio Test to determine the interval of convergence. Also check convergence at both endpoints of the interval of convergence.

Problem 6.85.

For each function, write the infinite degree Taylor polynomial as a series.

  1. \(f(x) = e^x\) expanded at 0

  2. \(f(x) = \sin(x)\) expanded at 0

Problem 6.86.

Compute the Taylor series for \(\dsp f(x) = \frac{\cos(x)}{x}\) expanded at 0 by first computing the Taylor series for \(\cos(x)\) expanded at \(x=0\) and then multiplying this entire series by \(1/x\text{.}\)

Problem 6.87.

Let \(f(x) = \ln(x).\) Compute the first, second, and third degree Taylor polynomial for \(f\) expanded at 1. Compute \(|f(2.5) - T_N(2.5)|\) for \(N=1,2,3.\)

We can now write certain functions as infinite series, but we know that not every infinite series converges. Therefore, our expressions only make sense for the values of \(x\) for which the series converges. For this reason, every time we write down an infinite series to represent a function, we need to determine the values of \(x\) for which the series converges. This set is called the interval of convergence (or domain) of the series.
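The geometric series already illustrates why this matters. At \(t=2\) the function \(g(t) = \frac{1}{1-t}\) has the value \(-1\text{,}\) but the series

\begin{equation*} \sum_{n=0}^{\infty} 2^n = 1 + 2 + 4 + 8 + \cdots \end{equation*}

diverges, so the series represents \(g\) only on the interval \(|t| \lt 1\text{.}\)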

Definition 6.88.

A power series expanded at \(c\) is any series of the form \(\displaystyle{\sum_{n=0}^{\infty} a_n(x-c)^n}\text{.}\) Taylor and Maclaurin series are special cases of power series. The largest interval on which a power series converges is called its interval of convergence. Half the length of this interval is called the radius of convergence.

Problem 6.89.

Use the ratio test to find the interval and radius of convergence for each power series.

  1. \(\dsp \sum_{n=0}^{\infty} \frac{x^n}{n+1}\)

  2. \(\dsp \sum_{n=0}^{\infty} \frac{x^n}{n!}\)

  3. \(\dsp \sum_{n=0}^{\infty} n!(x+2)^n\)

The next theorem says that the interval of convergence of a power series expanded at the point \(c\) is either (i) only the one point, \(c\) (bad, since this means the series is useless), (ii) an interval of radius \(R\) centered at \(c,\) (good, and the endpoints might be contained in the domain), or (iii) all real numbers (very good).
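For a quick illustration of case (ii) and of the endpoint issue, both of the following series are expanded at \(c=0\) and have radius of convergence \(1\text{,}\) yet

\begin{equation*} \sum_{n=0}^{\infty} x^n \mbox{ has interval of convergence } (-1,1), \mbox{ while } \sum_{n=1}^{\infty} \frac{x^n}{n^2} \mbox{ has interval of convergence } [-1,1]. \end{equation*}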

When we write the power series for a function (like \(e^x\)) and it converges for all values of \(x\) then we simply have a different way to write the function that turns out to be both computationally useful, because of convergence, and theoretically useful. Speaking loosely, \(e^x\) is just an infinite polynomial — how cool is that?!

Problem 6.91.

Find the interval of convergence for each of the following power series.

  1. \(\dsp \sum_{n=0}^{\infty} \frac{(x-3)^n}{\ln(n+2)}\)

  2. \(\dsp \sum_{n=1}^{\infty} \frac{(-x)^n}{n}\)

  3. \(\dsp \sum_{n=0}^{\infty} \frac{\log(n+1)}{(n+1)^4} (x+1)^n\)

  4. \(\dsp \sum_{n=0}^{\infty} \frac{n^3}{3^n} (x+2)^n\)

The following theorem says that if you have a power series, then its derivative is exactly what you want it to be! Just take the derivative of the series term by term and the new infinite series you get is the derivative of the first and is defined on the same domain.
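In symbols, and speaking only loosely since the theorem itself states the hypotheses precisely, if \(\dsp f(x)=\sum_{n=0}^{\infty} a_n(x-c)^n\) on its interval of convergence, then

\begin{equation*} f'(x) = \sum_{n=1}^{\infty} n\,a_n(x-c)^{n-1}, \end{equation*}

and the differentiated series has the same radius of convergence as the original.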

Problem 6.93.

Suppose that \(\displaystyle{f(x)=\sum_{n=0}^{\infty} a_n(x-c)^n}\) with radius of convergence \(r > 0\text{.}\) Show that \(\dsp a_n\) must equal \(\dsp \frac{f^{(n)}(c)}{n!}\) for \(n = 0, 1, 2,\) and \(3.\)

How does the calculator on your phone approximate numbers like \(\sqrt{5}\) or \(\sin(\pi/7)\text{?}\) It uses Taylor series. For example, if we want to approximate \(\sqrt{5}\) accurate to 3 decimal places, we could compute the Taylor series for \(f(x) = \sqrt{x}\) expanded about \(c=4\text{,}\) because 4 is the integer closest to 5 whose square root we know exactly. Then we can use this polynomial to approximate \(\sqrt{5}.\) But there's a problem. While the Taylor series approximates the function, it's pretty hard to store an infinite polynomial in a phone. So, how can we determine what degree Taylor polynomial we need to compute in order to ensure a given accuracy? We look at the remainder of the series. For any positive integer \(N\text{,}\) \(f\) can be rewritten as the sum of the Taylor polynomial of degree \(N\) plus the remainder of the series by writing

\begin{equation*} f(x) = \sum_{n=0}^N \frac{f^{(n)}(c)}{n!} (x-c)^n + \sum_{n=N+1}^\infty \frac{f^{(n)}(c)}{n!} (x-c)^n \end{equation*}

or

\begin{equation*} f(x) = T_N(x) + R_N(x), \end{equation*}

where

\begin{equation*} T_N(x) = \sum_{n=0}^N \frac{f^{(n)}(c)}{n!} (x-c)^n \mbox{ is the } N^{th} \mbox{ degree Taylor polynomial} \end{equation*}

and

\begin{equation*} R_N(x)=\sum_{n=N+1}^\infty \frac{f^{(n)}(c)}{n!}(x-c)^n \mbox{ is called the } N^{th} \mbox{ degree Taylor remainder} . \end{equation*}

Since \(f(x) = T_N(x) + R_N(x)\) for all \(x\) in the interval of convergence, the error for any \(x\) in the interval of convergence is just \(| f(x) - T_N(x)| = |R_N(x)|.\) If we could just approximate the size of the remainder term, we would know how accurate the \(N^{th}\) degree Taylor polynomial is. The next theorem says that the remainder, even though it is an infinite sum, can be written completely in terms of the \((N+1)^{st}\) derivative, so if we can find a bound for the \((N+1)^{st}\) derivative then we can get a bound on the error of the Taylor polynomial.
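In its standard (Lagrange) form, which is presumably what Theorem 6.94 asserts, this reads: for each \(x\) there is a number \(z\) between \(c\) and \(x\) so that

\begin{equation*} R_N(x) = \frac{f^{(N+1)}(z)}{(N+1)!}(x-c)^{N+1}, \end{equation*}

so if \(|f^{(N+1)}(z)| \le M\) for all \(z\) between \(c\) and \(x\text{,}\) then \(\dsp |R_N(x)| \le \frac{M}{(N+1)!}|x-c|^{N+1}\text{.}\)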

Problem 6.95.

Let \(f(x) = \sqrt{x}\) and compute \(T_3(x)\text{,}\) the third degree Taylor polynomial for \(f\) expanded at 4. Estimate the error \(|f(5)-T_3(5)|\) by using Theorem 6.94 to find the maximum of \(|R_3(5)|\text{.}\) Now use your calculator to compute \(|f(5)-T_3(5)|\) and see how this compares to your error estimate.

Problem 6.96.

Let \(f(x) = \sin(x)\) and compute \(T_3(x)\text{,}\) the third degree Maclaurin polynomial for \(f\text{.}\) Estimate the error \(|f(\frac{\pi}{7})-T_3(\frac{\pi}{7})|\) by using Theorem 6.94 to find the maximum of \(|R_3(\frac{\pi}{7})|\text{.}\) Now use your calculator to compute \(|f(\frac{\pi}{7})-T_3(\frac{\pi}{7})|\) and see how this compares to your error estimate.