# Section 8.5 Taylor Polynomials and Taylor Series

##### Motivating Questions
• What is a Taylor polynomial? For what purposes are Taylor polynomials used?

• What is a Taylor series?

• How are Taylor polynomials and Taylor series different? How are they related?

• How do we determine the accuracy when we use a Taylor polynomial to approximate a function?

In our work to date in Chapter 8, essentially every sum we have considered has been a sum of numbers. In particular, each infinite series that we have discussed has been a series of real numbers, such as

$$1 + \frac{1}{2} + \frac{1}{4} + \cdots + \frac{1}{2^k} + \cdots = \sum_{k=0}^{\infty} \frac{1}{2^k} \text{.}\tag{8.5.1}$$

In the remainder of this chapter, we will expand our notion of series to include series that involve a variable, say $x\text{.}$ For instance, if in the geometric series in Equation (8.5.1) we replace the ratio $r = \frac{1}{2}$ with the variable $x\text{,}$ then we have the infinite (still geometric) series

$$1 + x + x^2 + \cdots + x^k + \cdots = \sum_{k=0}^{\infty} x^k \text{.}\tag{8.5.2}$$

Here we see something very interesting: since a geometric series converges whenever its ratio $r$ satisfies $|r|\lt 1\text{,}$ and the sum of a convergent geometric series is $\frac{a}{1-r}\text{,}$ we can say that for $|x| \lt 1\text{,}$

$$1 + x + x^2 + \cdots + x^k + \cdots = \frac{1}{1-x} \text{.}\tag{8.5.3}$$

Note well what Equation (8.5.3) states: the non-polynomial function $\frac{1}{1-x}$ on the right is equal to the infinite polynomial expression on the left. Moreover, it appears natural to truncate the infinite sum on the left (whose terms get very small as $k$ gets large) and say, for example, that

\begin{equation*} 1 + x + x^2 + x^3 \approx \frac{1}{1-x} \end{equation*}

for small values of $x\text{.}$ This shows one way that a polynomial function can be used to approximate a non-polynomial function; such approximations are one of the main themes in this section and the next.
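As a quick numerical check (our own illustration, not part of the text), we can compare the cubic truncation $1 + x + x^2 + x^3$ to $\frac{1}{1-x}$ for a few small values of $x\text{:}$

```python
# Our own sketch: compare the cubic truncation 1 + x + x^2 + x^3 of the
# geometric series to 1/(1 - x) for several small values of x.

def geom_truncation(x, n=3):
    """Sum of the first n + 1 terms of the geometric series 1 + x + x^2 + ..."""
    return sum(x**k for k in range(n + 1))

for x in (0.1, 0.25, 0.5):
    exact = 1 / (1 - x)
    approx = geom_truncation(x)
    print(f"x = {x}: 1 + x + x^2 + x^3 = {approx:.6f}, 1/(1-x) = {exact:.6f}")
```

The error of the truncation is exactly the tail $\sum_{k=4}^{\infty} x^k = \frac{x^4}{1-x}\text{,}$ which is small when $x$ is close to 0.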

In Preview Activity 8.5.1, we begin our explorations of approximating non-polynomial functions with polynomials, from which we will also develop ideas regarding infinite series that involve a variable, $x\text{.}$

##### Preview Activity 8.5.1

Preview Activity 8.3.1 showed how we can approximate the number $e$ using linear, quadratic, and other polynomial functions; we then used similar ideas in Preview Activity 8.4.1 to approximate $\ln(2)\text{.}$ In this activity, we review and extend the process to find the “best” quadratic approximation to the exponential function $e^x$ around the origin. Let $f(x) = e^x$ throughout this activity.

1. Find a formula for $P_1(x)\text{,}$ the linearization of $f(x)$ at $x=0\text{.}$ (We label this linearization $P_1$ because it is a first degree polynomial approximation.) Recall that $P_1(x)$ is a good approximation to $f(x)$ for values of $x$ close to $0\text{.}$ Plot $f$ and $P_1$ near $x=0$ to illustrate this fact.

2. Since $f(x) = e^x$ is not linear, the linear approximation eventually is not a very good one. To obtain better approximations, we want to develop a different approximation that “bends” to make it more closely fit the graph of $f$ near $x=0\text{.}$ To do so, we add a quadratic term to $P_1(x)\text{.}$ In other words, we let

\begin{equation*} P_2(x) = P_1(x) + c_2x^2 \end{equation*}

for some real number $c_2\text{.}$ We need to determine the value of $c_2$ that makes the graph of $P_2(x)$ best fit the graph of $f(x)$ near $x=0\text{.}$

Remember that $P_1(x)$ was a good linear approximation to $f(x)$ near $0\text{;}$ this is because $P_1(0) = f(0)$ and $P'_1(0) = f'(0)\text{.}$ It is therefore reasonable to seek a value of $c_2$ so that

\begin{align*} P_2(0) \amp = f(0)\text{,} \amp P'_2(0) \amp = f'(0)\text{,} \amp \text{and }P''_2(0) \amp = f''(0)\text{.} \end{align*}

Remember, we are letting $P_2(x) = P_1(x) + c_2x^2\text{.}$

1. Calculate $P_2(0)$ to show that $P_2(0) = f(0)\text{.}$

2. Calculate $P'_2(0)$ to show that $P'_2(0) = f'(0)\text{.}$

3. Calculate $P''_2(x)\text{.}$ Then find a value for $c_2$ so that $P''_2(0) = f''(0)\text{.}$

4. Explain why the condition $P''_2(0) = f''(0)$ will put an appropriate “bend” in the graph of $P_2$ to make $P_2$ fit the graph of $f$ around $x=0\text{.}$

# Subsection 8.5.1 Taylor Polynomials

Preview Activity 8.5.1 illustrates the first steps in the process of approximating complicated functions with polynomials. Using this process we can approximate trigonometric, exponential, logarithmic, and other nonpolynomial functions as closely as we like (for certain values of $x$) with polynomials. This is extraordinarily useful in that it allows us to calculate values of these functions to whatever precision we like using only the operations of addition, subtraction, multiplication, and division, which are operations that can be easily programmed in a computer.
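For instance, here is a minimal sketch (our own, not from the text) of how a value of the exponential function can be computed using only arithmetic, by accumulating the terms $\frac{x^k}{k!}$ of a Taylor polynomial:

```python
import math  # math.e is used only to check the arithmetic-only approximation

def exp_taylor(x, n):
    """Approximate e^x by the degree-n Taylor polynomial sum_{k=0}^{n} x^k / k!,
    using only addition, multiplication, and division."""
    term, total = 1.0, 1.0              # the k = 0 term
    for k in range(1, n + 1):
        term *= x / k                   # turns x^(k-1)/(k-1)! into x^k/k!
        total += term
    return total

print(exp_taylor(1.0, 10), math.e)      # the two values agree closely
```

Note that each new term is obtained from the previous one by a single multiplication and division, so no powers or factorials ever need to be computed from scratch.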

We next extend the approach in Preview Activity 8.5.1 to arbitrary functions at arbitrary points. Let $f$ be a function that has as many derivatives at a point $x=a$ as we need. Since first learning it in Section 1.8, we have regularly used the linear approximation $P_1(x)$ to $f$ at $x=a\text{,}$ which in one sense is the best linear approximation to $f$ near $a\text{.}$ Recall that $P_1(x)$ is the tangent line to $f$ at $(a,f(a))$ and is given by the formula

\begin{equation*} P_1(x) = f(a) + f'(a)(x-a) \text{.} \end{equation*}

If we proceed as in Preview Activity 8.5.1, we then want to find the best quadratic approximation

\begin{equation*} P_2(x) = P_1(x) + c_2(x-a)^2 \end{equation*}

so that $P_2(x)$ more closely models $f(x)$ near $x=a\text{.}$ Consider the following calculations of the values and derivatives of $P_2(x)\text{:}$

\begin{align*} P_2(x) \amp = P_1(x) + c_2(x-a)^2 \amp P_2(a) \amp = P_1(a) = f(a)\\ P'_2(x) \amp = P'_1(x) + 2c_2(x-a) \amp P'_2(a) \amp = P'_1(a) = f'(a)\\ P''_2(x) \amp = 2c_2 \amp P''_2(a) \amp = 2c_2\text{.} \end{align*}

To make $P_2(x)$ fit $f(x)$ better than $P_1(x)\text{,}$ we want $P_2(x)$ and $f(x)$ to have the same concavity at $x=a\text{.}$ That is, we want to have

\begin{equation*} P''_2(a) = f''(a) \text{.} \end{equation*}

This implies that

\begin{equation*} 2c_2 = f''(a) \end{equation*}

and thus

\begin{equation*} c_2 = \frac{f''(a)}{2} \text{.} \end{equation*}

Therefore, the quadratic approximation $P_2(x)$ to $f$ centered at $x=a$ is

\begin{equation*} P_2(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 \text{.} \end{equation*}
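To make this concrete, here is a small sketch (our own illustration) of this quadratic approximation for $f(x) = e^x$ at $a = 0\text{,}$ where $f(0) = f'(0) = f''(0) = 1$ gives $P_2(x) = 1 + x + \frac{x^2}{2}\text{:}$

```python
import math  # math.exp provides the "true" values for comparison

def P2(x):
    """Quadratic Taylor approximation to e^x centered at a = 0:
    P_2(x) = f(0) + f'(0) x + (f''(0)/2) x^2 = 1 + x + x^2/2."""
    return 1 + x + x**2 / 2

for x in (0.1, 0.5, 1.0):
    print(f"x = {x}: P_2(x) = {P2(x):.5f}, e^x = {math.exp(x):.5f}")
```

The approximation is excellent near 0 and degrades as $x$ moves away from the center, exactly as the discussion above suggests.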

This approach extends naturally to polynomials of higher degree. In this situation, we define polynomials

\begin{align*} P_3(x) \amp = P_2(x) + c_3(x-a)^3\text{,}\\ P_4(x) \amp = P_3(x) + c_4(x-a)^4\text{,}\\ P_5(x) \amp = P_4(x) + c_5(x-a)^5\text{,} \end{align*}

and so on, with the general one being

\begin{equation*} P_n(x) = P_{n-1}(x) + c_n(x-a)^n \text{.} \end{equation*}

The defining property of these polynomials is that for each $n\text{,}$ $P_n(x)$ and its first $n$ derivatives must agree with $f$ and its first $n$ derivatives at $x = a\text{.}$ In other words we require that

\begin{equation*} P^{(k)}_n(a) = f^{(k)}(a) \end{equation*}

for all $k$ from 0 to $n\text{.}$

To see the conditions under which this happens, suppose

\begin{equation*} P_n(x) = c_0 + c_1(x-a) + c_2(x-a)^2 + \cdots + c_n(x-a)^n \text{.} \end{equation*}

Then

\begin{align*} P^{(0)}_n(a) \amp = c_0\\ P^{(1)}_n(a) \amp = c_1\\ P^{(2)}_n(a) \amp = 2c_2\\ P^{(3)}_n(a) \amp = (2)(3)c_3\\ P^{(4)}_n(a) \amp = (2)(3)(4)c_4\\ P^{(5)}_n(a) \amp = (2)(3)(4)(5)c_5 \end{align*}

and, in general,

\begin{equation*} P^{(k)}_n(a) = (2)(3)(4) \cdots (k-1)(k)c_k = k!c_k \text{.} \end{equation*}

So having $P^{(k)}_n(a) = f^{(k)}(a)$ means that $k!c_k = f^{(k)}(a)$ and therefore

\begin{equation*} c_k = \frac{f^{(k)}(a)}{k!} \end{equation*}

for each value of $k\text{.}$ In this expression for $c_k\text{,}$ we have found the formula for the degree $n$ polynomial approximation of $f$ that we seek.

##### Taylor Polynomials

The $n$th order Taylor polynomial of $f$ centered at $x = a$ is given by

\begin{align*} P_n(x) =\mathstrut \amp f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n\\ =\mathstrut \amp \sum_{k=0}^n \frac{f^{(k)}(a)}{k!}(x-a)^k\text{.} \end{align*}

This degree $n$ polynomial approximates $f(x)$ near $x=a$ and has the property that $P_n^{(k)}(a) = f^{(k)}(a)$ for $k = 0 \ldots n\text{.}$
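The boxed formula translates directly into a short computation. The sketch below (our own, not from the text) evaluates $P_n(x)$ from a supplied list of derivative values $f^{(k)}(a)\text{;}$ the sample list of all 1's, corresponding to $f(x) = e^x$ at $a = 0\text{,}$ is an illustrative choice.

```python
import math

def taylor_poly(derivs, a, x):
    """Evaluate P_n(x) = sum_{k=0}^{n} f^(k)(a)/k! * (x - a)^k,
    where derivs[k] holds the derivative value f^(k)(a)."""
    return sum(d / math.factorial(k) * (x - a) ** k
               for k, d in enumerate(derivs))

# For f(x) = e^x at a = 0, every derivative equals 1, so P_5(1) approximates e.
print(taylor_poly([1.0] * 6, 0.0, 1.0), math.e)
```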

##### Example 8.5.1

Determine the third order Taylor polynomial for $f(x) = e^x\text{,}$ as well as the general $n$th order Taylor polynomial for $f$ centered at $x=0\text{.}$

##### Activity 8.5.2

We have just seen that the $n$th order Taylor polynomial centered at $a = 0$ for the exponential function $e^x$ is

\begin{equation*} \sum_{k=0}^{n} \frac{x^k}{k!} \text{.} \end{equation*}

In this activity, we determine small order Taylor polynomials for several other familiar functions, and look for general patterns that will help us find the Taylor series expansions a bit later.

1. Let $f(x) = \frac{1}{1-x}\text{.}$

1. Calculate the first four derivatives of $f(x)$ at $x=0\text{.}$ Then find the fourth order Taylor polynomial $P_4(x)$ for $\frac{1}{1-x}$ centered at 0.

2. Based on your results from part (i), determine a general formula for $f^{(k)}(0)\text{.}$

2. Let $f(x) = \cos(x)\text{.}$

1. Calculate the first four derivatives of $f(x)$ at $x=0\text{.}$ Then find the fourth order Taylor polynomial $P_4(x)$ for $\cos(x)$ centered at 0.

2. Based on your results from part (i), find a general formula for $f^{(k)}(0)\text{.}$ (Think about how $k$ being even or odd affects the value of the $k$th derivative.)

3. Let $f(x) = \sin(x)\text{.}$

1. Calculate the first four derivatives of $f(x)$ at $x=0\text{.}$ Then find the fourth order Taylor polynomial $P_4(x)$ for $\sin(x)$ centered at 0.

2. Based on your results from part (i), find a general formula for $f^{(k)}(0)\text{.}$ (Think about how $k$ being even or odd affects the value of the $k$th derivative.)

It is possible that an $n$th order Taylor polynomial is not a polynomial of degree $n\text{;}$ that is, the order of the approximation can be different from the degree of the polynomial. For example, from the work in Activity 8.5.2 we can see that the second order Taylor polynomial $P_2(x)$ centered at 0 for $\sin(x)$ is $P_2(x) = x\text{.}$ In this case, the second order Taylor polynomial is a degree 1 polynomial.

# Subsection 8.5.2 Taylor Series

In Activity 8.5.2 we saw that the fourth order Taylor polynomial $P_4(x)$ for $\sin(x)$ centered at 0 is

\begin{equation*} P_4(x) = x - \frac{x^3}{3!} \text{.} \end{equation*}

The pattern we found for the derivatives $f^{(k)}(0)$ describes the higher-order Taylor polynomials, e.g.,

\begin{align*} P_5(x) \amp= x - \frac{x^3}{3!} + \frac{x^5}{5!}\text{,}\\ P_7(x) \amp= x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!}\text{,}\\ P_9(x) \amp= x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!}\text{,} \end{align*}

and so on. It is instructive to consider the graphical behavior of these functions; Figure 8.5.2 shows the graphs of a few of the Taylor polynomials centered at 0 for the sine function.

Notice that $P_1(x)$ is close to the sine function only for values of $x$ near 0, but as the order increases, the Taylor polynomials fit the graph of the sine function over larger and larger intervals. This illustrates the general behavior of Taylor polynomials: for any sufficiently well-behaved function, the sequence $\{P_n(x)\}$ of Taylor polynomials converges to the function $f$ on larger and larger intervals (though those intervals may not necessarily increase without bound). If the Taylor polynomials ultimately converge to $f$ on its entire domain, we write

\begin{equation*} f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(x-a)^k \text{.} \end{equation*}
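This convergence on growing intervals can be observed numerically. The sketch below (our own illustration) measures how far several odd-order Taylor polynomials for $\sin(x)$ are from the sine at $x = 3\text{:}$

```python
import math

def sin_taylor(x, order):
    """Odd-order Taylor polynomial for sin centered at 0:
    sum_{k=0}^{order // 2} (-1)^k x^(2k+1) / (2k+1)!."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(order // 2 + 1))

for order in (1, 5, 9):
    err = abs(sin_taylor(3.0, order) - math.sin(3.0))
    print(f"order {order}: |P_n(3) - sin(3)| = {err:.6f}")
```

The errors shrink rapidly as the order grows, matching the graphical behavior described above.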
##### Definition 8.5.3

Let $f$ be a function all of whose derivatives exist at $x=a\text{.}$ The Taylor series for $f$ centered at $x=a$ is the series $T_f(x)$ defined by

\begin{equation*} T_f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(x-a)^k \text{.} \end{equation*}

In the special case where $a=0$ in Definition 8.5.3, the Taylor series is also called the Maclaurin series for $f\text{.}$ From Example 8.5.1 we know the $n$th order Taylor polynomial centered at 0 for the exponential function $e^x\text{;}$ thus, the Maclaurin series for $e^x$ is

\begin{equation*} \sum_{k=0}^{\infty} \frac{x^k}{k!} \text{.} \end{equation*}
##### Activity 8.5.3

In Activity 8.5.2 we determined small order Taylor polynomials for a few familiar functions, and also found general patterns in the derivatives evaluated at 0. Use that information to write the Taylor series centered at 0 for the following functions.

1. $f(x) = \frac{1}{1-x}$

2. $f(x) = \cos(x)$ (You will need to carefully consider how to indicate that many of the coefficients are 0. Think about a general way to represent an even integer.)

3. $f(x) = \sin(x)$ (You will need to carefully consider how to indicate that many of the coefficients are 0. Think about a general way to represent an odd integer.)

4. Determine the $n$th order Taylor polynomial for $f(x) = \frac{1}{1-x}$ centered at $x=0\text{.}$

The next activity further considers the important issue of the $x$-values for which the Taylor series of a function converges to the function itself.

##### Activity 8.5.4

1. Plot the graphs of several of the Taylor polynomials centered at 0 (of order at least 5) for $e^x$ and convince yourself that these Taylor polynomials converge to $e^x$ for every value of $x\text{.}$

2. Draw the graphs of several of the Taylor polynomials centered at 0 (of order at least 6) for $\cos(x)$ and convince yourself that these Taylor polynomials converge to $\cos(x)$ for every value of $x\text{.}$ Write the Taylor series centered at 0 for $\cos(x)\text{.}$

3. Draw the graphs of several of the Taylor polynomials centered at 0 for $\frac{1}{1-x}\text{.}$ Based on your graphs, for what values of $x$ do these Taylor polynomials appear to converge to $\frac{1}{1-x}\text{?}$ How is this situation different from what we observe with $e^x$ and $\cos(x)\text{?}$ In addition, write the Taylor series centered at 0 for $\frac{1}{1-x}\text{.}$

The Maclaurin series for $e^x\text{,}$ $\sin(x)\text{,}$ $\cos(x)\text{,}$ and $\frac{1}{1-x}$ will be used frequently, so we should be certain to know and recognize them well.

# Subsection 8.5.3 The Interval of Convergence of a Taylor Series

In the previous section (in Figure 8.5.2 and Activity 8.5.4) we observed that the Taylor polynomials centered at 0 for $e^x\text{,}$ $\cos(x)\text{,}$ and $\sin(x)$ converged to these functions for all values of $x$ in their domain, but that the Taylor polynomials centered at 0 for $\frac{1}{1-x}$ converged to $\frac{1}{1-x}$ for only some values of $x\text{.}$ In fact, the Taylor polynomials centered at 0 for $\frac{1}{1-x}$ converge to $\frac{1}{1-x}$ on the interval $(-1,1)$ and diverge for all other values of $x\text{.}$ So the Taylor series for a function $f(x)$ does not need to converge for all values of $x$ in the domain of $f\text{.}$

Our observations to date suggest two natural questions: can we determine the values of $x$ for which a given Taylor series converges? Moreover, given the Taylor series for a function $f\text{,}$ does it actually converge to $f(x)$ for those values of $x$ for which the Taylor series converges?

##### Example 8.5.4

Graphical evidence suggests that the Taylor series centered at 0 for $e^x$ converges for all values of $x\text{.}$ To verify this, use the Ratio Test to determine all values of $x$ for which the Taylor series

$$\sum_{k=0}^{\infty} \frac{x^k}{k!} \tag{8.5.4}$$

converges absolutely.


One key question remains: while the Taylor series for $e^x$ converges for all $x\text{,}$ what we have done does not tell us that this Taylor series actually converges to $e^x$ for each $x\text{.}$ We'll return to this question when we consider the error in a Taylor approximation near the end of this section.

We can apply the main idea from Example 8.5.4 in general. To determine the values of $x$ for which a Taylor series

\begin{equation*} \sum_{k=0}^{\infty} c_k (x-a)^k \text{,} \end{equation*}

centered at $x = a$ will converge, we apply the Ratio Test with $a_k = | c_k (x-a)^k |$ and recall that the series to which the Ratio Test is applied converges if $\lim_{k \to \infty} \frac{a_{k+1}}{a_k} \lt 1\text{.}$

Observe that

\begin{equation*} \frac{a_{k+1}}{a_k} = | x-a | \frac{| c_{k+1} |}{| c_{k} |} \text{,} \end{equation*}

so when we apply the Ratio Test, we get that

\begin{equation*} \lim_{k \to \infty} \frac{a_{k+1}}{a_k} = \lim_{k \to \infty} |x-a| \frac{|c_{k+1}|}{|c_k|} \text{.} \end{equation*}

Note further that $c_k = \frac{f^{(k)}(a)}{k!}\text{,}$ and suppose that

\begin{equation*} \lim_{k \to \infty} \frac{|c_{k+1}|}{|c_k|} = L \text{.} \end{equation*}

Thus, we have found that

\begin{equation*} \lim_{k \to \infty} \frac{a_{k+1}}{a_k} = |x-a| \cdot L \text{.} \end{equation*}

There are three important possibilities for $L\text{:}$ $L$ can be 0, a finite positive value, or infinite. Based on this value of $L\text{,}$ we can therefore determine for which values of $x$ the original Taylor series converges.

• If $L = 0\text{,}$ then the Taylor series converges on $(-\infty, \infty)\text{.}$

• If $L$ is infinite, then the Taylor series converges only at $x = a\text{.}$

• If $L$ is finite and nonzero, then the Taylor series converges absolutely for all $x$ that satisfy

\begin{equation*} |x-a| \cdot L \lt 1 \text{.} \end{equation*}

In other words, the series converges absolutely for all $x$ such that

\begin{equation*} |x-a| \lt \frac{1}{L} \text{,} \end{equation*}

which is also the interval

\begin{equation*} \left(a-\frac{1}{L}, a+\frac{1}{L}\right) \text{.} \end{equation*}

Because the Ratio Test is inconclusive when $|x-a| \cdot L = 1\text{,}$ the endpoints $x = a \pm \frac{1}{L}$ have to be checked separately.

It is important to notice that the set of $x$ values at which a Taylor series converges is always an interval centered at $x=a\text{.}$ For this reason, the set on which a Taylor series converges is called the interval of convergence. Half the length of the interval of convergence is called the radius of convergence. If the interval of convergence of a Taylor series is infinite, then we say that the radius of convergence is infinite.
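A quick numerical illustration of these ideas (our own, not from the text) for the Taylor series $\sum_{k=0}^{\infty} x^k$ of $\frac{1}{1-x}\text{:}$ here every $c_k = 1\text{,}$ so $L = 1$ and the radius of convergence is $\frac{1}{L} = 1\text{.}$ Partial sums settle down inside the interval of convergence and blow up outside it.

```python
def partial_sum(x, n):
    """n-th partial sum of the geometric Taylor series sum_{k=0}^{n} x^k."""
    return sum(x**k for k in range(n + 1))

inside = partial_sum(0.5, 50)    # within the radius: approaches 1/(1 - 0.5) = 2
outside = partial_sum(1.5, 50)   # outside the radius: the partial sums diverge
print(inside, outside)
```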

##### Activity 8.5.5

1. Use the Ratio Test to explicitly determine the interval of convergence of the Taylor series for $f(x) = \frac{1}{1-x}$ centered at $x=0\text{.}$

2. Use the Ratio Test to explicitly determine the interval of convergence of the Taylor series for $f(x) = \cos(x)$ centered at $x=0\text{.}$

3. Use the Ratio Test to explicitly determine the interval of convergence of the Taylor series for $f(x) = \sin(x)$ centered at $x=0\text{.}$

The Ratio Test tells us how we can determine the set of $x$ values for which a Taylor series converges absolutely. However, just because a Taylor series for a function $f$ converges, we cannot be certain that the Taylor series actually converges to $f(x)$ on its interval of convergence. To show why and where a Taylor series does in fact converge to the function $f\text{,}$ we next consider the error that is present in Taylor polynomials.

# Subsection 8.5.4 Error Approximations for Taylor Polynomials

We now know how to find Taylor polynomials for functions such as $\sin(x)\text{,}$ as well as how to determine the interval of convergence of the corresponding Taylor series. We next develop an error bound that tells us how well an $n$th order Taylor polynomial $P_n(x)$ approximates its generating function $f(x)\text{.}$ This error bound will also allow us to determine whether a Taylor series on its interval of convergence actually equals the function $f$ from which the Taylor series is derived. Finally, the error bound will let us determine how large the order $n$ of the Taylor polynomial $P_n(x)$ must be to ensure that $P_n(x)$ approximates $f(x)$ to any desired degree of accuracy.

In all of this, we need to compare $P_n(x)$ to $f(x)\text{.}$ For this argument, we assume throughout that we center our approximations at 0 (a similar argument holds for approximations centered at $a$). We define the exact error, $E_n(x)\text{,}$ that results from approximating $f(x)$ with $P_n(x)$ by

\begin{equation*} E_n(x) = f(x) - P_n(x) \text{.} \end{equation*}

We are particularly interested in $|E_n(x)|\text{,}$ the distance between $P_n$ and $f\text{.}$ Note that since

\begin{equation*} P^{(k)}_n(0) = f^{(k)}(0) \end{equation*}

for $0 \leq k \leq n\text{,}$ we know that

\begin{equation*} E^{(k)}_n(0) = 0 \end{equation*}

for $0 \leq k \leq n\text{.}$ Furthermore, since $P_n(x)$ is a polynomial of degree less than or equal to $n\text{,}$ we know that

\begin{equation*} P_n^{(n+1)}(x) = 0 \text{.} \end{equation*}

Thus, since $E^{(n+1)}_n(x) = f^{(n+1)}(x) - P_n^{(n+1)}(x)\text{,}$ it follows that

\begin{equation*} E^{(n+1)}_n(x) = f^{(n+1)}(x) \end{equation*}

for all $x\text{.}$

Suppose that we want to approximate $f(x)$ at a number $c$ close to 0 using $P_n(c)\text{.}$ If we assume $|f^{(n+1)}(t)|$ is bounded by some number $M$ on $[0, c]\text{,}$ so that

\begin{equation*} \left|f^{(n+1)}(t)\right| \leq M \end{equation*}

for all $0 \leq t \leq c\text{,}$ then we can say that

\begin{equation*} \left|E^{(n+1)}_n(t)\right| = \left|f^{(n+1)}(t)\right| \leq M \end{equation*}

for all $t$ between 0 and $c\text{.}$ Equivalently,

$$-M \leq E^{(n+1)}_n(t) \leq M \tag{8.5.5}$$

on $[0, c]\text{.}$ Next, we integrate the three terms in Inequality (8.5.5) from $t = 0$ to $t = x\text{,}$ and thus find that

\begin{equation*} \int_0^x -M \ dt \leq \int_0^x E^{(n+1)}_n(t) \ dt \leq \int_0^x M \ dt \end{equation*}

for every value of $x$ in $[0, c]\text{.}$ Since $E^{(n)}_n(0) = 0\text{,}$ the First FTC tells us that

\begin{equation*} -Mx \leq E^{(n)}_n(x) \leq Mx \end{equation*}

for every $x$ in $[0, c]\text{.}$

Integrating the most recent inequality, we obtain

\begin{equation*} \int_0^x -Mt \ dt \leq \int_0^x E^{(n)}_n(t) \ dt \leq \int_0^x Mt \ dt \end{equation*}

and thus

\begin{equation*} -M\frac{x^2}{2} \leq E^{(n-1)}_n(x) \leq M\frac{x^2}{2} \end{equation*}

for all $x$ in $[0, c]\text{.}$

Continuing this process, after a total of $n+1$ integrations we arrive at

\begin{equation*} -M\frac{x^{n+1}}{(n+1)!} \leq E_n(x) \leq M\frac{x^{n+1}}{(n+1)!} \end{equation*}

for all $x$ in $[0, c]\text{.}$ This enables us to conclude that

\begin{equation*} \left|E_n(x)\right| \leq M\frac{|x|^{n+1}}{(n+1)!} \end{equation*}

for all $x$ in $[0, c]\text{,}$ which shows an important bound on the approximation's error, $E_n\text{.}$

Our work above was based on the approximation centered at $a = 0\text{;}$ the argument may be generalized to hold for any value of $a\text{,}$ which results in the following theorem.

The Lagrange Error Bound for $P_n(x)\text{.}$ Let $f$ be a continuous function with $n+1$ continuous derivatives. Suppose that $M$ is a positive real number such that $\left|f^{(n+1)}(x)\right| \le M$ on the interval $[a, c]\text{.}$ If $P_n(x)$ is the $n$th order Taylor polynomial for $f(x)$ centered at $x=a\text{,}$ then

\begin{equation*} \left|P_n(c) - f(c)\right| \leq M\frac{|c-a|^{n+1}}{(n+1)!} \text{.} \end{equation*}

This error bound may now be used to tell us important information about Taylor polynomials and Taylor series, as we see in the following examples and activities.
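For instance, for $f(x) = \sin(x)$ every derivative is bounded by $M = 1\text{,}$ so the error in using $P_{10}(2)$ to approximate $\sin(2)$ is at most $\frac{2^{11}}{11!}\text{.}$ The sketch below (our own check, not part of the text) evaluates this bound and compares it with the true error:

```python
import math

def lagrange_bound(M, c, a, n):
    """Lagrange error bound: |P_n(c) - f(c)| <= M |c - a|^(n+1) / (n+1)!."""
    return M * abs(c - a) ** (n + 1) / math.factorial(n + 1)

# For f = sin, |f^(n+1)(x)| <= 1 for every n and x, so we may take M = 1.
bound = lagrange_bound(1, 2, 0, 10)

# P_10 for sin centered at 0 equals P_9, since the degree-10 coefficient is 0.
P10 = sum((-1) ** k * 2 ** (2 * k + 1) / math.factorial(2 * k + 1)
          for k in range(5))
actual = abs(P10 - math.sin(2))
print(f"bound = {bound:.2e}, actual error = {actual:.2e}")
```

As the theorem guarantees, the actual error never exceeds the bound.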

##### Example 8.5.5

Determine how well the 10th order Taylor polynomial $P_{10}(x)$ for $\sin(x)\text{,}$ centered at 0, approximates $\sin(2)\text{.}$

##### Activity 8.5.6

Let $P_n(x)$ be the $n$th order Taylor polynomial for $\sin(x)$ centered at $x=0\text{.}$ Determine how large we need to choose $n$ so that $P_n(2)$ approximates $\sin(2)$ to 20 decimal places.

##### Example 8.5.6

Show that the Taylor series for $\sin(x)$ actually converges to $\sin(x)$ for all $x\text{.}$

##### Activity 8.5.7

1. Show that the Taylor series centered at 0 for $\cos(x)$ converges to $\cos(x)$ for every real number $x\text{.}$

2. Next we consider the Taylor series for $e^x\text{.}$

1. Show that the Taylor series centered at 0 for $e^x$ converges to $e^x$ for every nonnegative value of $x\text{.}$

2. Show that the Taylor series centered at 0 for $e^x$ converges to $e^x$ for every negative value of $x\text{.}$

3. Explain why the Taylor series centered at 0 for $e^x$ converges to $e^x$ for every real number $x\text{.}$ Recall that we earlier showed that the Taylor series centered at 0 for $e^x$ converges for all $x\text{,}$ and we have now completed the argument that the Taylor series for $e^x$ actually converges to $e^x$ for all $x\text{.}$

3. Let $P_n(x)$ be the $n$th order Taylor polynomial for $e^x$ centered at 0. Find a value of $n$ so that $P_n(5)$ approximates $e^5$ correct to 8 decimal places.

# Subsection 8.5.5 Summary

• We can use Taylor polynomials to approximate complicated functions. This allows us to approximate values of complicated functions using only addition, subtraction, multiplication, and division of real numbers. The $n$th order Taylor polynomial centered at $x=a$ of a function $f$ is

\begin{align*} P_n(x) =\mathstrut \amp f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n\\ =\mathstrut \amp \sum_{k=0}^n \frac{f^{(k)}(a)}{k!}(x-a)^k\text{.} \end{align*}
• The Taylor series centered at $x=a$ for a function $f$ is

\begin{equation*} \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(x-a)^k \text{.} \end{equation*}
• The $n$th order Taylor polynomial centered at $a$ for $f$ is the $n$th partial sum of its Taylor series centered at $a\text{.}$ So the $n$th order Taylor polynomial for a function $f$ is an approximation to $f$ on the interval where the Taylor series converges; for the values of $x$ for which the Taylor series converges to $f$ we write

\begin{equation*} f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(x-a)^k \text{.} \end{equation*}
• The Lagrange Error Bound shows us how to determine the accuracy in using a Taylor polynomial to approximate a function. More specifically, if $P_n(x)$ is the $n$th order Taylor polynomial for $f$ centered at $x=a$ and if $M$ is an upper bound for $\left|f^{(n+1)}(x)\right|$ on the interval $[a, c]\text{,}$ then

\begin{equation*} \left|P_n(c) - f(c)\right| \leq M\frac{|c-a|^{n+1}}{(n+1)!} \text{.} \end{equation*}

##### 6

In this exercise we investigate the Taylor series of polynomial functions.

1. Find the 3rd order Taylor polynomial centered at $a = 0$ for $f(x) = x^3-2x^2+3x-1\text{.}$ Does your answer surprise you? Explain.

2. Without doing any additional computation, find the 4th, 12th, and 100th order Taylor polynomials (centered at $a = 0$) for $f(x) = x^3-2x^2+3x-1\text{.}$ Why should you expect this?

3. Now suppose $f(x)$ is a degree $m$ polynomial. Completely describe the $n$th order Taylor polynomial (centered at $a = 0$) for each $n\text{.}$

##### 7

The examples we have considered in this section have all been for Taylor polynomials and series centered at 0, but Taylor polynomials and series can be centered at any value of $a\text{.}$ We look at examples of such Taylor polynomials in this exercise.

1. Let $f(x) = \sin(x)\text{.}$ Find the Taylor polynomials up through order four of $f$ centered at $x = \frac{\pi}{2}\text{.}$ Then find the Taylor series for $f(x)$ centered at $x = \frac{\pi}{2}\text{.}$ Why should you have expected the result?

2. Let $f(x) = \ln(x)\text{.}$ Find the Taylor polynomials up through order four of $f$ centered at $x = 1\text{.}$ Then find the Taylor series for $f(x)$ centered at $x = 1\text{.}$

3. Use your result from (b) to determine which Taylor polynomial will approximate $\ln(2)$ to two decimal places. Explain in detail how you know you have the desired accuracy.

##### 8

We can use known Taylor series to obtain other Taylor series, and we explore that idea in this exercise, as a preview of work in the following section.

1. Calculate the first four derivatives of $\sin(x^2)$ and hence find the fourth order Taylor polynomial for $\sin(x^2)$ centered at $a=0\text{.}$

2. Part (a) demonstrates the brute force approach to computing Taylor polynomials and series. Now we find an easier method that utilizes a known Taylor series. Recall that the Taylor series centered at 0 for $f(x) = \sin(x)$ is

$$\sum_{k=0}^{\infty} (-1)^{k} \frac{x^{2k+1}}{(2k+1)!} \text{.}\tag{8.5.7}$$
1. Substitute $x^2$ for $x$ in the Taylor series (8.5.7). Write out the first several terms and compare to your work in part (a). Explain why the substitution in this problem should give the Taylor series for $\sin(x^2)$ centered at 0.

2. What should we expect the interval of convergence of the series for $\sin(x^2)$ to be? Explain in detail.

##### 9

Based on the examples we have seen, we might expect that the Taylor series for a function $f$ always converges to the values $f(x)$ on its interval of convergence. We explore that idea in more detail in this exercise. Let $f(x) = \begin{cases}e^{-1/x^2} \amp \text{ if } x \neq 0, \\ 0 \amp \text{ if } x = 0. \end{cases}$

1. Show, using the definition of the derivative, that $f'(0) = 0\text{.}$

2. It can be shown that $f^{(n)}(0) = 0$ for all $n \geq 2\text{.}$ Assuming that this is true, find the Taylor series for $f$ centered at 0.

3. What is the interval of convergence of the Taylor series centered at 0 for $f\text{?}$ Explain. For which values of $x$ in the interval of convergence does the Taylor series converge to $f(x)\text{?}$