# CAGD/Polynomials

We quickly review power polynomials here as a transition to understanding Bernstein polynomials.

# Power and Bernstein Polynomials

In basic algebra, we learn all about power polynomials. We encounter them all over the place and have learned lots of ways to manipulate them. They take the form:

$\textstyle P(t) = \sum_{i=0}^n p_i t^i$,

where the $p_i$ are the coefficients of the polynomial. Although this form is sufficient for many mathematical operations, it is often easier to work with Bernstein polynomials. They take the form:

$\textstyle B(t) = \sum_{i=0}^n b_i \binom{n}{i} (1-t)^{n-i} t^i$,

where the $b_i$ are the Bernstein coefficients of the polynomial. Although this form is not computationally faster, it offers benefits in some frequently used cases. The Bézier curve is built from the Bernstein polynomial, as the similarity of form suggests. In fact, a Bézier curve can be expressed as a collection of Bernstein polynomials: if the points that define the curve are given by the coordinates $(X, Y, Z, W)$, we can express each coordinate as a separate Bernstein polynomial.

$x(t) = \sum_{i=0}^n x_i \binom{n}{i} (1-t)^{n-i} t^i$
$y(t) = \sum_{i=0}^n y_i \binom{n}{i} (1-t)^{n-i} t^i$
$z(t) = \sum_{i=0}^n z_i \binom{n}{i} (1-t)^{n-i} t^i$
$w(t) = \sum_{i=0}^n w_i \binom{n}{i} (1-t)^{n-i} t^i$
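To make the form concrete, here is a small sketch that evaluates a Bernstein-form polynomial directly from its coefficients, assuming the standard Bernstein basis $b_i \binom{n}{i} (1-t)^{n-i} t^i$. The class and method names are our own, not a standard API:

```java
public class BernsteinDemo {
    // Binomial coefficient C(n, k), via the multiplicative formula.
    static double binomial(int n, int k) {
        double c = 1.0;
        for (int j = 1; j <= k; j++) {
            c = c * (n - k + j) / j;
        }
        return c;
    }

    // Evaluate B(t) = sum_i b_i * C(n,i) * (1-t)^(n-i) * t^i term by term.
    static double bernsteinEval(double[] b, double t) {
        int n = b.length - 1;
        double sum = 0.0;
        for (int i = 0; i <= n; i++) {
            sum += b[i] * binomial(n, i)
                 * Math.pow(1 - t, n - i) * Math.pow(t, i);
        }
        return sum;
    }
}
```

Note that at $t = 0$ and $t = 1$ the sum collapses to the first and last coefficient respectively, which is the familiar endpoint-interpolation property of Bézier curves.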

# Basis Conversion

Sometimes we have a curve in one basis but would like to convert to the other basis. A proof of how to get the conversion formula is as follows:

$B(t) \equiv B(0) + B'(0)t + \frac{B''(0)t^2}{2!} + \cdots + \frac{B^{(i)}(0)t^i}{i!}+\cdots$

If the power polynomial is equal to the Bernstein polynomial, the following relation must be true:

$\frac{P^{(i)}(0)t^i}{i!} = \frac{B^{(i)}(0)t^i}{i!}$

For power polynomials:

$p_i = \frac{P^{(i)}(0)}{i!}$

This makes the relationship:

$p_i = \frac{B^{(i)}(0)}{i!}, i = 0, \cdots ,n$

We apply the general derivative for Bézier curves to Bernstein polynomials:

$B^{(i)}(0) = n(n-1)\cdots(n-i+1) \sum_{j=0}^i (-1)^{i-j} \binom{i}{j} b_j$

A recurrence formula can be made:

$b_i^j = b_{i+1}^{j-1} - b_i^{j-1}, b_i^0 \equiv b_i$

Through manipulation of the recurrence formula and definition of the derivative:

$B^{(i)}(0) = \frac{n!}{(n-i)!} b_0^i$

Thus, we can simplify further:

$p_i = \frac{n!}{(n-i)!i!} b_0^i = \binom{n}{i} b_0^i$

This allows us to convert between the bases very easily. Just as with the de Casteljau algorithm, we can set up a difference table with the Bernstein-basis coefficients in the top row and read off the power-basis coefficients.

$\begin{matrix} b_0^0 & b_1^0 & b_2^0 & \cdots & b_n^0 \\ b_0^1 & b_1^1 & \cdots & b_{n-1}^1 \\ b_0^2 & \cdots & b_{n-2}^2 \\ \vdots \\ b_0^n \end{matrix} \Rightarrow \begin{aligned} b_0^0 &= p_0 \\ b_0^1 &= p_1 / \textstyle{\binom{n}{1}} \\ b_0^2 &= p_2 / \textstyle{\binom{n}{2}} \\ \vdots \\ b_0^n &= p_n / \textstyle{\binom{n}{n}} \end{aligned}$
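The table-based conversion can be sketched in code as follows (illustrative names, assuming the Bernstein basis with binomial coefficients $b_i \binom{n}{i} (1-t)^{n-i} t^i$). Each pass builds the next difference row in place, and $p_i = \binom{n}{i} b_0^i$ reads off the power coefficients:

```java
public class BasisConversion {
    // Binomial coefficient C(n, k), via the multiplicative formula.
    static double binomial(int n, int k) {
        double c = 1.0;
        for (int j = 1; j <= k; j++) {
            c = c * (n - k + j) / j;
        }
        return c;
    }

    // Convert Bernstein coefficients b to power-basis coefficients p.
    static double[] bernsteinToPower(double[] b) {
        int n = b.length - 1;
        double[] diff = b.clone();            // current difference row b^j
        double[] p = new double[n + 1];
        for (int i = 0; i <= n; i++) {
            p[i] = binomial(n, i) * diff[0];  // p_i = C(n,i) * b_0^i
            // Next row of the table: b_k^{j+1} = b_{k+1}^j - b_k^j.
            for (int k = 0; k < n - i; k++) {
                diff[k] = diff[k + 1] - diff[k];
            }
        }
        return p;
    }
}
```

For example, the Bernstein coefficients $\{0, 0, 1\}$ describe $B(t) = t^2$, so the conversion returns the power coefficients $\{0, 0, 1\}$.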

# Evaluating Polynomials

## Brute Force Method

The knee-jerk reaction to evaluating a polynomial is simply to plug in the value we know and do the operations. For a general polynomial $f(x) = a + bx + cx^2 + dx^3 + \cdots$, we would start with $a$ as our result, multiply $b$ by our value and add it to the result, multiply $c$ by our value squared and add that to the result, and so on. This takes $n$ addition operations but, if each power is computed from scratch, roughly $n(n+1)/2$ multiplication operations. As the degree grows, the number of multiplications increases, slowing the algorithm and possibly introducing floating-point error into our calculations. A couple of other techniques for evaluating polynomials are presented here.
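A sketch of this brute-force evaluation (illustrative names): the inner loop recomputes each power of $t$ from scratch, which is exactly what drives up the multiplication count.

```java
public class BruteForceEval {
    // Naive evaluation of p(t) = p[0] + p[1]*t + ... + p[n]*t^n.
    static double bruteForceEval(double[] p, double t) {
        double result = 0.0;
        for (int i = 0; i < p.length; i++) {
            double power = 1.0;
            for (int j = 0; j < i; j++) {  // i multiplications just for t^i
                power *= t;
            }
            result += p[i] * power;
        }
        return result;
    }
}
```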

## Forward Differencing

Forward differencing is a technique for evaluating polynomials quickly. It works when we evaluate the polynomial at evenly spaced arguments: for example, having evaluated $f(a)$, we can apply forward differencing to get $f(2a)$, $f(3a)$, and so on. The method is fast, requiring only addition. We construct a difference table and can then evaluate the polynomial quickly. In the following example, we have some values of our polynomial:

$\begin{matrix} t: & t_i & t_{i+1} & t_{i+2} & t_{i+3} & t_{i+4} & t_{i+5} & t_{i+6} & t_{i+7} \\ f(t): & 1 & 3 & 2 & 5 & 4 & -24 & -117 & -328 \\ \Delta _1(t): & 2 & -1 & 3 & -1 & -28 & -93 & -211\\ \Delta _2(t): & -3 & 4 & -4 & -27 & -65 & -118\\ \Delta _3(t): & 7 & -8 & -23 & -38 & -53\\ \Delta _4(t): & -15 & -15 & -15 & -15 \\ \Delta _5(t): & 0 & 0 & 0 \\ \end{matrix}$

As we see in the 4th difference row, $-15$ appears in every term: the fourth differences are constant, which tells us the sampled values come from a degree-4 polynomial. If we simply append another $-15$ in the next cell to the right and work back up the difference table, when we reach the $f(t)$ row we will have the value of the polynomial at $t_{i+8}$. Each new value requires only $n$ addition operations, making the method very fast. However, it relies heavily on the accuracy of the previous values: because floating-point error accumulates from step to step, the method is numerically unstable. It can still be used as a good approximation.
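The table-extension step can be sketched as follows (illustrative names). We keep only the last entry of each difference row; each further value of the polynomial then costs only degree-many additions:

```java
public class ForwardDifference {
    // Given consecutive values f of a polynomial of the given degree at
    // evenly spaced arguments, return the next 'extra' values.
    static double[] extend(double[] f, int degree, int extra) {
        // d[j] holds the current last entry of difference row j.
        double[] d = new double[degree + 1];
        double[] row = f.clone();
        d[0] = row[row.length - 1];
        for (int j = 1; j <= degree; j++) {
            double[] next = new double[row.length - 1];
            for (int k = 0; k < next.length; k++) {
                next[k] = row[k + 1] - row[k];  // forward difference
            }
            row = next;
            d[j] = row[row.length - 1];
        }
        // The top row d[degree] is constant; add back up the table.
        double[] out = new double[extra];
        for (int m = 0; m < extra; m++) {
            for (int j = degree - 1; j >= 0; j--) {
                d[j] += d[j + 1];  // one addition per row
            }
            out[m] = d[0];
        }
        return out;
    }
}
```

Applied to the values in the table above, $\{1, 3, 2, 5, 4, -24, -117, -328\}$ with degree 4, the first extended value is $f(t_{i+8}) = -725$.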

## Horner's Algorithm

Horner's algorithm is the fastest general-purpose way to evaluate a power polynomial at a given value. Where a brute-force evaluation requires many additions and multiplications, Horner's algorithm evaluates an $n$th-degree polynomial with $n$ additions and $n$ multiplications. It takes advantage of the nested form of a power polynomial:

$y(t) = p_0 + t(p_1 + t(p_2 + \cdots + t\,p_n))$

We can implement Horner's algorithm with a short routine. Note that the loop runs from the leading coefficient down to $p_0$, working from the inside of the nesting outward:

```java
// Evaluate p(t) = p[0] + p[1]*t + ... + p[n]*t^n with n adds and n multiplies.
static double hornerEval(double[] p, double t) {
    double result = p[p.length - 1];          // start with the leading coefficient
    for (int i = p.length - 2; i >= 0; i--) {
        result = t * result + p[i];
    }
    return result;
}
```
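As a quick self-contained check of Horner evaluation (demo class name ours, repeating the routine so the example compiles on its own):

```java
public class HornerDemo {
    // Evaluate p(t) = p[0] + p[1]*t + ... + p[n]*t^n via Horner's scheme.
    static double hornerEval(double[] p, double t) {
        double result = p[p.length - 1];
        for (int i = p.length - 2; i >= 0; i--) {
            result = t * result + p[i];
        }
        return result;
    }

    public static void main(String[] args) {
        // p(t) = 1 + 2t + 3t^2 at t = 2: 1 + 4 + 12 = 17.
        System.out.println(hornerEval(new double[]{1, 2, 3}, 2.0));  // 17.0
    }
}
```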