# Analytic continuation

Many of the functions that one meets in calculus, including the trigonometric (sine, cosine, tangent), exponential, and logarithm functions, can take complex numbers as their arguments (the "inputs"), and the values of the function (the "outputs") would also be complex numbers. One might think that there are many different ways to "extend" or "continue" these functions to the complex domain, but—with a mild criterion—there is a unique way to do so. Since the time of Lagrange, these and a host of other functions have been known as "analytic", which has come to mean something precise—locally the function can be represented by a power series—and in the complex domain is equivalent to this rather mild criterion. The process and rules of **analytically continuing** a function (i.e., extending the function's domain so as to remain analytic), not just from the reals to the complexes, but from one part of the complex plane to another, have come to be an important object of study in complex analysis. Despite its reputation for being difficult and mysterious, analytic continuation embodies the "essence" of functions of (one or more) complex variables—that *the identity of a function is completely determined, or "encoded", by its values over an arbitrarily small neighborhood, anywhere*—and deserves to be made a central theme of the whole subject.

Extending familiar functions to complex numbers is not just an idle pursuit by the pure mathematician, but it often provides new insights, or even holds the key, to questions that one might ask on the real line. For example,

- Is there a way to define the logarithm (always the natural log) of negative numbers?

- Why does the Taylor series of \(e^x\) converge for all \(x\), but that of \(\frac{1}{1+x^2}\) or \(\arctan x\) only does for \(|x| \le 1\), even though they are infinitely differentiable for all \(x\)?
- Why is it that we typically only need to check trigonometric identities for acute angles, and they are automatically true for all reals?
- What do people mean when they say that \(1 + 2 + 3 + \cdots = -\frac{1}{12}\)?

Prior knowledge of complex analysis is not assumed beyond some acquaintance with the algebra of \(\mathbb{C}\). For a quick visual introduction, specifically aimed at the last question, see *Visualizing the Riemann zeta function and analytic continuation* by 3blue1brown. Due to a lack of illustrative graphics, this article shall, for the most part, stay away from the geometric perspective.

### Prelude: Analytic continuation on the real line

It is possible to discuss analytic continuation *on the real line*, and it may be instructive to do so. The trigonometric functions are originally defined for acute angles, i.e., for \(0 < \theta < \frac{\pi}{2}\), and it is not as trivial a matter as it may seem to extend them to all reals. Yes, we are told in school to define them by way of the unit circle, which has the appeal of being periodic and "sine-wavy", but how are we sure that the plethora of identities that are derived from geometry would still be valid for all real numbers? Instead, let's take the identities \(\sin(\theta + \frac{\pi}{2}) = \cos\theta\) and \(\cos(\theta + \frac{\pi}{2}) = -\sin\theta\) as the *definition* of sine and cosine, respectively, for \(\frac{\pi}{2} \le \theta < \pi\) — but does the Pythagorean identity \(\sin^2\theta + \cos^2\theta = 1\) still hold there? That's not hard to check (algebraically), and we can proceed to analytically continue again, to \(\pi \le \theta < \frac{3\pi}{2}\), and so on to infinity. As cumbersome as it is, the same process is one of the most useful techniques of analytic continuation, namely via a *functional equation*. We shall revisit it later.
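This bootstrapping can be sketched numerically (a minimal sketch; the function names and the 12-term base polynomial are my own choices): a Taylor polynomial is trusted only on the base interval \([0, \pi/2)\), and everything beyond it is defined by the functional equations, for \(\theta \ge 0\).

```python
import math

def sin0(t, terms=12):
    """Taylor polynomial for sine, trusted only on the base interval [0, pi/2)."""
    return sum((-1)**k * t**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))

def cos0(t, terms=12):
    """Taylor polynomial for cosine on the base interval."""
    return sum((-1)**k * t**(2*k) / math.factorial(2*k) for k in range(terms))

def my_sin(t):
    """Sine on [0, inf), defined past pi/2 by the identity sin(t) = cos(t - pi/2)."""
    return sin0(t) if t < math.pi / 2 else my_cos(t - math.pi / 2)

def my_cos(t):
    """Cosine on [0, inf), defined past pi/2 by the identity cos(t) = -sin(t - pi/2)."""
    return cos0(t) if t < math.pi / 2 else -my_sin(t - math.pi / 2)

print(my_sin(5.0), math.sin(5.0))  # the continuation matches the library sine
```

Each recursive call subtracts \(\pi/2\), so any \(\theta \ge 0\) eventually lands in the base interval.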

### From real to complex

The most basic type of functions would be the polynomials, and to extend them to the complex plane we only need to know how to add and multiply two complex numbers. Moreover, the complex numbers form a *field*, i.e., we can also divide a complex number by any nonzero complex number, so we can easily extend any *rational* function, i.e., the quotient of two polynomials, to the complex plane, minus a finite number of points. The rule seems to be that, as long as a function is expressed with algebraic operations, we have no problem extending it to complex numbers. (That is just the "mechanical" or "algorithmic" aspect of what the function is; to get a more complete picture, we would need to *see* the geometry of complex numbers, which incidentally is the key to the Fundamental Theorem of Algebra.)

What about this function?

Next, consider transcendental functions, e.g., the exponential function \(e^x\). One approach is to approximate it by polynomials, which can be extended to the complex plane. Indeed, given the Taylor series

\[ e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots, \]

which converges for every real \(x\), we may substitute any complex number \(z\) for \(x\); the resulting series still converges, and we take it as the definition of \(e^z\). We can now evaluate \(e^i\):

\[ e^i = \left(1 - \frac{1}{2!} + \frac{1}{4!} - \cdots\right) + i\left(1 - \frac{1}{3!} + \frac{1}{5!} - \cdots\right) = \cos 1 + i \sin 1. \]
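The series definition can be spot-checked against the library exponential (an illustrative sketch; `exp_series` and the 30-term cutoff are my own choices):

```python
import cmath

def exp_series(z, terms=30):
    """Truncated Taylor sum for exp, evaluated at any complex z."""
    total, term = 0, 1
    for n in range(terms):
        total += term            # add z^n / n!
        term *= z / (n + 1)      # next term: z^(n+1) / (n+1)!
    return total

print(exp_series(1j))  # ≈ 0.5403 + 0.8415j, i.e. cos(1) + i sin(1)
print(cmath.exp(1j))   # agrees with the library implementation
```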

What about \(\log x\)? Its Taylor series at \(x = 1\) is

\[ \log x = (x-1) - \frac{(x-1)^2}{2} + \frac{(x-1)^3}{3} - \cdots, \]

and substituting a complex \(z\) for \(x\), the series converges precisely on the disk \(|z-1| < 1\) — so the *radius of convergence* is now *literally* a radius.

Now that we have analytically continued \(\log\) to the disk \(|z-1| < 1\), we still can't evaluate \(\log(-1)\). Had we started with the Taylor series at a different point, say \(z = 2\), we would have extended \(\log\) to a bigger disk, namely \(|z-2| < 2\). Crucially, the two extensions agree on the overlap. Furthermore, nothing stops us from taking the Taylor series off of the real axis, and curiously the disk of convergence always has the origin on its boundary circle, i.e., it converges in as big a disk as possible, for we know \(\log\) must have a "singularity" at \(z = 0\). We could then in theory reach \(-1\) by successive Taylor expansion, even though it is difficult to calculate with.
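The successive expansions can in fact be chained numerically (a sketch; the path of centers along the unit circle is my own choice): each expansion supplies the value of \(\log\) at the next center, so we can creep around the singularity at \(0\) and reach \(-1\).

```python
import cmath

def log_taylor(z, a, log_a, terms=200):
    """Taylor expansion of log about the center a, given the value log(a).
    Converges for |z - a| < |a| (the disk touching the singularity at 0)."""
    w = (z - a) / a
    return log_a + sum((-1)**(k + 1) * w**k / k for k in range(1, terms))

# Centers creeping along the unit circle from 1 to -1 through the upper
# half-plane; each step stays well inside the previous disk of convergence.
centers = [cmath.exp(1j * cmath.pi * k / 4) for k in range(5)]
log_val = 0.0  # log(1) = 0
for a, b in zip(centers, centers[1:]):
    log_val = log_taylor(b, a, log_val)
print(log_val)  # ≈ 3.14159j, i.e. log(-1) = i*pi
```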

There is another way to analytically continue \(\log\), and that is to start from the formula

\[ \log z = \int_1^z \frac{dw}{w}, \]

integrated along any path from \(1\) to \(z\) that avoids the origin.

Now we can answer the question of the log of negative numbers. The singularity at zero poses an obstacle to analytic continuation, so we have to "get around" it. It turns out the answer depends on which side we pass: continuing through the upper half-plane gives \(\log(-1) = i\pi\), while continuing through the lower half-plane gives \(\log(-1) = -i\pi\). One option is to declare one of them (conventionally \(i\pi\)) to be *the* value. Another is to only continue within a *simply-connected* region, meaning that a loop inside it can always be shrunk to a point, so that the value does not depend on the path. A third option is to keep analytically continuing, and accept that the function shall be *multi-valued*. More on this later.

The general rule seems to be that, if a function can be expressed as an integral, then we can extend it to as large a *simply-connected* domain as the integrand makes sense. Other examples include the inverse trigonometric functions:

\[ \arcsin z = \int_0^z \frac{dw}{\sqrt{1-w^2}}, \qquad \arctan z = \int_0^z \frac{dw}{1+w^2}. \]

Of a different flavor is the Gamma function

\[ \Gamma(s) = \int_0^\infty t^{s-1} e^{-t} \, dt, \]

which converges for \(\operatorname{Re} s > 0\). Integration by parts gives the *functional equation* (an equation relating the function with itself)

\[ \Gamma(s+1) = s \, \Gamma(s), \]

which we may rewrite as \(\Gamma(s) = \frac{\Gamma(s+1)}{s}\) and take as the *definition* for \(-1 < \operatorname{Re} s \le 0\), \(s \ne 0\). Having analytically continued the \(\Gamma\)-function one strip to the left, we can use the same formula for \(-2 < \operatorname{Re} s \le -1\), and so on.
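This strip-by-strip recursion is easy to mimic numerically (a sketch for real arguments; Python's `math.gamma` in fact already handles negative non-integers, so here it is deliberately used only on the right half-line, standing in for the integral definition):

```python
import math

def gamma_cont(s):
    """Gamma(s) for real s, continued strip by strip via Gamma(s) = Gamma(s+1)/s.
    The poles at s = 0, -1, -2, ... surface as a ZeroDivisionError."""
    if s > 0:
        return math.gamma(s)   # stand-in for the convergent integral
    return gamma_cont(s + 1) / s

print(gamma_cont(-0.5))  # ≈ -3.5449, i.e. -2*sqrt(pi)
print(gamma_cont(-1.5))  # ≈ 2.3633, i.e. (4/3)*sqrt(pi)
```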

For a functional equation that involves the derivative, consider, for *any* smooth function \(\varphi\) with compact support,

\[ I(s) = \int_0^\infty x^s \varphi(x) \, dx, \]

which converges for \(\operatorname{Re} s > -1\). Integration by parts gives

\[ I(s) = -\frac{1}{s+1} \int_0^\infty x^{s+1} \varphi'(x) \, dx, \]

and the right-hand side makes sense for \(\operatorname{Re} s > -2\) (except \(s = -1\)), so it analytically continues \(I(s)\) one strip at a time, in the manner of the Gamma function. Integrals like these give a vast supply of *special functions*; here we have one for each \(\varphi\) and each \(s\).

Let's conclude this long overview with a summary of techniques of analytic continuation:

- direct extension of algebraic operations
- (successive) Taylor series expansion
- integral along a path in the complex domain
- functional equation

The big question that should be answered is: why in the universe should all these different analytic continuations result in the same function? We shall first explore this principle of analytic continuation and its implications, then discuss the various obstacles to analytic continuation and how the principle may fail, and finally what the "mild criterion" is and the theorem that formalizes the principle — almost the opposite of the logical order.

### The Principle of Analytic Continuation

It is a remarkable feature that analytic continuation, if possible, is unique: two different methods of analytic continuation would have to agree, and we are free to choose whichever method that is convenient.
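As a toy illustration of this uniqueness: an identity proved for real angles, such as \(\sin^2\theta + \cos^2\theta = 1\), automatically persists for all complex arguments, since both sides are analytic and agree on the real line. A quick numerical spot-check (illustrative; the random sampling is my own choice):

```python
import cmath
import random

# sin^2 z + cos^2 z = 1 is proved for real z by geometry/algebra; by the
# principle of analytic continuation it must persist for every complex z.
random.seed(0)
for _ in range(5):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs(cmath.sin(z)**2 + cmath.cos(z)**2 - 1) < 1e-9
print("sin^2 z + cos^2 z = 1 holds at random complex points")
```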

### Obstructions to analytic continuation

We have seen that analytic continuation can meet an obstruction where the function "blows up", or has a singularity, and that we may "get around it" by entering the complex plane. One can classify the different ways this plays out.

- The obstruction is a single point, and the analytic continuations from the two sides of it actually agree. The standard examples are the rational functions, which have a **pole** wherever the denominator is zero. In general, the quotient of two holomorphic functions also gives rise to poles, e.g., \(\tan z = \frac{\sin z}{\cos z}\), and such functions are called **meromorphic**. For technical reasons, there is one other type of singularity that is not considered a pole.
- The obstruction is a single point, but the analytic continuations from the two sides don't agree. Examples are algebraic functions that involve radicals, such as \(\sqrt{z}\). If we allow the two analytic continuations to carry on, as if they are on two separate "sheets" or branches, they may agree after meeting the second time. We have now constructed a **Riemann surface** on which the function is naturally defined. In other cases, such as \(\log z\), it never comes to agree, so you get an "infinite-sheeted" Riemann surface.
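The two-sheeted behavior of the square root can be observed numerically (a sketch; the branch is tracked by always picking the square root closest to the previous value):

```python
import cmath

def continue_sqrt(loops, steps=1000):
    """Continuously track a branch of sqrt(z) along `loops` circuits of the
    unit circle, starting from sqrt(1) = 1."""
    w = 1.0
    for k in range(1, loops * steps + 1):
        z = cmath.exp(2j * cmath.pi * k / steps)
        r = cmath.sqrt(z)  # principal branch
        # continuity: of the two roots r and -r, keep the one nearest to w
        w = r if abs(r - w) < abs(-r - w) else -r
    return w

print(continue_sqrt(1))  # ≈ -1 : one loop lands on the other sheet
print(continue_sqrt(2))  # ≈ +1 : two loops return to the original branch
```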

### Analyticity and holomorphic functions

As diverse as the ways of defining functions on the complex domain are, there is a simple criterion that they all satisfy, and it turns out to be all you need: what is known by the term **holomorphic**, that is, being complex-differentiable *throughout* an open set (a domain) of the complex plane. It is the highlight of Cauchy's theory that holomorphy implies analyticity.
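Spelled out, in one standard formulation: \(f\) is holomorphic on a domain \(\Omega\) if the limit

\[ f'(z_0) = \lim_{h \to 0} \frac{f(z_0 + h) - f(z_0)}{h} \]

exists at every \(z_0 \in \Omega\), where \(h\) is allowed to approach \(0\) through *complex* values. Writing \(f = u + iv\), this is equivalent (for continuously differentiable \(u, v\)) to the Cauchy–Riemann equations \(\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}\), \(\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}\).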

Contrast that with the real-variable case, where functions that are differentiable are very far from being analytic. Even being infinitely differentiable (smooth) does not make a function (real) analytic, as witnessed by this function:

\[ f(x) = \begin{cases} e^{-1/x^2}, & x \ne 0, \\ 0, & x = 0, \end{cases} \]

which is smooth everywhere, with every derivative at the origin equal to zero — so its Taylor series at \(0\) is identically zero and fails to represent \(f\) on any neighborhood of \(0\).
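A quick numerical look at the classic smooth-but-not-analytic function \(e^{-1/x^2}\) (illustrative):

```python
import math

def f(x):
    """Smooth everywhere, yet not analytic at 0."""
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# Every derivative of f at 0 is 0, so its Taylor series at 0 is identically
# zero -- yet f itself is positive for every x != 0:
print(f(0.5))  # e^{-4} ≈ 0.0183
print(f(0.1))  # e^{-100} ≈ 3.7e-44: extremely flat at the origin, but not zero
```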