Analytic continuation


Many of the functions that one meets in calculus, including the trigonometric (sine, cosine, tangent), exponential, and logarithm functions, can take complex numbers as their arguments (the "input"), and the value of the function (the "output") is then also a complex number. One might think that there are many different ways to "extend" or "continue" these functions to the complex domain, but, under a mild criterion, there is a unique way to do so. Since the time of Lagrange, these and a host of other functions have been known as "analytic", a term which has come to mean something precise (locally the function can be represented by a power series) and which, in the complex domain, is equivalent to this rather mild criterion. The process and rules of analytically continuing a function (i.e., extending the function's domain so that it remains analytic), not just from the reals to the complex numbers but from one part of the complex plane to another, have become an important object of study in complex analysis. Despite its reputation for being difficult and mysterious, analytic continuation embodies the "essence" of functions of one or more complex variables, namely that the identity of a function is completely determined, or "encoded", by its values on an arbitrarily small neighborhood, anywhere, and it deserves to be made a central theme of the whole subject.

Extending familiar functions to complex numbers is not just an idle pursuit of the pure mathematician; it often provides new insights into, and sometimes holds the key to, questions that one might ask on the real line. For example,

  • Is there a way to define $\log$ (always the natural logarithm) of negative numbers?
  • Why does the Taylor series of $e^{x}$ converge for all $x$, but that of $\frac{1}{1+x^{2}}$ or $\arctan x$ only does so for $|x|\le 1$, even though they are infinitely differentiable for all $x$?
  • Why is it that we typically only need to check trigonometric identities for acute angles, and they are automatically true for all reals?
  • What do people mean when they say that $1+2+3+\cdots=-\frac{1}{12}$?

Prior knowledge of complex analysis is not assumed, except for some acquaintance with the algebra of $\mathbb{C}$. For a quick visual introduction, aimed specifically at the last question, see Visualizing the Riemann zeta function and analytic continuation by 3blue1brown. Due to a lack of illustrative graphics, this article shall, for the most part, stay away from the geometric perspective.

Prelude: Analytic continuation on the real line

It is possible to discuss analytic continuation on the real line, and it may be instructive to do so. The trigonometric functions are originally defined for acute angles, i.e., for $0<\theta<\frac{\pi}{2}$, and it is not as trivial a matter as it may seem to extend them to all reals. Yes, we are told in school to define them by way of the unit circle, which has the appeal of being periodic and "sine-wavy", but how can we be sure that the plethora of identities derived from geometry remain valid for all real numbers? Instead, let's take the identities $\sin\theta=\cos\left(\theta-\frac{\pi}{2}\right)$ and $\cos\theta=-\sin\left(\theta-\frac{\pi}{2}\right)$ as the definition of sine and cosine, respectively, for $\frac{\pi}{2}\le\theta<\pi$; but does it still hold, for instance, that $\sin^{2}\theta+\cos^{2}\theta=1$ there? That's not hard to check (algebraically), and we can proceed to analytically continue again, to $\pi\le\theta<\frac{3\pi}{2}$, and so on to infinity. As cumbersome as it is, the same process is one of the most useful techniques of analytic continuation, namely continuation via a functional equation. We shall revisit it later.
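Here is a minimal numerical sketch of that strip-by-strip process, in Python (the base Taylor polynomials, the truncation degree, and the test angle are arbitrary choices of ours, not part of the text above):

```python
import math

def sin_cos_base(theta, terms=12):
    """Truncated Taylor series for sin and cos, adequate on [0, pi/2)."""
    s = sum((-1)**k * theta**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))
    c = sum((-1)**k * theta**(2*k) / math.factorial(2*k) for k in range(terms))
    return s, c

def sin_cos(theta):
    """Extend sin/cos to all theta >= 0 by repeatedly using
    sin(t) = cos(t - pi/2) and cos(t) = -sin(t - pi/2)."""
    shifts = 0
    while theta >= math.pi / 2:   # pull theta back into [0, pi/2)
        theta -= math.pi / 2
        shifts += 1
    s, c = sin_cos_base(theta)
    for _ in range(shifts):       # undo the shifts via the functional equation
        s, c = c, -s
    return s, c

print(sin_cos(4.0))               # compare with (math.sin(4.0), math.cos(4.0))
```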

From real to complex

The most basic type of function is the polynomial, such as $p(z)=z^{2}+z+1$, and to extend a polynomial to the complex plane we only need to know how to add and multiply two complex numbers. Moreover, the complex numbers form a field, i.e., we can also divide by any nonzero complex number, so we can just as easily extend any rational function, i.e., the quotient of two polynomials, to the complex plane, minus a finite number of points. The rule seems to be that, as long as a function is expressed using algebraic operations, we have no problem extending it to complex numbers. (That is just the "mechanical" or "algorithmic" aspect of what the function is; to get a more complete picture, we would need to see the geometry of complex numbers, which incidentally is the key to the Fundamental Theorem of Algebra.)
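Since Python's built-in complex type already knows how to add, multiply, and divide, a sketch of this "direct extension" might look as follows (the rational function here is an arbitrary illustration):

```python
def p(z):
    """The polynomial z^2 + z + 1, now happy to accept complex arguments."""
    return z**2 + z + 1

def r(z):
    """A rational function; defined everywhere except where the denominator vanishes."""
    return (z**3 - 1) / (z**2 + 1)   # poles at z = +i and z = -i

print(p(2 + 3j))     # (-2+15j)
print(r(2 + 1j))     # (1.5+1.25j); r(1j) would raise ZeroDivisionError
```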

What about this function?

$f(x)=\begin{cases}x^{2}, & x\ge 0,\\ 0, & x<0?\end{cases}$

We can extend the $x^{2}$ part and the $0$ part separately, to the right and left half-planes, and have them meet at the vertical (imaginary) axis. Looking at the point $i$, the value is $i^{2}=-1$ on one side and $0$ on the other, so the extension is not continuous there. You may try to push the boundary to the left or right, but it can never be made continuous, let alone differentiable. Even in the vicinity of $0$, the two sides don't "fit together", even though along the real axis the function is differentiable. You may try functions that are twice (or more) differentiable, and you will run into the same problem. It's best that we declare that such functions can't be extended to the complex domain. We might add to our rule of thumb that "piecewise defined" functions, including expressions that use the absolute value, are not permitted.


Next, consider transcendental functions, e.g., $e^{x}$. One approach is to approximate it by polynomials, which can be extended to the complex plane. Indeed, given the Taylor series

$e^{x}=1+x+\frac{x^{2}}{2!}+\frac{x^{3}}{3!}+\cdots=\sum_{n=0}^{\infty}\frac{x^{n}}{n!},$

it is natural to define $e^{z}$, for complex $z$, to be the limit of the successive polynomial approximations, i.e., the power series, which does converge for all $z$. That is a quick and efficient way to extend functions, including sine and cosine, to the whole complex plane.
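As a sanity check, here is a small sketch of the partial sums of this series at a complex point, compared against the library exponential cmath.exp (the evaluation points and the number of terms are arbitrary choices):

```python
import cmath

def exp_series(z, terms=30):
    """Partial sum of the power series 1 + z + z^2/2! + ... """
    total, term = 0 + 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)   # next term: z^(n+1)/(n+1)!
    return total

z = 2 + 3j
print(exp_series(z))               # converges to the same value as...
print(cmath.exp(z))                # ...the built-in complex exponential
print(exp_series(1j * cmath.pi))   # approximately -1, previewing Euler's identity
```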


We can now evaluate $e^{i\theta}$ for real $\theta$: plugging $z=i\theta$ into the series and separating the real and imaginary parts,

$e^{i\theta}=\left(1-\frac{\theta^{2}}{2!}+\frac{\theta^{4}}{4!}-\cdots\right)+i\left(\theta-\frac{\theta^{3}}{3!}+\frac{\theta^{5}}{5!}-\cdots\right)=\cos\theta+i\sin\theta.$

The connection between trigonometric and exponential functions is much celebrated in Euler's identity, $e^{i\theta}=\cos\theta+i\sin\theta$, which is both a gem in itself and extremely useful in deriving other formulas.


What about $\log x$? Its Taylor series at $x=1$ is

$\log x=(x-1)-\frac{(x-1)^{2}}{2}+\frac{(x-1)^{3}}{3}-\cdots,$

and for complex $z$, the series converges only inside the disk $|z-1|<1$. It is a general rule that any power series converges in a circular disk, and diverges outside it (and may or may not converge at a point on the boundary circle). The "radius of convergence" from calculus is literally a radius.
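A quick numerical way to see this disk (a rough sketch; the sample points and truncation lengths are arbitrary) is to compare partial sums of the series above with cmath.log at a point inside the disk, and to watch them blow up at a point outside it:

```python
import cmath

def log_series(z, terms=200):
    """Partial sum of (z-1) - (z-1)^2/2 + (z-1)^3/3 - ..., centered at 1."""
    w = z - 1
    return sum((-1) ** (n + 1) * w ** n / n for n in range(1, terms + 1))

inside = 1 + 0.5j          # |z - 1| = 0.5 < 1: the partial sums settle down
outside = 2.5 + 0j         # |z - 1| = 1.5 > 1: the partial sums blow up
print(log_series(inside), cmath.log(inside))
print(abs(log_series(outside, 50)), abs(log_series(outside, 200)))  # growing wildly
```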


Now that we have analytically continued $\log$ to the disk $|z-1|<1$, we still can't evaluate $\log(-1)$. Had we started with the Taylor series at a different point, say $x=2$, we would have extended $\log$ to a bigger disk, namely $|z-2|<2$. Crucially, the two extensions agree on the overlap. Furthermore, nothing stops us from taking the Taylor series at a center off the real axis, and curiously the disk of convergence always has the origin on its boundary circle, i.e., the series converges in as big a disk as possible, for we know $\log$ must have a "singularity" at $0$. We could then in theory reach $-1$ by successive Taylor expansions, even though this is difficult to calculate with.
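The re-expansion step can be made concrete. In the sketch below (purely illustrative, with arbitrarily chosen center, truncation lengths, and test point), we take the series for $\log$ centered at $1$, recompute coefficients about a new center inside the disk via the binomial theorem, and then evaluate the new series at a point outside the original disk $|z-1|<1$:

```python
import cmath
from math import comb

# log z about z = 1:  log z = sum_{k>=1} (-1)^(k+1) (z-1)^k / k,  valid for |z-1| < 1
N = 300
c = [0.0] + [(-1) ** (k + 1) / k for k in range(1, N + 1)]

def recenter(coeffs, a, b, M):
    """Coefficients about the new center b of the series sum_k coeffs[k] (z-a)^k,
    obtained by expanding (z-a)^k = ((z-b) + (b-a))^k with the binomial theorem."""
    h = b - a
    return [sum(coeffs[k] * comb(k, m) * h ** (k - m) for k in range(m, len(coeffs)))
            for m in range(M + 1)]

b = 1 + 0.5j                  # new center, still inside the original disk |z-1| < 1
d = recenter(c, 1, b, M=40)   # keep only the first 41 re-centered coefficients

z = 1 + 1.05j                 # outside |z-1| < 1, but inside the new disk |z-b| < |b|
value = sum(d[m] * (z - b) ** m for m in range(len(d)))
print(value)
print(cmath.log(z))           # the two should agree closely
```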


There is another way to analytically continue $\log$, and that is to start from the formula

$\log x=\int_{1}^{x}\frac{dt}{t}$

and now to integrate from $1$ to a complex point $z$, i.e., we will need to pick a path in the complex plane. For illustration, we shall carry out the calculation for $z=i$. Let the path be the "quarter circle" from $1$ to $i$, so that the $t$ in the integrand is of the form $t=e^{i\theta}$ with $0\le\theta\le\frac{\pi}{2}$ (a change of variable, if you will); blindly following the calculus of differentials, we have $dt=ie^{i\theta}\,d\theta$, so that

$\log i=\int_{1}^{i}\frac{dt}{t}=\int_{0}^{\pi/2}\frac{ie^{i\theta}}{e^{i\theta}}\,d\theta=\frac{i\pi}{2}.$

That may not be as surprising as it seems if we turn it around: $e^{i\pi/2}=i$, which is Euler's identity. But what if we had taken a different path, say from $1$ straight up to $1+i$, then straight to the left to $i$? Again trusting calculus,

$\int_{1}^{i}\frac{dt}{t}=\int_{0}^{1}\frac{i\,dy}{1+iy}+\int_{1}^{0}\frac{dx}{x+i},$

but these are not obvious to evaluate. However, we could write a computer program to approximate the integrals, which would only involve algebraic operations, and we would find that the result matches the earlier one, $\frac{i\pi}{2}$. It is the hallmark of complex analysis that integration in the complex domain is independent of the path taken (with an important caveat; see later), and even defining these integrals properly is part of the standard course in complex analysis (see Cauchy's integral theorem).
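Such a program might look like the following sketch, which approximates both integrals by a crude midpoint rule along the two paths (the step count is arbitrary):

```python
import cmath

def integrate(f, path, n=50000):
    """Approximate the integral of f along a parametrized path t(s), 0 <= s <= 1,
    by the midpoint rule: sum of f(t(midpoint)) * (t(s1) - t(s0))."""
    total = 0j
    for k in range(n):
        s0, s1 = k / n, (k + 1) / n
        total += f(path((s0 + s1) / 2)) * (path(s1) - path(s0))
    return total

f = lambda t: 1 / t

quarter_circle = lambda s: cmath.exp(1j * s * cmath.pi / 2)                  # 1 around to i
two_segments = lambda s: 1 + 1j * (2 * s) if s < 0.5 else (2 - 2 * s) + 1j   # 1 -> 1+i -> i

print(integrate(f, quarter_circle))   # approximately 1.5707963j = i*pi/2
print(integrate(f, two_segments))     # the same value (up to discretization error)
print(1j * cmath.pi / 2)
```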


Now we can answer the question of the log of negative numbers. The singularity at zero poses an obstacle to analytic continuation, so we have to "get around" it. It turns out the answer

$\log(-1)=\pm i\pi$

depends on which way we go around it (above or below), but not on the specific method (either successive Taylor expansion or a path integral). That is why, in real-variable calculus, we leave the log of negative numbers undefined. Now we have to make a choice: either choose one of the values (and accept that the function is no longer continuous on the negative real axis), or leave it undefined, so that the domain of log is a simply-connected region, meaning that a loop inside it can always be shrunk to a point. A third option is to keep analytically continuing, and accept that the function shall be multi-valued. More on this later.
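For a quick illustration of the two values, Python's cmath.log implements the first option, choosing the branch that is discontinuous across the negative real axis; approaching $-1$ from just above or just below the axis lands on the two different values:

```python
import cmath

# Just above and just below the negative real axis: the two continuations disagree.
print(cmath.log(-1 + 1e-12j))   # approximately +3.14159...j
print(cmath.log(-1 - 1e-12j))   # approximately -3.14159...j

# The principal branch keeps log single-valued by cutting the plane along (-inf, 0].
print(cmath.log(-1))            # 3.141592653589793j
```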

The general rule seems to be that, if a function can be expressed as an integral, then we can extend it to as large a simply-connected domain as the integrand makes sense on. Other examples include the inverse trigonometric functions:

$\arcsin z=\int_{0}^{z}\frac{dt}{\sqrt{1-t^{2}}}$

and

$\arctan z=\int_{0}^{z}\frac{dt}{1+t^{2}},$

where the "obstacles" are located at $\pm 1$ and $\pm i$, respectively.
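Here is a sketch of that rule in action for the second formula (the straight-line path, the step count, and the comparison against cmath.atan are our own choices): we integrate $\frac{1}{1+t^{2}}$ from $0$ to a complex point along a segment that stays away from the obstacles at $\pm i$:

```python
import cmath

def arctan_via_integral(z, n=100000):
    """Integrate 1/(1+t^2) along the straight segment from 0 to z (midpoint rule)."""
    total = 0j
    for k in range(n):
        mid = z * (k + 0.5) / n            # midpoint of the k-th sub-segment
        total += (1 / (1 + mid * mid)) * (z / n)
    return total

z = 2 + 1j                                 # the segment from 0 to 2+i avoids +i and -i
print(arctan_via_integral(z))
print(cmath.atan(z))                       # built-in arctangent for comparison
```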


Of a different flavor is the Gamma function

$\Gamma(s)=\int_{0}^{\infty}t^{s-1}e^{-t}\,dt,$

which makes sense (i.e., the integral converges) for $\operatorname{Re}s>0$. To extend it to the left, we make use of the functional equation (an equation relating the function with itself)

$\Gamma(s+1)=s\,\Gamma(s),$

which, among other things, implies that $\Gamma(n)=(n-1)!$ at the positive integers (that is why the $\Gamma$-function is an "interpolation" of the factorial). That is, let $\Gamma(s):=\frac{\Gamma(s+1)}{s}$ be the definition for $-1<\operatorname{Re}s\le 0$ (except at $s=0$). Having analytically continued the $\Gamma$-function one strip to the left, we can use the same formula for $-2<\operatorname{Re}s\le -1$, and so on.
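The strip-by-strip recipe translates directly into code. The sketch below uses a crude midpoint-rule quadrature for the defining integral (truncated at an arbitrary upper limit) and then divides by $s$, $s+1$, $\dots$ as many times as needed; pushing until $\operatorname{Re}s\ge 1$ is a convenience that also keeps the integrand bounded near $t=0$:

```python
import cmath

def gamma_right(s, upper=40.0, n=200000):
    """Crude midpoint-rule approximation of the defining integral, for Re(s) >= 1."""
    h = upper / n
    return sum(((k + 0.5) * h) ** (s - 1) * cmath.exp(-(k + 0.5) * h) * h
               for k in range(n))

def gamma(s):
    """Analytic continuation via Gamma(s) = Gamma(s+1)/s, applied until Re(s) >= 1."""
    factor = 1
    while s.real < 1:
        factor *= s          # accumulate the denominators s, s+1, s+2, ...
        s += 1
    return gamma_right(s) / factor

print(gamma(4 + 0j))          # approximately 6 = 3!
print(gamma(-0.5 + 0j))       # approximately -3.5449... = -2*sqrt(pi)
```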


For a functional equation that involves the derivative, consider, for any smooth function $\phi$ with compact support,

$F_{\phi}(s)=\int_{0}^{\infty}x^{s-1}\phi(x)\,dx,$

which again converges for $\operatorname{Re}s>0$, with $\phi$ fixed (and real). It is easy to check, by integrating by parts, that $F_{\phi}(s)=-\frac{1}{s}F_{\phi'}(s+1)$, so we can analytically continue $F_{\phi}$ to the full $s$-plane as before, strip by strip. This goes to show that complex functions go well beyond those isolated examples of special functions; here we have one for each smooth, compactly supported $\phi$.


Let's conclude this long overview with a summary of techniques of analytic continuation:

  • direct extension of algebraic operations
  • (successive) Taylor series expansion
  • integral along a path in the complex domain
  • functional equation

The big question that should be answered is: Why in the universe should all these different analytic continuations result in the same function? We shall first explore this principle of analytic continuation and its implications, then discuss the various obstacles to analytic continuation and how the principle may fail, and finally explain what the "mild criterion" is and state the theorem that formalizes the principle (almost the opposite of the logical order).

The Principle of Analytic Continuation

It is a remarkable feature that analytic continuation, if possible at all, is unique: two different methods of analytic continuation must agree, and we are free to choose whichever method is convenient.

Obstructions to analytic continuation

We have seen that analytic continuation can run into an obstruction when the function "blows up", or has a singularity, and that we may "get around it" by entering the complex plane. One can classify the different ways in which this happens.

  • The obstruction is a single point, and the analytic continuations from the two sides of it actually agree. The standard examples are the rational functions, which have a pole wherever the denominator is zero. In general, the quotient of two holomorphic functions also gives rise to poles, e.g., $\tan z=\frac{\sin z}{\cos z}$, and such functions are called meromorphic. For technical reasons, there is one other type of singularity that is not considered a pole.
  • The obstruction is a single point, but the analytic continuations from the two sides don't agree. Examples are algebraic functions that involve radicals, such as $\sqrt{z}$. If we allow the two analytic continuations to carry on, as if they live on two separate "sheets" or branches, they may come to agree after meeting a second time (see the sketch below). We have then constructed a Riemann surface on which the function is naturally defined. In other cases, such as $\log z$, the continuations never come to agree, so we get an "infinite-sheeted" Riemann surface.
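The two-sheeted behavior of the square root can be seen numerically. The sketch below (with an arbitrary number of steps) continues $\sqrt{z}$ along the unit circle by always choosing the square root closer to the previous value; after one loop around $0$ the sign has flipped, and after a second loop it returns to where it started:

```python
import cmath

def continue_sqrt_around_circle(loops, steps=1000):
    """Analytically continue sqrt(z) along the unit circle, starting at z = 1
    with sqrt(1) = 1, by picking at each step the square root nearer the last value."""
    w = 1 + 0j                                    # current value of sqrt(z)
    for k in range(1, loops * steps + 1):
        z = cmath.exp(2j * cmath.pi * k / steps)  # next point on the circle
        r = cmath.sqrt(z)                         # one of the two square roots
        w = r if abs(r - w) < abs(-r - w) else -r
    return w

print(continue_sqrt_around_circle(1))   # approximately -1: we landed on the other sheet
print(continue_sqrt_around_circle(2))   # approximately +1: back where we started
```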

Analyticity and holomorphic functions

As diverse as the ways of defining functions on the complex domain are, there is a simple criterion that they all satisfy, and it turns out to be all you need: it is what is known by the term holomorphic, namely being complex-differentiable throughout an open set (a domain) of the complex plane. It is the highlight of Cauchy's theory that being holomorphic implies analyticity.
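In symbols (a standard formulation, recorded here for reference): $f$ is holomorphic on a domain $D$ if, at every $z_{0}\in D$, the limit

$f'(z_{0})=\lim_{h\to 0}\frac{f(z_{0}+h)-f(z_{0})}{h}$

exists, where $h$ is allowed to approach $0$ from any direction in the complex plane.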

Contrast that with the real-variable case, where functions that are differentiable are very far from being analytic. Even being infinitely differentiable (smooth) does not make a function (real) analytic, as shown by the existence of this function:

$f(x)=\begin{cases}e^{-1/x}, & x>0,\\ 0, & x\le 0.\end{cases}$

Every derivative of $f$ at $0$ is $0$, so its Taylor series at $0$ is identically zero and cannot represent $f$ on any neighborhood of $0$. With complex eyes we see more clearly why it fails to be analytic: the expression $e^{-1/z}$ blows up as $z\to 0$ from the left along the real axis, so there is no holomorphic extension of $f$ to any complex neighborhood of $0$.
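A small numerical sketch of the same phenomenon (the sample points are arbitrary): along the positive real axis $e^{-1/z}$ decays rapidly to $0$, but just to the left of $0$ it explodes, and no single power series at $0$ can capture both behaviors.

```python
import cmath

f = lambda z: cmath.exp(-1 / z)

for t in [0.5, 0.1, 0.02]:
    print(f"x = +{t}: |f| = {abs(f(t)):.3e}    x = -{t}: |f| = {abs(f(-t)):.3e}")
# As t -> 0, the left column tends to 0 while the right column blows up,
# which is incompatible with f being given by a single convergent power series at 0.
```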