# Introduction


## ABSTRACT

Classical asymptotic analysis studies the limiting behavior of functions as singular points are approached. It shares with analytic function theory the goal of providing a detailed description of functions, and it is distinguished from it by the fact that the main focus is on singular behavior. Asymptotic expansions provide increasingly better approximations as the special points are approached, yet they rarely converge to a function. The asymptotic expansion of an analytic function at a regular point is the same as its convergent Taylor series there.

The local theory of analytic functions at regular points is largely a theory of convergent power series. We have $-\ln(1-x) = \sum_{k=1}^{\infty} x^k/k$; the behavior of the log near one is transparent from the series, which also provides a practical way to calculate $\ln(1-x)$ for small $x$. Likewise, to calculate $z! := \Gamma(1+z) = \int_0^{\infty} e^{-t} t^z \, dt$ for small $z$ we can use

$$\ln\Gamma(1+z) = -\gamma z + \sum_{k=2}^{\infty} \frac{(-1)^k \zeta(k) z^k}{k}, \quad (|z| < 1), \qquad \text{where } \zeta(k) := \sum_{j=1}^{\infty} j^{-k} \tag{1.1}$$

and $\gamma = 0.5772...$ is the Euler constant (see Exercise 4.62 on p. 99). Thus, for small $z$ we have
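As a quick illustration, the truncated series (1.1) can be compared against a library value of $\ln\Gamma$. The following is a minimal Python sketch; summing $\zeta(k)$ directly and the truncation orders used are illustrative choices, not prescribed by the text:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler's constant gamma

def zeta(k, terms=100_000):
    # zeta(k) = sum_{j>=1} j^{-k}, summed directly; adequate for k >= 2.
    return sum(j ** -k for j in range(1, terms + 1))

def ln_gamma_series(z, M=30):
    # Truncation of Eq. (1.1):
    # ln Gamma(1+z) = -gamma*z + sum_{k>=2} (-1)^k zeta(k) z^k / k, |z| < 1.
    return -EULER_GAMMA * z + sum(
        (-1) ** k * zeta(k) * z ** k / k for k in range(2, M + 1)
    )

z = 0.1
print(ln_gamma_series(z))   # agrees with math.lgamma(1 + z) to about six digits
```

The series converges quickly for $|z|$ well inside the unit disk, so the accuracy here is limited mainly by the crude direct summation of $\zeta(2)$.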

$$\Gamma(1+z) = \exp\left(-\gamma z + \pi^2 z^2/12 - \cdots\right) = \exp\left(-\gamma z + \sum_{k=2}^{M} (-1)^k \zeta(k) k^{-1} z^k\right)\bigl(1 + o(z^M)\bigr) \tag{1.2}$$

where, as usual, $f(z) = o(z^j)$ means that $z^{-j} f(z) \to 0$ as $z \to 0$. $\Gamma(z)$ has a pole at $z = 0$; $z\Gamma(z) = \Gamma(1+z)$ is described by the convergent power series

$$z\Gamma(z) = \exp\left(-\gamma z + \sum_{k=2}^{M} (-1)^k \zeta(k) k^{-1} z^k\right)\bigl(1 + o(z^M)\bigr) \tag{1.3}$$
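Since the right-hand side of (1.3) stays finite as $z \to 0$, dividing by $z$ gives a stable way to evaluate $\Gamma(z)$ right next to its pole. A hedged sketch (again summing $\zeta(k)$ directly, with illustrative truncation orders):

```python
import math

EULER_GAMMA = 0.5772156649015329

def zeta(k, terms=100_000):
    # Direct summation of zeta(k) = sum_{j>=1} j^{-k}; adequate for k >= 2.
    return sum(j ** -k for j in range(1, terms + 1))

def gamma_near_pole(z, M=30):
    # Eq. (1.3): z*Gamma(z) = exp(-gamma*z + sum_{k=2}^{M} (-1)^k zeta(k) k^{-1} z^k)
    # up to a relative error o(z^M); dividing by z recovers Gamma(z) near z = 0.
    s = -EULER_GAMMA * z + sum(
        (-1) ** k * zeta(k) / k * z ** k for k in range(2, M + 1)
    )
    return math.exp(s) / z

print(gamma_near_pole(0.01))   # close to math.gamma(0.01), about 99.43
```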

This is a perfectly useful way of calculating $\Gamma(z)$ for small $z$. Now let us look at a function near an essential singularity, e.g., $e^{-1/z}$ near $z = 0$. Of course, multiplication by a power of $z$ does not remove the singularity, and the Laurent series contains all negative powers of $z$:

$$e^{-1/z} = \sum_{j=0}^{\infty} \frac{(-1)^j}{j!\, z^j}; \quad (z \ne 0) \tag{1.4}$$
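The behavior of (1.4) for small $z$ is easy to probe numerically. A sketch using only Python's standard library; a moderate $z$ is used here because, for very small $z$, the individual terms overflow floating point long before the series settles:

```python
import math

def laurent_partial_sum(z, M):
    # Partial sums of Eq. (1.4): e^{-1/z} = sum_{j>=0} (-1)^j / (j! z^j).
    return sum((-1) ** j / (math.factorial(j) * z ** j) for j in range(M + 1))

z = 0.05
print(math.exp(-1 / z))       # the true value e^{-20}, about 2.1e-9
for M in (2, 5, 10, 20):
    # The partial sums oscillate with huge magnitude before they converge:
    print(M, laurent_partial_sum(z, M))
```

The partial sums swing through values of order $10^5$ and larger while the target value is of order $10^{-9}$: keeping more terms initially makes things worse, exactly the antiasymptotic behavior described next.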

Eq. (1.4) is fundamentally distinct from the first examples. This can be seen by trying to calculate the function from its expansion for, say, $z = 10^{-10}$: (1.1) provides the desired value very quickly, while (1.4), also a convergent series, is virtually unusable. Mathematically, we can check that if $M$ is fixed and $z$ tends to zero through positive values, then

$$e^{-1/z} - \sum_{j=0}^{M} \frac{(-1)^j}{j!\, z^j} \gg z^{-M+1}, \quad \text{as } z \to 0^+ \tag{1.5}$$

where $\gg$ means "much larger than." In this sense, the more terms of the series we keep, the worse the error is! The Laurent series (1.4) is convergent, but antiasymptotic: the terms of the expansion get larger and larger as $z \to 0$. The function needs to be calculated there in a different way, and there are certainly many good ways. Surprisingly perhaps, the exponential, together with related functions such as log and sin (and powers, since we prefer the notation $x$ to $e^{\ln x}$), are the only ones we need in order to represent many complicated functions asymptotically. This fact was already noted by Hardy, who wrote [38], "No function has yet presented itself in analysis the laws of whose increase, in so far as they can be stated at all, cannot be stated, so to say, in logarithmico-exponential terms." This reflects an important fact about the relation between asymptotic expansions and functions, which will be clarified in §4.9.

If we need to calculate $\Gamma(x)$ for very large $x$, a Taylor series about one given point would not work, since the radius of convergence is finite (due to the poles on $\mathbb{R}^-$). Instead we have Stirling's series,

$$\ln\Gamma(x) \sim (x - 1/2)\ln x - x + \frac{1}{2}\ln(2\pi) + \sum_{j=1}^{\infty} c_j x^{-2j+1}, \quad x \to +\infty \tag{1.6}$$

where $2j(2j-1)c_j = B_{2j}$ and $\{B_{2j}\}_{j \ge 1} = \{1/6, -1/30, 1/42, ...\}$ are the Bernoulli numbers. This expansion is asymptotic as $x \to \infty$: successive terms get smaller and smaller. Stopping at $j = 3$ we get $\Gamma(6) \approx 120.00000086$, while $\Gamma(6) = 120$. Yet the expansion in (1.6) cannot converge to $\ln\Gamma(x)$, and in fact it has to have zero radius of convergence, since $\ln\Gamma(x)$ is singular at all $x \in -\mathbb{N}$ (why is this an argument?).

Unlike asymptotic expansions, convergent but antiasymptotic expansions do not contain manifest, detailed information. Of course, this is not meant to understate the value of convergent representations, or to advocate for uncontrolled approximations.
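The numerical check of Stirling's series (1.6) quoted above is easy to reproduce: truncating the sum at $j = 3$ and exponentiating recovers $\Gamma(6) = 120$ to within about $10^{-6}$. A minimal Python sketch, with the Bernoulli numbers hard-coded from the list above:

```python
import math

# B_{2j} for j = 1, 2, 3, from the list above.
B2J = {1: 1 / 6, 2: -1 / 30, 3: 1 / 42}

def ln_gamma_stirling(x, J=3):
    # Truncation of Stirling's series (1.6):
    # ln Gamma(x) ~ (x - 1/2) ln x - x + ln(2*pi)/2 + sum_j c_j x^{-2j+1},
    # where 2j(2j - 1) c_j = B_{2j}.
    s = (x - 0.5) * math.log(x) - x + 0.5 * math.log(2 * math.pi)
    for j in range(1, J + 1):
        c_j = B2J[j] / (2 * j * (2 * j - 1))
        s += c_j * x ** (-2 * j + 1)
    return s

print(math.exp(ln_gamma_stirling(6.0)))   # very close to Gamma(6) = 120
```

Increasing $J$ beyond a point would eventually make the approximation worse at fixed $x$, as the text's discussion of zero radius of convergence suggests; the sketch stops at $J = 3$ as in the example above.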