Almost nothing on this page constitutes original results; it is merely a collection of things I have found interesting, though I did rediscover many of them on my own.
Consider the formal power series \[ \sum_{n=1}^\infty \sum_{k=1}^\infty x^{nk} = \sum_{n=1}^\infty a_n x^n \] The coefficient \( a_p \) counts how many times the exponent \( nk \) attains the value \( p \). Since \( n \) and \( k \) are natural numbers and symmetric dummy variables, this is the number of ways \( p \) divided by \( n \) yields an integer \( k \); that is, \( a_p = \sigma_0(p) \), the number of divisors of \( p \). If \( |x| < 1 \), then \( |x^n| < 1 \) for \( n \geq 1 \) and approaches zero rapidly, so the inner sum over \( k \) is a convergent geometric series and the double series converges. We hope that by examining the function near the boundary \( |x| = 1 \), asymptotic information can be obtained about the divisor function. This is plausible: as in the ratio and root tests for convergence, the asymptotic behavior of a sequence is reflected in the radius of convergence of its generating function, and in the function's behavior near that radius.
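As a sanity check, the coefficient of \( x^p \) can be computed by brute force and compared with a direct divisor count; a small Python sketch (function names are mine):

```python
def divisor_count(p):
    """sigma_0(p): the number of divisors of p."""
    return sum(1 for d in range(1, p + 1) if p % d == 0)

def series_coefficient(p):
    """Coefficient of x^p in sum_{n,k >= 1} x^{nk}:
    the number of pairs (n, k) with n * k = p."""
    return sum(1 for n in range(1, p + 1)
                 for k in range(1, p + 1) if n * k == p)

assert all(series_coefficient(p) == divisor_count(p) for p in range(1, 60))
```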
Equating the two forms of the series and multiplying by \( 1 - x \), one finds \[ \sum_{n=1}^\infty \frac{x^n}{\frac{1-x^n}{1-x}} = (1 - x) \sum_{n=1}^\infty \sigma_0(n) x^n \] The left-hand side is nearly in the form of a generating function. If \( x \) is very close to \( 1 \), then \( \frac{1-x^n}{1-x} \) approaches \( n \). This is clearer upon writing \( x = 1 + \epsilon \) and examining \( \frac{(1+\epsilon)^n - 1}{\epsilon} = n + \frac{n(n-1)}{2}\epsilon + \mathcal{O}(\epsilon^2) \); but because \( x^n \) is still not negligible for large \( n \) when \( x \) is close to \( 1 \), it is difficult to assess the error in substituting \( n \) for \( \frac{1 - x^n}{1-x} \).
We won't let that stop us! If this obstacle were surmounted, we could say \begin{align} \sum_{n=1}^\infty \frac{x^n}{n} &\sim (1-x) \sum_{n=1}^\infty \sigma_0(n) x^n \\ \frac{1}{1-x} \sum_{n=1}^\infty \frac{x^n}{n} &\sim \sum_{n=1}^\infty \sigma_0(n) x^n \\ \sum_{n=1}^\infty H_n x^n &\sim \sum_{n=1}^\infty \sigma_0(n) x^n \end{align} where \( H_n \) is the \( n \)'th harmonic number. Equating coefficients, this hints that, at least on average, the \( n \)'th harmonic number is approximately the number of divisors of \( n \). This agrees with a weaker heuristic: if \( n \) is large, the probability of it being divisible by \( k \) is about \( k^{-1} \). Since all divisors lie between \( 1 \) and \( n \) inclusive, we could say \( H_n \) is "on average" the number of divisors of \( n \).
One flaw in this latter heuristic is that \( n \) doesn't actually have any divisors strictly between \( \frac{n}{2} \) and \( n \); these terms shouldn't be included in the sum. However, as \( \sum_{k=\frac{n}{2}+1}^{n-1} k^{-1} \sim \log(2) \), this distinction only makes an \( \mathcal{O}(1) \) difference. In a similar manner, one could argue that a more appropriate representation than \( H_n \) is the asymptotically equivalent \( \log(n) \), as that expression better reflects the divisor function's multiplicative properties.
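A quick numerical sketch of the heuristic (names are mine), using the counting trick \( \sum_{n \le N} \sigma_0(n) = \sum_{k \le N} \lfloor N/k \rfloor \): the average divisor count up to \( N \) and \( H_N \) both grow like \( \log(N) \) and stay within a bounded distance of each other.

```python
def mean_divisor_count(N):
    # sum_{n<=N} sigma_0(n) = sum_{k<=N} floor(N/k),
    # counting, for each k, how many n <= N it divides
    return sum(N // k for k in range(1, N + 1)) / N

def harmonic(N):
    return sum(1.0 / k for k in range(1, N + 1))

N = 10_000
# both are about log(N); their difference stays bounded (about 1 - gamma)
print(mean_divisor_count(N), harmonic(N))
```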
This was a problem posed in N.G. de Bruijn's Asymptotic Methods in Analysis, and since I found it quite challenging, I share one approach to it here. The solution does not take the traditional form of an expansion in reciprocal powers, as is usually obtained for Laplace transforms, and so requires a more careful strategy.
Problem: Show that
\[ t \int_2^\infty e^{-xt} \log^{-1}(x) \, \mathrm{d}x \approx
\sum_{n=0}^\infty c_n \log^{-n-1}(t) \quad (t \to 0^+) \]
with the coefficients \( c_n = - \int_0^\infty e^{-y} \log^n(y) \, \mathrm{d}y \).
Proof. We make the substitution \( u = xt \) into the integral and rearrange
to find that, when \( t > 0 \),
\begin{align}
\int_2^\infty te^{-xt} \log^{-1}(x) \, \mathrm{d}x &= \int_{2t}^\infty e^{-u} \frac{\mathrm{d}u}{\log(\frac{u}{t})} \\
&= \int_{2t}^\infty e^{-u} \frac{\mathrm{d}u}{\log(u) - \log(t)} \\
&= -\frac{1}{\log(t)} \int_{2t}^\infty e^{-u} \frac{\mathrm{d}u}{1-\frac{\log(u)}{\log(t)}}
\end{align}
When \( t \) gets very small, so does the magnitude of \( \frac{\log(u)}{\log(t)} \) for fixed \( u \), so applying the power series
for \( \frac{1}{1-x} \) should be a good approximation. Note, however, that for small \( t \) we leave the radius of convergence
at \( u = t^{-1} \); examining the remainder beyond that point, we find
\[ \int_{t^{-1}}^\infty e^{-u} \frac{\mathrm{d}u}{1-\frac{\log(u)}{\log(t)}}
\leq \frac{1}{2} \int_{t^{-1}}^\infty e^{-u} \, \mathrm{d}u \]
Thus, as the remainder is exponentially small, we may restrict the range of integration to \( (2t, t^{-1}) \) with no significant difference
from the asymptotic perspective. Our integral of interest is therefore asymptotically equivalent to
\begin{align}
&-\frac{1}{\log(t)} \int_{2t}^{t^{-1}} e^{-u} \sum_{n=0}^\infty \left( \frac{\log(u)}{\log(t)} \right)^n \, \mathrm{d}u \\
=&-\sum_{n=0}^\infty \frac{1}{\log^{n+1}(t)} \int_{2t}^{t^{-1}} \log^n(u) e^{-u} \, \mathrm{d}u
\end{align}
The coefficients are nearly of the expected form; we wish to show that the integration range may be broadened to \( (0, \infty) \)
without significant consequence. The integral \( \int_0^{2t} \log^n(u) e^{-u} \, \mathrm{d}u \) can be shown to be
\( \mathcal{O}(t \log^n(2t)) \), and since this expression diminishes, as \( t \to 0^+ \), more rapidly than any term
of our asymptotic expansion, the lower bound of the integrals defining our coefficients
may be changed to zero without consequence.
Next we wish to show that the upper bound may be taken to infinity, again by showing that the consequence is smaller than every
term of the primary asymptotic expansion we are composing.
Since the terms of the expansion are reciprocal powers of the logarithm, we wish to show for all natural \( k \) that
\[ \int_{t^{-1}}^\infty \log^n(u) e^{-u} \, \mathrm{d}u = o(\log^{-k}(t)) \quad (t \to 0^+) \]
An application of L'Hôpital's rule to the corresponding limit shows that this holds no matter how large a \( k \) is chosen, so taking
the upper bound to infinity causes no error beyond what is already incurred in crafting the expansion.
With the lower bound moved to zero and the upper bound to infinity, the integration interval becomes \( (0, +\infty) \)
and the coefficient integrals no longer depend on \( t \); they are precisely
\( c_n = - \int_0^\infty e^{-y} \log^n(y) \, \mathrm{d}y \), which is what we wished to show.
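The expansion can be checked numerically. Here is a Python sketch (names are mine) using a midpoint rule for the integral and the first two coefficients \( c_0 = -1 \) and \( c_1 = \gamma \), which follow from \( \int_0^\infty e^{-y} \, \mathrm{d}y = 1 \) and \( \int_0^\infty e^{-y} \log(y) \, \mathrm{d}y = -\gamma \):

```python
import math

def laplace_side(t, upper=40.0, n=400_000):
    # t * integral_2^infty e^{-x t} / log(x) dx, after the substitution u = x t:
    # integral_{2t}^infty e^{-u} / (log(u) - log(t)) du, truncated at `upper`
    # (the tail beyond u = 40 is below e^{-40}) and evaluated by the
    # composite midpoint rule
    a = 2.0 * t
    h = (upper - a) / n
    lt = math.log(t)
    total = 0.0
    for i in range(n):
        u = a + (i + 0.5) * h
        total += math.exp(-u) / (math.log(u) - lt)
    return total * h

t = 1e-4
value = laplace_side(t)
two_terms = -1.0 / math.log(t) + 0.5772156649015329 / math.log(t) ** 2
print(value, two_terms)  # difference is roughly the size of the omitted c_2 term
```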
This section is written with calculators in mind that have built-in numerical integration procedures which, however, may not be very sophisticated. The calculator I am using is a TI-36X Pro. In general, this makes approximating special functions represented by integrals much easier on such calculators; but because they are not very smart, and typically do not support improper integrals, some care is required.
I have found that for evaluating the gamma function at values larger than \( 3 \), say, it is best to use Stirling's approximation and then to use Binet's formula for the remainder to make the approximation as exact as possible. I find that in this way, on my calculator, the absolute error is near-zero for larger values of \( x \), and only a couple of decimal places off for smaller values of \( x \). The substitution \( x \to -\ln(x) \) is required to change the integration bounds from \( (0, \infty) \) to \( (0, 1) \). The approximation that works for me is \[ \ln(\Gamma(z + 1)) = \left(z + \frac{1}{2}\right) \ln(z) - z + \frac{\ln(2\pi)}{2} - \int_0^1 \frac{x^{z-1}}{\ln(x)} \left(\frac{x}{1 - x} + \frac{1}{\ln(x)} + \frac{1}{2} \right) \mathrm{d}x \]
The results on my calculator are near-exact, so that exponentiating the results of this approximation incurs very little loss of precision. I experimented with applying integral substitutions directly to the definition of the gamma function, and although it worked the run time was very long.
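For those without such a calculator at hand, the same formula can be sketched in Python with a simple midpoint rule, which conveniently never samples the endpoints \( x = 0 \) and \( x = 1 \), where the integrand would need limiting values (the function name and quadrature choice are mine, not what the calculator does internally):

```python
import math

def log_gamma(z, n=200_000):
    # Stirling's approximation plus the Binet remainder, after the
    # substitution x -> -ln(x) mapping (0, infty) onto (0, 1); the
    # composite midpoint rule avoids both endpoints of the interval
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n            # midpoint of the i-th subinterval
        lx = math.log(x)
        total += x**(z - 1.0) / lx * (x / (1.0 - x) + 1.0 / lx + 0.5)
    integral = total / n
    return (z + 0.5) * math.log(z) - z + 0.5 * math.log(2.0 * math.pi) - integral

print(log_gamma(5.0))   # should be close to ln(Gamma(6)) = ln(120) ≈ 4.7875
```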
For the majority of numerical purposes in applied mathematics, a scientific calculator will do.
Using one instead of a graphing calculator not only avoids proprietary software; some would say it even enforces good habits.
Special functions are rarely needed; that's why they're called special functions.
However, there still needs to be a way to compute their values on those rare occasions, and this blurb documents
mediocre approximation methods that compromise between simplicity and precision.
I will update this page as I discover new formulae, but for now I concern myself only with the exponential integral.
It is trivial to apply the following strategy to the sine and cosine integrals, as well as different variants of the exponential integral.
Here we concern ourselves only with what is often denoted as \( E_1(x) \) in the literature.
We have
\[ \int_x^\infty \frac{e^{-t}}{t}\, \mathrm{d}t \approx \frac{e^{-x}}{x} \frac{x^3 + 8x^2 + 11x}{x^3 + 9x^2 + 18x + 6} \]
This can be derived by computing the Padé approximant of the well-known asymptotic expansion for large \( x \).
Because Padé approximants are magical, it just so happens that this approximation works well for medium-sized \( x \) as well.
For small \( x \), a different strategy should be employed.
For example, one can make a rational approximant out of the ordinary power series like so:
\[ \int_x^\infty \frac{e^{-t}}{t}\, \mathrm{d}t \approx -\gamma - \log(x) + \frac{18x^2 + 72x}{5x^2 + 36x + 72} \]
Unlike the rest of the results on this page, I believe these formulae are new.
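Both approximants are easy to check in Python against the convergent series \( E_1(x) = -\gamma - \log(x) + \sum_{n \ge 1} (-1)^{n+1} x^n / (n \cdot n!) \) (a sketch; the function names are mine):

```python
import math

EULER_GAMMA = 0.5772156649015329

def e1_large(x):
    # Pade approximant of the asymptotic expansion; for medium-to-large x
    return (math.exp(-x) / x) * (x**3 + 8*x**2 + 11*x) / (x**3 + 9*x**2 + 18*x + 6)

def e1_small(x):
    # rational approximant of the ordinary power series; for small x
    return -EULER_GAMMA - math.log(x) + (18*x**2 + 72*x) / (5*x**2 + 36*x + 72)

def e1_reference(x, terms=60):
    # convergent series E1(x) = -gamma - log(x) + sum (-1)^{n+1} x^n / (n * n!)
    s = -EULER_GAMMA - math.log(x)
    sign, fact = 1.0, 1.0
    for n in range(1, terms + 1):
        fact *= n
        s += sign * x**n / (n * fact)
        sign = -sign
    return s

print(e1_small(0.1), e1_reference(0.1))   # agree to several digits
print(e1_large(5.0), e1_reference(5.0))   # agree to several digits
```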
The remainders after truncation of the conventional power series for \( \log(x) \) and \( \arctan(x) \)
may be expressed as inverse factorial series in the number of summands. These expressions give way to asymptotic
expansions and precise error estimates for generic values of \( x \), generalizing known expressions in the literature.
Lemma. For all real \( t \neq 1 \) and all integers \( k \geq 0 \), we have
\[ \frac{1}{1-t} = \sum_{n=0}^{k-1} t^n + \frac{t^k}{1-t} \]
This can be shown in a variety of ways, such as by induction or by multiplying the right-hand side by \( 1 - t \)
and checking that the result equals one, so we omit its straightforward proof for brevity.
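The identity is also easy to spot-check numerically for a sample value:

```python
t, k = -0.7, 9
lhs = 1 / (1 - t)
rhs = sum(t**n for n in range(k)) + t**k / (1 - t)
print(lhs, rhs)   # equal up to rounding
```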
Substituting \( -t \) for \( t \) in this Lemma and integrating term by term from \( 0 \) to \( x \),
we find an expression for \( \log(1+x) \), valid for real \( x > -1 \):
\[ \log(1+x) = \sum_{n=0}^{k-1} \frac{(-1)^n x^{n+1}}{n+1} + (-1)^k \underbrace{\int_0^x \frac{t^k}{1+t} \, \mathrm{d}t}_{R_k(x)} \]
We denote the remainder by \( R_k(x) \) and wish to examine its behavior,
particularly for fixed \( x \) and varying values of \( k \).
Theorem. For every integer \( p \geq 0 \), we have
\[ R_k(x) = \sum_{n=1}^p \frac{x^{k+n} (n-1)!}{(k+1)(k+2)\dotsm (k+n) (1+x)^n}
+ \frac{p!}{(k+1)(k+2)\dotsm (k+p)} \int_0^x \frac{t^{k+p}}{(1+t)^{p+1}}\, \mathrm{d}t \]
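A numerical spot-check of the Theorem in Python (variable names are mine): the exact remainder follows from the closed form \( R_k(x) = (-1)^k \bigl( \log(1+x) - \sum_{n=0}^{k-1} (-1)^n x^{n+1}/(n+1) \bigr) \), and the \( p \)-term sum should undershoot it by the small positive remainder integral.

```python
import math

x, k, p = 0.5, 5, 3

# exact remainder via R_k(x) = (-1)^k (log(1+x) - partial sum)
partial = sum((-1)**n * x**(n + 1) / (n + 1) for n in range(k))
R = (-1)**k * (math.log(1 + x) - partial)

# the p-term inverse-factorial expansion from the Theorem
expansion = 0.0
denom = 1.0
for n in range(1, p + 1):
    denom *= (k + n)                 # running product (k+1)(k+2)...(k+n)
    expansion += x**(k + n) * math.factorial(n - 1) / (denom * (1 + x)**n)

print(R, expansion)   # expansion sits just below R, by a small positive amount
```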