The Exponential and Logarithmic Functions

R (Chandra) Chandrasekhar

2025-04-04 | 2025-04-04

Estimated Reading Time: 31 minutes

This is another in my series of blogs on fascinating and mathematically indispensable numbers. It follows on from blogs on zero, one, and π, and is likely to be followed by others. It happens that a single blog is sometimes too short to display the beauty of the subject, and I have had to segment the story into parts. Such will be the case here. While e is less well known to the general public than π, it is perhaps even more fundamental to all of Nature and pervades the entire realm of Mathematics. It would indeed be difficult to discover a nook or cranny of Nature that has not been penetrated by this omnipresent emissary of mathematical order.

Unfurling countless digits

Perversely, almost all important numbers in our world, like $\pi$, $e$, and $\sqrt{2}$, are irrational. One simply cannot predict their decimal digit sequences.

“What if I were the creator of such a virtual world, populated like ours, by irrational numbers with unending and unpredictable digits? How would I sustain that world without an infinite memory to hold all those countless digits?”

I would need some convenient, succinct, shorthand method by which to unfurl their countless digits, one after the other. It might be an algorithm like a convergent infinite series or a recursive definition or an infinite continued fraction1.

This thought is a preface to many of the fascinating numbers we will encounter in these blogs.

I am opening this blog with an abrupt exposure to the idea of exponentials, without any courteous introduction or gentle historical note on $e$, which will follow soon enough. The reason for this is that I wanted to dispel a possible confusion between $x^n$ and $n^x$ that often exists in the mind of the mathematical novice. Such confusion is best dispelled using whole numbers, and before $e$ has made its august entrance, rather than afterward, when the door for even greater conceptual muddiness has been thrown open. In this blog, I will be zig-zagging repeatedly across the same concepts in different contexts, simply because what we are dealing with is a tad more abstract than usual.

Bases and Exponents

We have introduced the different types of numbers in the blog The Two Most Important Numbers: Zero and One. In that very same blog, we also introduced the idea of exponentiation, or raising (something) to a power, as repeated multiplication. That section is very important: do take a look at it again if it seems faint or foggy now, as some basic results from that blog are worth reviewing at this point.

Monomial power functions

At the very outset, it is important to clear up a possible source of confusion: monomial power functions and exponentials might look similar but are very different.

A monomial power function is a monomial $ax^n$, with the coefficient $a$ equal to one, and the power $n$ being a non-negative integer, i.e.,

$$f(x) = x^n, \quad n \in \{0, 1, 2, 3, \ldots\} \tag{1}$$

Examples are $x^0$, $x^1$, $x^2$, $x^3$, etc., as shown by the graphs of these functions in Figure 1.

Figure 1: Monomial power functions of the form $x^n$, where $x$ is the variable and $n$ is the power. The curve for $n = e$, which is not an integer, is an exception, shown as a dashed line. Its curve lies between those of $n = 2$ and $n = 3$. Note that all curves in this family pass through the origin $(0, 0)$.

The following points should be noted:

  1. In each case, $x$ varies, but $n$ is constant, as defined in Equation 1.

  2. When $n$ is even, like $x^2$, $x^4$, etc., the graph of $x^n$ is symmetrical about the $y$-axis. Such a function is called an even function, defined as $f(-x) = f(x)$.

  3. When $n$ is odd, like $x^1$, $x^3$, etc., the graph of $x^n$ exhibits rotational symmetry about the origin $(0, 0)$, i.e., if the graph is rotated 180° about the origin, it remains unchanged. Such a function is called an odd function, defined as $f(-x) = -f(x)$.

  4. The graph of $x^0 = 1$ is constant and its behaviour is anomalous when compared to others in the family, as is apparent from Figure 1.

  5. The higher the value of $n$, the steeper the graph climbs as $x$ increases.

  6. Except for $n = 0$, the graphs of $x^n$ pass through the origin $(0, 0)$ for all other values of $n$.

  7. The monomial power functions are a subset of the polynomials.

  8. As an exception, I have included in Figure 1 the special case of the positive non-integer power $n = e$, which is the subject of this blog. This was simply to show that since $e$ lies between $2$ and $3$, its graph is sandwiched between the curves of $x^2$ and $x^3$. It is shown as a dashed line in Figure 1. But there ends the similarity. In fact, $x^e$ is not a monomial power function. Negative numbers cannot be raised to non-integer powers and still remain real numbers. So, the domain for $x^e$ alone is restricted to $x \geq 0$. If you find all this unhelpful or confusing, simply ignore it for now.

Exponentials

We now consider the second family of functions which might look like the monomial power functions but are really a bird of a different feather. The exponentials are generally defined as:

$$f(x) = n^x, \quad n > 0, \; x \in \mathbb{R}$$

Note that the value of $n$ is constant whereas $x$ varies. To keep matters simple, we will not consider the case of $0 < n < 1$ here. Moreover, for our purpose of comparing the behaviour of graphs of $n^x$, we have restricted the definition to be:

$$f(x) = n^x, \quad n > 1, \; x \in \mathbb{R} \tag{2}$$

Graphs of this family of functions are shown in Figure 2.

Figure 2: The exponential functions of the form $n^x$ for $n = 1, 2, e, 3, 4$. The special case of $e^x$ is often called the exponential function or the natural exponential function and is the subject of our blog. It is shown as a dashed line. Note that all curves in this family pass through $(0, 1)$.

The following points are noteworthy:

  1. The graph for $n = 1$ is anomalous and constant in value. It is shown only for completeness and may be excluded from the definition of exponentials, as in Equation 2.

  2. All other graphs pass through the point $(0, 1)$, which is characteristic of all exponentials.

  3. For $x < 0$, the values of $n^x$ are greater than $0$, but less than $1$, and approach the asymptote $y = 0$ as $x \to -\infty$.

  4. As $x$ increases without bound, so does $n^x$.

  5. The larger $n$ is, the steeper the rise of $n^x$ for values of $x > 1$.

  6. The graph of $e^x = \exp x$, shown as a dashed line, legitimately belongs to this class of curves and shares the same domain as the other exponentials. Since $2 < e < 3$, its graph is sandwiched between those of $2^x$ and $3^x$, as would be expected.

  7. The exponentials are neither odd nor even functions.

  8. Note how these exponential functions increase far more rapidly than the monomial power functions. The roles of $n$ and $x$ have been interchanged between the monomial power functions $x^n$ and the exponentials $n^x$.

A tabular comparison of the values of $x^n$ and $n^x$ will better reveal the large-value behaviour of these two families of functions, as shown in Table 1.

Table 1: Exponential functions grow faster than power functions, as illustrated here for $n = 2, 3, 4, 5$ and $x = 10$. Except for the anomalous case of $n = 1$, $n^x > x^n$.

n   x    x^n       n^x
1   10   10        1
2   10   100       1,024
3   10   1,000     59,049
4   10   10,000    1,048,576
5   10   100,000   9,765,625
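For the curious, these values can be reproduced with a few lines of Python (a throwaway sketch, not one of the scripts accompanying this blog):

# Compare the growth of the power x^n with the exponential n^x at x = 10.
x = 10
for n in [1, 2, 3, 4, 5]:
    print(f"{n}  {x}  {x**n:>8,}  {n**x:>10,}")

The exponential column overtakes the power column for every $n > 1$, and the gap widens dramatically as $x$ grows.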

Computational complexity theory

I am belabouring this distinction between the polynomials (or monomial power functions) and the exponentials because many students, especially of computer science, are usually clueless when they encounter the rather forbidding topic called Computational complexity theory in their university studies.

The exponential functions tend to increase extremely rapidly compared to the polynomial functions. Such distinctions become vital when evaluating the efficiency and execution times of algorithms in computer science, and indeed even their solvability in finite time. Keep this difference in mind as we navigate our way through the number 𝑒 in this and subsequent blogs.

Introduction to the number 𝑒

We are now ready to make our formal acquaintance with the number 𝑒, which stands modestly behind 𝜋 in fame, though not in ubiquity. It appears interwoven into the very fabric of Nature and is pivotal to mathematics, science, and engineering.

Unlike $\pi$, though, it is relatively unknown to the public at large. Indeed, it did not have a symbol of its own until the Swiss mathematician Leonhard Euler assigned it the letter $e$ around 1731. In fact, I wanted to call this blog “Euler’s number $e$” before I realized that the number was actually discovered by Jacob Bernoulli, and that there are several other candidates for the title of Euler’s number besides $e$.

The number 𝑒 is associated with logarithms, exponential growth, exponential decay, compound interest, the differential and integral calculus, the circular and hyperbolic functions, probability, queueing and reliability theories, the Fourier transform, and many other areas of mathematics. This linkage, across sub-disciplines, was not known initially, but only recognized gradually as “things fell into place” later on. In this sense, the history of 𝑒 is like that of, say, wavelets [2] in recent times, when it transpired that physicists, electrical engineers, and pure mathematicians had all approached the same idea from different standpoints and terminologies. A sound theory was only born after these diverse viewpoints had been integrated into a coherent body of knowledge.

Among the important numbers of mathematics, the linkage between $\pi$, $e$ and $i$ is deeply entrenched. Here is an equation which was raised to mystical status by an American professor of mathematics, Benjamin Peirce, who was photographed standing in front of a blackboard on which he had written:

$$i^{-i} = \sqrt{e^{\pi}} \tag{4}$$

He was quoted as saying, “Gentlemen, we have not the slightest idea what this equation means, but we may be sure that it means something very important [3,4].” We will re-visit this equation and de-mystify it later in another blog in this series.

While 𝜋 is the ratio of the circumference of a circle to its diameter, what exactly is 𝑒? And, if it is so important, why is 𝑒 not more widely known? What properties does 𝑒 possess that make it so useful and pervasive? We shall attempt to answer these questions and more in this and related blogs.

The power of the exponent

Did you read that heading carefully? And did you get the pun in it?

We have already peeked into exponentiation in Table 1. Just as multiplication is a shorthand for repeated addition, so too is exponentiation a shorthand for repeated multiplication. It has been said that human beings are not very good when it comes to comprehending the very large and the very small.

If I gave you a stick that is one metre long and told you to divide it into one thousand equal parts, how long would each division be? If I now told you that the same stick represented one million divisions, and asked you to mark the first one thousandth part, where would you mark it?

I am not going to tell you, because this one is easy enough for you to figure out for yourself. It will tell you how good or bad your ability to estimate is. What happens if the scale is not linear but logarithmic? Let your mental cogwheels again start turning. If you find all this too exhausting, simply look at Figure 3 below.

Figure 3: A ruler with a linear scale on one side and a logarithmic scale on the other. Note that a logarithmic scale cannot have a zero, by definition. On the logarithmic side, the ruler spans more than a million, while it spans just eight units on the linear side. Try to get your head around this.

The power of two

There is a famous story about the person who invented the game of chess.2 The monarch of the realm was so pleased with the game that he wanted to reward the inventor. Feeling very expansive, he said “Ask for anything and I will give it to you.” The inventor rather diffidently asked the king for one grain of rice on the first square of the chess board, double that number of grains on the second, double that number of grains again on the third, and so on till all the sixty-four squares had their quotas filled [cole-1998].

Figure 4: A chessboard in a state of play.

The king laughed and said, “Ask for something more. You deserve it.” The inventor quietly but persistently said, “Sire, kindly grant me what I have asked.” The king jovially asked his ministers to fulfil the inventor’s modest request, thinking all would be well. Little did he know that the entire granary of the kingdom would be emptied before each square received its quota of rice grains. Can you explain why?

Grains of rice on a chess board

Let us number the squares on the chess board from 1 to 64. The first square has one grain, which is $2^0$. The second has two grains, which is $2^1 = 2^{(2-1)}$. Likewise, the $k$th square will have $2^{k-1}$ grains of rice.

The total number of grains of rice will be given by the formula:

$$T = \sum_{k=1}^{64} 2^{k-1} \tag{5}$$

Recognizing this as the sum of a geometric series with first term $a = 1$, common ratio $r = 2 > 1$, and $n = 64$ terms, the sum $T$ is given by [5]:

$$T = \frac{a(r^n - 1)}{r - 1} = \frac{1(2^{64} - 1)}{1} = 2^{64} - 1 \approx 2^{64} \tag{6}$$

Assuming that 50 grains of rice have a mass of one gram, the total mass of $2^{64}$ grains of rice would be $\frac{2^{64}}{50 \times 10^{6}} \approx 3.7 \times 10^{11}$ metric tonnes. India’s total annual rice production in 2023–2024 was $1378.25 \times 10^{5} \approx 1.38 \times 10^{8}$ metric tonnes. The inventor of chess in the seventh century asked for more than 2,500 times the rice produced in India in 2023–2024! He certainly knew about the power of the exponent.
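A quick back-of-the-envelope check in Python, under the same assumption of 50 grains to the gram, confirms these figures:

# Verify the chessboard rice arithmetic.
grains = 2**64 - 1              # total grains, from Equation 6
tonnes = grains / 50 / 10**6    # 50 grains per gram; 10^6 grams per metric tonne
india = 1378.25e5               # India's 2023-2024 rice production in metric tonnes
print(f"{tonnes:.2e} tonnes, about {tonnes / india:,.0f} times India's annual output")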

The moral of this story is that exponentials are beguilingly difficult for human beings to grasp. That is why logarithms and logarithmic scales, which linearize exponentials, were invented.

Napier and logarithms

Logarithms were developed by an eccentric3 Scottish laird called John Napier around 1614. He devoted twenty years of his life to achieve this. In these days of mobile phones with calculators, and computational packages on laptops, it is difficult to imagine a time when the tedium of calculations impelled people to seek methods to ease the burden.

It has been suggested that Napier got the idea for performing additions in place of multiplications from trigonometric identities such as

$$\sin A \cos B = \frac{1}{2}\left[\sin(A + B) + \sin(A - B)\right]$$

He might just as well have gotten the idea from the geometric progression $1, r, r^2, r^3, \ldots, r^n, \ldots$ where each successive term is obtained by multiplying the previous one by $r$: something which could equally well have been accomplished by adding the exponent of $r$, which is 1, to that of the previous term. This idea, which may seem banal to us now, was profoundly significant in Napier’s time. The laws of indices, which we now know, form the basis of the idea behind logarithms.

Therefore, logarithms eventually reduced multiplications to additions and exponentiations to multiplications.4 Likewise, divisions became subtractions, and the taking of roots was replaced by division. This reduction in the hierarchy of arithmetic operations came with a commensurate reduction in computational complexity. Logarithms were indeed a great labour-saving device for arithmetic.
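As a small illustration of this reduction in hierarchy, here is a sketch in Python: a multiplication is carried out as an addition of (natural) logarithms followed by a single exponentiation, much as a human calculator would once have done with tables of logarithms and anti-logarithms:

import math

a, b = 123.4, 567.8
# log(a) + log(b) = log(a * b), so exponentiating the sum recovers the product.
product = math.exp(math.log(a) + math.log(b))
print(product, a * b)  # both print 70066.52, up to floating-point error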

Where does 𝑒 fit into all this? In quite a roundabout way, really.

Napier coined the word logarithm, which means “ratio number”. The scheme he devised was to produce a table of numbers $N$ against $L$ [6] where

$$N = 10^7\left(1 - 10^{-7}\right)^L \tag{7}$$

Comparing this with the modern notation introduced by the prolific Euler, that $N = b^L$, we find that what might correspond to the base in Napier’s logarithms was $b = (1 - 10^{-7}) = 0.9999999$, which, because it is less than 1, means that his logarithms decreased as the numbers increased. Moreover, because of the factor $10^7$, setting $L = 0$ gives $N = 10^7$ in Napier’s scheme, whereas in modern notation, $L = 0$ gives $N = 1$ regardless of the base.

The strange thing is that logarithms to the base 𝑒, now called natural logarithms, used to be called Napierian logarithms, although he did not use 𝑒 as the base. How did this association then arise?

Let us manipulate Equation 7 step-by-step as shown below to achieve the form $N = b^L$:

$$
\begin{aligned}
N &= 10^7\left(1 - 10^{-7}\right)^L & &\text{divide both sides by } 10^7\\
\frac{N}{10^7} = N^{*} &= \left(1 - 10^{-7}\right)^L & &\text{set } L = 10^7 L^{*}\\
N^{*} &= \left(1 - 10^{-7}\right)^{10^7 L^{*}}\\
&= \left[\left(1 - 10^{-7}\right)^{10^7}\right]^{L^{*}}\\
&= \left[\left(1 - \tfrac{1}{10^7}\right)^{10^7}\right]^{L^{*}} = b^{L^{*}}
\end{aligned} \tag{8}
$$

The interesting point of the above derivation is that the number $b = \left(1 - \tfrac{1}{10^7}\right)^{10^7}$, which we may now associate with the base of Napier’s logarithms using modern convention, works out to $0.36787942297110$, which is very close to $\tfrac{1}{e} \approx 0.36787944117144$ [7]. This means that Napier had unwittingly used $\tfrac{1}{e}$ as the base of his logarithms and was tantalizingly close to discovering $e$, or its reciprocal.5 The fact that $e$ may be the result of a limiting process sets the scene for the next stage in the dénouement.
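We can confirm this numerically in a couple of lines of Python:

import math

# The "base" implicit in Napier's scheme, per Equation 8.
b = (1 - 1 / 10**7) ** 10**7
print(b)           # approximately 0.36787942...
print(1 / math.e)  # approximately 0.36787944...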

Compounding of interest

Banks charge or pay compound interest on money borrowed from or invested with them. Let us assume that a sum $P$ is invested with a bank that pays compound interest at the rate of $r$ per annum, where $r$ is expressed, not as a percentage, but as a fraction between zero and one. Let this interest be paid annually. Then at the end of one year, the money would have grown to $P(1 + r)$. At the end of two years, the money would have grown to $[P(1 + r)](1 + r) = P(1 + r)^2$. Thus, after $t$ years, the money would have grown to $P(1 + r)^t$.

In point of fact, nowadays, banks do not compute interest on an annual basis; they do so on a daily basis. Let us assume that there are $n$ days in a year. Then the interest rate per period, which in this case is the rate per day, is $\frac{r}{n}$, and there are $n$ periods of compounding in one year, giving a sum at the end of the year of $P\left(1 + \frac{r}{n}\right)^n$. Likewise, in $t$ years, there are $nt$ periods of compounding, and the sum $S$ at the end of $t$ years will be:

$$S = P\left[1 + \frac{r}{n}\right]^{nt}$$

Now, what happens when the number of compounding periods grows? What happens if banks do not compute interest daily but every hour, or every minute, or every second? Is there a possible “get rich quick scheme” that involves getting paid interest every millisecond, say, or every nanosecond?

Change in compounding period

We will write a simple program to investigate how money grows as the frequency of compounding keeps increasing. The equation we will use is

$$S = P\left[1 + \frac{r}{n}\right]^{nt} \tag{9}$$

where $P$ is the principal, $r$ is the annual interest rate expressed as a fraction, $n$ is the number of compounding periods per annum, $t$ is the number of years, and $S$ is the sum or amount at the end of $t$ years.

We assign $P = 100$, $t = 1$, $r = 0.05$, and allow $n$ to vary across annual, semi-annual, quarterly, monthly, weekly, daily, and hourly compounding periods. These correspond to values of $n$ equal to 1, 2, 4, 12, 52, 365, and 8760 respectively.

$S$ is computed using Equation 9 and the values of $n$ and $S$ are tabulated in Table 2. Two scripts are provided, one in Julia and the other in Python 3, that accomplish this. A sketch of the calculation appears below, and the results are shown in Table 2, where the last row has been added manually.
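(A minimal sketch along the lines of the Python version; the full scripts linked above may differ in detail.)

# Compound interest at increasing compounding frequencies (Equation 9).
P, r, t = 100, 0.05, 1
for n in [1, 2, 4, 12, 52, 365, 8760]:
    S = P * (1 + r / n) ** (n * t)
    print(f"{n:>5}  {S:.6f}")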

Table 2: Sums $S$ for principal $P = 100$, rate $r = 0.05$, and time $t = 1$ year, with interest compounded once a year, twice a year, quarterly, monthly, weekly, daily, and hourly. The last row shows the upper bound, with continuous compounding, yielding $S = 105.127109$, as proved below.

n      S
1      105.000000
2      105.062500
4      105.094534
12     105.116190
52     105.124584
365    105.126750
8760   105.127095
∞      105.127109

What do you find noteworthy about this? Regardless of how frequently the interest is compounded, the amount $S$ is solidly stuck around 105.127 or thereabouts. One might be forgiven for thinking that if the interest were added with breathtaking rapidity, the sum would somehow multiply astronomically. But alas, that is not how it works.

There is one trend that is apparent from the figures in the above table, though. The digits after the decimal point do increase, albeit very modestly, even as they seem to be bounded from above by some number. One way to find that number is to progress from periodic compounding to instantaneous compounding. We derive the exact value of $S$ for instantaneous compounding later in this blog in What is the amount with instantaneous interest?.

With the word instantaneous, we are on thin ice. Instantaneous velocity gave us calculus, with its inbuilt inconsistencies of dividing by something that is close to but not quite zero. So, we may expect something along those lines here also. Whenever instantaneous makes its presence onstage, zero and infinity cannot be far away. 😉

The road to 𝑒

There are three variables apart from $n$ in Equation 9 for $S$. Let us simplify it by setting $P = 1$, $t = 1$, and $r = 1$. Note that the last assignment means that the bank pays 100% interest per annum: something that is very unlikely, but mathematically expedient for us! The equation for $S$ now becomes

$$S = \left[1 + \frac{1}{n}\right]^n \tag{10}$$

A Python 3 script called steps_to_e.py evaluates Equation 10 at logarithmic intervals; a sketch of the script and its results are shown below:
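(A minimal sketch; the actual steps_to_e.py linked below may differ in detail.)

# Evaluate (1 + 1/n)^n at logarithmically spaced values of n (Equation 10).
for k in range(9):
    n = 10**k
    print(f"{n:<9} {(1 + 1 / n) ** n:.17f}")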

n         e
-----------------------------
1         2.00000000000000000
10        2.59374246010000231
100       2.70481382942152848
1000      2.71692393223559359
10000     2.71814592682492551
100000    2.71826823719229749
1000000   2.71828046909575338
10000000  2.71828169413208176
100000000 2.71828179834735773

The values are suggestive of convergence, but it is not rapid. The limit is the historically named number 𝑒. A check with Wolfram Alpha gives the value 2.71828182845904524 to seventeen decimal places.

We can also countercheck with SymPy, the Python library for symbolic mathematics, by running the script below:

from sympy import symbols, limit, oo

n = symbols("n")
# Evaluate the limit of (1 + 1/n)^n as n approaches infinity.
S = limit((1 + 1 / n) ** n, n, oo)
print(S)

to get the result E. The script is at limit_e.py.

The expression

$$\lim_{n \to \infty}\left[1 + \frac{1}{n}\right]^n \tag{11}$$

does converge to a finite non-zero value, which is its limit. And the value of this limit is the profoundly important mathematical constant $e$:

$$e \equiv \lim_{n \to \infty}\left[1 + \frac{1}{n}\right]^n \tag{12}$$

What is the amount with instantaneous interest?

Instantaneous compounding does not lead to unlimited growth. We have guessed as much from the results of evaluating Equation 9 for different values of 𝑛, as shown in Table 2.

Now that we have defined $e$, we may obtain a closed-form solution for the amount from instantaneous compounding of interest. In Equation 9, we retain $P$, $r$ and $t$, and let $n$ approach infinity:

$$S = \lim_{n \to \infty} P\left[1 + \frac{r}{n}\right]^{nt} = P \lim_{n \to \infty}\left[1 + \frac{r}{n}\right]^{nt}$$

A purist would use $x$ rather than $n$ when moving from countable intervals to continuous compounding. So, let us re-state the equation with $x$:

$$S = P \lim_{x \to \infty}\left[1 + \frac{r}{x}\right]^{xt}$$

A magician’s distraction is called for here. We want the expression within the parentheses to have a second term with one as the numerator, so that it looks like the second term in Equation 12. Let $\frac{r}{x} = \frac{1}{u}$, i.e., $u = \frac{x}{r}$, so that $u \to \infty$ as $x \to \infty$. Then, the above equation becomes

$$
\begin{aligned}
S &= P \lim_{x \to \infty}\left[\left(1 + \frac{r}{x}\right)^{x}\right]^{t}
= P \lim_{u \to \infty}\left[\left(1 + \frac{1}{u}\right)^{ur}\right]^{t}\\
&= P \lim_{u \to \infty}\left[\left(1 + \frac{1}{u}\right)^{u}\right]^{rt}
= P \left[\lim_{u \to \infty}\left(1 + \frac{1}{u}\right)^{u}\right]^{rt} = P e^{rt}
\end{aligned}
$$

We can now confidently augment Table 2 by adding the last row with a value of $\infty$ for $n$ and an upper bound of $S = P e^{rt} = 100 e^{0.05} = 105.1271096$.
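A one-line check in Python agrees with the limiting value appended to Table 2:

import math

P, r, t = 100, 0.05, 1
print(P * math.exp(r * t))  # about 105.1271096, the upper bound in Table 2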

Thus far, we have distinguished between $x^n$ and $n^x$, emphasized that exponential growth is truly phenomenal, and considered compound interest at ever-decreasing intervals between interest payments, which finally led to the definition of $e$.

We have also glanced at logarithms and contrasted linear and logarithmic scales. Central to all this is the rather diminutive number $e$, lying between 2.5 and 3, which occupies a pivotal place in much of mathematics.

Hereafter, we will continue exploring 𝑒 and slowly invest it with mathematical trappings that go beyond mere numberhood and allow fascinating insights to emerge between seemingly unrelated fields.

Logarithms and the hyperbola

Limits are at the heart of both the differential and integral calculus. You have just seen one application of limits in defining the important number 𝑒. We will now take a look at the use of limits in integral calculus and the use of the logarithm as a function rather than as a mere computational aid. Our journey takes us through the history of finding areas under curves before the calculus had been fully fleshed out.

The procedure of finding the area under a closed planar curve is called quadrature or squaring. This is because the area may be thought of as being composed of little squares, which when assembled together and summed, equal the area under the curve.

Pierre de Fermat in France had achieved great success in computing the areas under curves of the form $y = x^n$.6 His method was to use a series of rectangles whose bases formed a geometric progression with common ratio $r$ less than one, and whose areas therefore converged to a finite sum which could be calculated. The one curve, though, that he could not handle was the rectangular hyperbola, which is really a pair of curves defined by $y = \frac{1}{x}$. If Fermat applied his formula

$$\int x^n \,\mathrm{d}x = \frac{x^{n+1}}{n+1} + C$$

he faced the problem of division by zero when $n = -1$, and the method failed.

Computing the area

It was one of Fermat’s contemporaries, Grégoire Saint-Vincent, known as the “circle-squarer”, who found a way to solve this problem. He also used intervals that were in a geometric progression, but he made an important discovery in the case of a hyperbola like $y = \frac{1}{x}$.

Figure 5: The area under a rectangular hyperbola. Grégoire Saint-Vincent estimated the area under the hyperbola $y = \tfrac{1}{x}$ by summing the differently coloured strips under the curve, whose intervals were in a geometric progression. He found that the areas of all the strips were the same, i.e., $A_1 = A_2 = A_3$, etc. This led to the later discovery that the area under the hyperbola was related to the logarithm as a function. See the text for a full explanation.

Saint-Vincent started his integration at $x = r^0 = 1$ and divided the area under the curve into intervals along the $x$-axis that were in a geometric progression, as shown in Figure 5. He estimated the areas of the differently coloured strips, and found that they were equal to each other. This was Saint-Vincent’s profound and original contribution. How did he do this?

Figure 6: Estimating the area under the hyperbola using adjacent trapeziums. The areas $A_1$ and $A_2$ are equal. See the text for the full explanation.

The account of Saint-Vincent’s method, as described below, has been drawn from several sources [6–8]. It has been simplified to use modern methods and terminology, while remaining faithful to the original in spirit and conception.

Consider Figure 6, which is Figure 5 redrawn to show how the unknown areas $A_1$, $A_2$, etc., may be approximated by the known areas $T_1$, $T_2$, etc., of the respective trapeziums that are shown. Note that the areas $A_1$, $A_2$, etc., are contiguous and non-overlapping.

  1. Dashed lines like $PQ$, connecting the points $P$ and $Q$ on the arc of the hyperbola, are drawn corresponding to $x = 1$ and $x = r$ respectively, to get a trapezium whose area $T_1$ is known exactly. The area of that trapezium is used to estimate the area $A_1$, as explained below.

  2. The point $P(1, 1)$ lies on every rectangular hyperbola. Its $x$-coordinate represents the start of both the geometric progression and the interval of integration. In our case, the common ratio $r > 1$ because we do not seek convergence. The initial $x$-value is shown as $r^0 = 1$ in Figure 6.

  3. $Q(r, \tfrac{1}{r})$ also lies on the hyperbola. The straight line $PQ$ is an approximation to the arc $PQ$ on the hyperbola. The trapezium with parallel sides of $1$ and $\tfrac{1}{r}$ and width $(r - 1)$ represents a first approximation to the unknown area $A_1$ shown in Figure 5. The known area of the trapezium, $T_1$, is

$$T_1 = \frac{1}{2}\left[\frac{1}{1} + \frac{1}{r}\right]\left[r - 1\right] = \frac{1}{2r}\left[r^2 - 1\right] \approx A_1.$$

  4. Moving to the next trapezium, with base between $x = r$ and $x = r^2$, we have

$$T_2 = \frac{1}{2}\left[\frac{1}{r} + \frac{1}{r^2}\right]\left[r^2 - r\right] = \frac{r}{2r^2}\left[r + 1\right]\left[r - 1\right] = \frac{1}{2r}\left[r^2 - 1\right] \approx A_2.$$

  5. This pattern of all the trapezium areas being the same was the remarkable observation of Saint-Vincent.

  6. By repeatedly subdividing the intervals, it may be shown that, in the limit, the values of each of the $T_i$ and $A_i$ become equal. We will henceforth use $A$ to denote the single value shown as $A_1$, $A_2$, $A_3$, etc., in Figure 5 and Figure 6. Note that the lower limit of area summation is 1 in all cases. We may then tabulate the respective integrals, intervals of summation, and areas so [8]:

Table 3: Area under a hyperbola versus interval of integration. NB: $r^0 = 1$.

Integral                                    Upper limit   Area
$\int_1^{r^0} \frac{1}{x}\,\mathrm{d}x$     $r^0$         $0$
$\int_1^{r^1} \frac{1}{x}\,\mathrm{d}x$     $r^1$         $A$
$\int_1^{r^2} \frac{1}{x}\,\mathrm{d}x$     $r^2$         $2A$
$\int_1^{r^3} \frac{1}{x}\,\mathrm{d}x$     $r^3$         $3A$
$\int_1^{r^4} \frac{1}{x}\,\mathrm{d}x$     $r^4$         $4A$
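Saint-Vincent’s observation is easy to check numerically. The Python sketch below approximates each integral by the midpoint rule, with no logarithms anywhere in sight, and recovers the pattern $0, A, 2A, 3A, 4A$ of Table 3 (the common ratio r = 1.5 is an arbitrary choice):

def area(a, b, steps=100_000):
    # Approximate the integral of 1/x from a to b by the midpoint rule.
    h = (b - a) / steps
    return sum(h / (a + (i + 0.5) * h) for i in range(steps))

r = 1.5
for k in range(5):
    print(k, round(area(1, r**k), 6))  # areas come out as k * area(1, r)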

And this is where the matter rested, until Alphonse Antonio de Sarasa—a student and later a colleague of Grégoire Saint-Vincent—took a look at the results, and realized that it was a mapping between a geometric and an arithmetic series, which meant that logarithms were involved.

A logarithm is a continuous real-valued function with the following two properties [10]:

  1. $\log(1) = 0$; and

  2. $\log(ab) = \log(a) + \log(b)$.

Let us see if the function

$$\int_1^t \frac{1}{x}\,\mathrm{d}x = \lambda(t)$$

satisfies these two properties. By the first row of Table 3, property (1) is satisfied, since $\lambda(1) = 0$. Again, we have from Table 3 that

$$\int_1^{r^2} \frac{\mathrm{d}x}{x} = \int_1^{r} \frac{\mathrm{d}x}{x} + \int_r^{r^2} \frac{\mathrm{d}x}{x} = \int_1^{r} \frac{\mathrm{d}x}{x} + \int_1^{r} \frac{\mathrm{d}x}{x} = 2\int_1^{r} \frac{\mathrm{d}x}{x}$$

In other words, $\lambda(r^2) = 2\lambda(r) = \lambda(r) + \lambda(r)$, and the same subdivision argument gives $\lambda(ab) = \lambda(a) + \lambda(b)$ in general, satisfying property (2). So, we may assert that the area under a hyperbola gives rise to a logarithm function:

$$\int_1^t \frac{1}{x}\,\mathrm{d}x = \lambda(t) = \log(t). \tag{13}$$

The only question now is, what is the base of the logarithm?

The function that equals its own derivative7

An exponential function $f$ to a base $b$ is defined as

$$y = f(x) = b^x, \quad b > 0, \; x \in \mathbb{R}$$

Let us investigate the derivative of $y = b^x$ using the definition of the derivative:

$$\frac{\mathrm{d}y}{\mathrm{d}x} = \lim_{h \to 0} \frac{b^{(x+h)} - b^x}{h} = \lim_{h \to 0} \frac{b^x(b^h - 1)}{h} = b^x \lim_{h \to 0} \frac{b^h - 1}{h}$$

The limit on the right hand side may or may not exist. Let us assume for now that it does. Then we may set it to a value $k$, and we then have the important relationship

$$\frac{\mathrm{d}y}{\mathrm{d}x} = b^x \lim_{h \to 0} \frac{b^h - 1}{h} = k\, b^x \tag{14}$$

which means that the derivative of an exponential function at any point is proportional to the value of the function itself, at that point.

The next question is this: is there any value of 𝑏 for which the constant of proportionality equals one? That would give us a function whose value at any point equals its derivative at that point. Let us investigate.

For finite $h$, we set the limit term on the RHS of Equation 14 to 1, i.e.,

$$\frac{b^h - 1}{h} = 1 \tag{15}$$

If this expression were identically equal to 1, then we may assert that

$$\lim_{h \to 0} \frac{b^h - 1}{h} = 1$$

Solving Equation 15 for $b$, we get $b^h = 1 + h$ and, taking “$h$th roots” on either side,

$$b = (1 + h)^{\frac{1}{h}} \tag{16}$$

Since Equation 16 has been “derived” from Equation 14, taking the limit as $h$ tends to zero for either should give equivalent results. That is, the value of $b$ that makes an exponential function its own derivative is also the value of $b$ that results from

$$b = \lim_{h \to 0} (1 + h)^{\frac{1}{h}} \tag{17}$$

If in Equation 17 we replace $\frac{1}{h}$ by $m$, and note that $h \to 0$ is equivalent to $m \to \infty$, we may re-write Equation 17 as

$$b = \lim_{m \to \infty} \left(1 + \frac{1}{m}\right)^m \tag{18}$$

But we know from Equation 12 that the limit in Equation 18 is by definition equal to $e$.
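The limit in Equation 14 is easy to watch numerically. In the Python sketch below, $k$ is estimated with a small but finite $h$ for a few bases; only $b = e$ gives $k \approx 1$ (the constant is in fact $\ln b$, as we can now appreciate):

import math

h = 1e-6  # small but finite, standing in for the limit
for b in [2, math.e, 3]:
    k = (b**h - 1) / h
    print(f"b = {b:.5f}  k = {k:.6f}")
# k is about 0.693 for b = 2, about 1 for b = e, and about 1.099 for b = 3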

It has been a bit of a hard slog, but we can now confidently say that the unique function that is its own derivative and anti-derivative is the exponential function with base 𝑒. Indeed, this function is sufficiently important for it to be called the natural exponential function or the exponential function, as we have already seen.

So, when we talk of the exponential function, we mean

$$\exp(x) = e^x, \quad x \in \mathbb{R}$$

Let us see where the foregoing leads. Let $y = e^x$. Then

$$\frac{\mathrm{d}y}{\mathrm{d}x} = \frac{\mathrm{d}}{\mathrm{d}x}(e^x) = e^x = y \tag{19}$$

If one takes reciprocals on either side of Equation 19, one gets

$$\frac{\mathrm{d}x}{\mathrm{d}y} = \frac{1}{e^x} = \frac{1}{y}, \quad \text{i.e.,} \quad \mathrm{d}x = \frac{\mathrm{d}y}{y}, \quad \text{leading to} \quad x = \int \frac{1}{y}\,\mathrm{d}y \tag{20}$$

This appears similar to the equation for the area under the rectangular hyperbola given in Equation 13. But what is $x$ in terms of $y$?

The natural exponential and logarithmic functions

The natural logarithm function is that logarithm function which has $e$ as its base. In a generic fashion, one may write it as $\log_e$, but the accepted convention is to refer to it as $\ln$.8 Now, $\ln$ is the inverse of the exponential function, $\exp$, which means

$$\ln(\exp(x)) = \ln e^x = x, \quad x \in \mathbb{R}$$

and conversely

$$\exp(\ln(x)) = e^{\ln(x)} = x, \quad x \in (0, \infty).$$

Note carefully that because $\exp(x)$ is strictly greater than zero for real $x$, the domain of the natural logarithm function (and indeed of all logarithm functions) is $(0, \infty)$. Inverse functions are reflections of each other in the line $y = x$ on the Cartesian plane. This is illustrated for $\exp(x)$ and $\ln(x)$ in Figure 7.

Figure 7: The exponential and natural logarithm functions shown as reflections of each other in the line $y = x$. This is a property of inverse function pairs $f$ and $f^{-1}$ because $f(f^{-1}(x)) = f^{-1}(f(x)) = x$. Notice that $\exp(0) = 1$ and that $\ln(1) = 0$. This is true for all exponential and logarithmic functions regardless of the chosen base. The relative shapes of the exponential and logarithmic functions remain similar for bases other than $e$.

We are now in a position to answer the question asked at the end of the section Computing the area about the base of the logarithm which gave the area under a hyperbola. The base of the logarithm is $e$, and we may write:

$$\int_1^t \frac{1}{x}\,\mathrm{d}x = \ln t \tag{21}$$

One may then use Equation 12 to define the exp function and Equation 21 to define the ln function, with the knowledge that they are an inverse function pair.

One might wonder if there is a geometrical significance to the number 𝑒 like there is for 𝜋 as the ratio of the circumference to the diameter of a circle. Think about this for a while before reading on. As before, the conics hold the answer.

Substituting $t = e$ in Equation 21, we get

$$\int_1^e \frac{1}{x}\,\mathrm{d}x = \ln(e) = 1. \tag{22}$$

This equation comes closest to stitching $e$ to geometry, but it uses the thread of calculus! Mark how the number 1 plays a prominent role here.
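Equation 22 may be verified numerically with the same midpoint-rule idea used earlier (a sketch):

import math

# Estimate the area under 1/x between 1 and e by the midpoint rule.
a, b, steps = 1.0, math.e, 100_000
h = (b - a) / steps
print(sum(h / (a + (i + 0.5) * h) for i in range(steps)))  # 0.99999999..., i.e. 1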

Logarithms and dynamic range compression

Our human senses of sight and hearing each have enormous dynamic ranges. The eye can respond to light intensities across 13 orders of magnitude.9 Likewise our ears can hear sound intensities ranging from whispers to explosions, across 12 orders of magnitude.

If you think of a weighing scale, it usually has a scale that ranges from, say, 0 kg to perhaps 150 kg. Most instruments have only a limited range over which they can measure. To increase the range, you may have to switch the input to another scale before making the measurement. How then do our ears and eyes accommodate such large dynamic ranges without the need for any form of switching?

The answer lies with logarithms. Logarithms naturally compress a large linear range to a more compact one. This would be clear from the graph of the logarithm function plotted in Figure 7.

There is a “law”, first propounded by the German physiologist Ernst Heinrich Weber, that the “just noticeable difference” (JND) that human beings experience in response to any physiological stimulus is related to the existing stimulus by the differential equation

$$\mathrm{d}s = k\,\frac{\mathrm{d}W}{W}$$

where $\mathrm{d}s$ is the JND, $W$ the stimulus already present, and $\mathrm{d}W$ the increase in stimulus. The German physicist Gustav Theodor Fechner popularized Weber’s hypothesis, which, upon integration, leads to the solution

$$s = k \ln W + C$$

This is referred to as the Weber-Fechner law, but it is really only a hypothesis that has not achieved the status of a theory, much less a law, especially because it has to do with subjective sensation and perception.

The important lesson for us is that logarithmic compression allows very large dynamic ranges to be accommodated, without input sensor switching. Logarithmic scales abound in the natural sciences and engineering: the pH scale for acidity, the Richter scale for measuring earthquake intensities, and the decibel scale for sound intensity, or for signal voltage, and power in electrical engineering, to name just a few.
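For instance, here is how the decibel scale for sound intensity compresses twelve orders of magnitude into the range 0 to 120 dB (a Python sketch; the reference intensity of 10^-12 W/m^2 is the conventional threshold of hearing):

import math

I0 = 1e-12  # threshold of hearing in W/m^2
for I in [1e-12, 1e-6, 1.0]:  # from barely audible to painfully loud
    dB = 10 * math.log10(I / I0)
    print(f"{I:.0e} W/m^2 -> {dB:3.0f} dB")
# Output: 0 dB, 60 dB, and 120 dB respectively.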

Why is 𝑒 important?

We have now reached the stage where we can answer the question, “Why is 𝑒 important?”

The number $e$’s claim to fame rests on the remarkable properties of the exponential function $\exp(x)$, which has the unique distinction of being its own derivative and anti-derivative. Stated formally,

$$\frac{\mathrm{d}}{\mathrm{d}x} \exp(x) = \exp(x) \tag{23}$$

and

$$\int \exp(x)\,\mathrm{d}x = \exp(x) + C \tag{24}$$

where $C$ is an arbitrary constant of integration. The exponential and logarithmic functions are a formidable mathematical pair, as we have seen in this blog. Nature is full of systems that can be modelled by this self-replicating property of the exponential.

In a succeeding blog, we will see that 𝑒 is the natural bridge between the real and complex domains—a connection that has given rise to some very powerful mathematics.

Acknowledgements

Thanks are due to Wolfram Alpha, Desmos, and the various AI bots, too numerous to mention, for assistance at various stages of preparation of this blog.

Feedback

Since I work independently and alone, there is every chance that unintentional mistakes have crept into this blog, due to ignorance or carelessness. Therefore, I especially appreciate your corrective and constructive feedback.

Please email me your comments and corrections.

A PDF version of this article is available for download here:

References

[1]
Paul Loya. 2017. Amazing and Aesthetic Aspects of Analysis. Springer.
[2]
Alexander Hellemans. 2021. How Wavelets Allow Researchers to Transform, and Understand, Data. QuantaMagazine. Retrieved 6 April 2025 from https://www.quantamagazine.org/how-wavelets-allow-researchers-to-transform-and-understand-data-20211013/
[3]
J L Coolidge. 1950. The Number e. American Mathematical Monthly 57, 9 (1950), 591–602. DOI:https://doi.org/10.2307/2308112
[4]
J J O'Connor and E F Robertson. 2001. The number e. MacTutor History of Mathematics. Retrieved 18 April 2025 from https://mathshistory.st-andrews.ac.uk/HistTopics/e/
[5]
Jenny Olive. 2003. Maths: A student’s survival guide (2nd ed.). Cambridge University Press.
[6]
C H Edwards Jr. 1979. The historical development of the calculus (1st ed.). Springer.
[7]
Eli Maor. 1994. E: The story of a number. Princeton University Press.
[8]
David Perkins. 2012. Calculus and its origins. Mathematical Association of America (MAA).
[9]
J W Bradshaw. 1903. The logarithm as a direct function. Annals of Mathematics 4, (1903), 51–62. DOI:https://doi.org/10.2307/1967113
[10]
Serge Lang. 2001. Short calculus: The original edition of ‘a first course in calculus’ (Reprint of the 1st ed. Addison–Wesley, 1964. ed.). Springer.

  1. I later found that this link is a chapter from a draft of the book with the charmingly alliterative title Amazing and Aesthetic Aspects of Analysis [1] where it is now chapter 8.↩︎

  2. The precursor called chaturanga was invented in India around the 600s.↩︎

  3. This word has both a common and a mathematical meaning. Can you reconcile the two?↩︎

  4. We touched upon this idea in the blog Varieties of Multiplication.↩︎

  5. It is often erroneously believed that Napier used $e$ as the base of his logarithms, but we know that his “base” was less than 1 and was indeed very nearly $\tfrac{1}{e}$.↩︎

  6. These are our monomial power functions.↩︎

  7. Beginning with the heading, this section, more than others, is heavily borrowed from Eli Maor’s excellent text e: The Story of a Number [7].↩︎

  8. Mathematical conventions and practice might change. Programming languages might use log instead of ln. Beware! You have been forewarned.↩︎

  9. An order of magnitude conventionally means a power of ten. Two orders of magnitude thus refers to a ratio between two quantities that is either one hundred or one hundredth.↩︎

Copyright © 2006 – 2025, R (Chandra) Chandrasekhar. All rights reserved.