The Exponential and Logarithmic Functions

R (Chandra) Chandrasekhar

2004-03-01 | 2025-04-19

Estimated Reading Time: 32 minutes

This is another in my series of blogs on fascinating and mathematically indispensable numbers. It follows on from blogs on zero, one, and π, and is likely to be followed by others. It happens that a single blog is sometimes too short to display the beauty of the subject, and I have had to segment the story into parts. Such will be the case here. While e is less well known to the general public than π, it is perhaps even more fundamental to all of Nature and pervades the entire realm of Mathematics. It would indeed be difficult to discover a nook or cranny of Nature that has not been penetrated by this omnipresent emissary of mathematical order.

After completing this blog, I became aware of Robin Wilson’s Euler’s Pioneering Equation: The most beautiful theorem in mathematics [1]. I was astounded to discover that parts of this blog bear a remarkable resemblance to his chapter 4 on e, in content and unfoldment. This blog is based on lectures that I had originally given in 2004, and its content antedates Wilson’s book. Nevertheless, it is flattering to realize that I have come close to a seasoned professional mathematician’s conceptual exposition on e.

Unfurling countless digits

Perversely, almost all important numbers like π, e, √2, etc., in our world are irrational. One simply cannot predict their decimal digit sequences.

“What if I were the creator of such a virtual world, populated like ours, by irrational numbers with unending and unpredictable digits? How would I sustain that world without an infinite memory to hold all those countless digits?”

I would need some convenient, succinct, shorthand method by which to unfurl their countless digits, one after the other. It might be an algorithm like a convergent infinite series, a recursive definition, or an infinite continued fraction¹.

This thought is a preface to many of the fascinating numbers we will encounter in these blogs.

I am opening this blog with an abrupt exposure to the idea of exponentials, without any courteous introduction or gentle historical note on e, which will follow soon enough though. The reason for this is that I wanted to dispel a possible confusion between x^n and n^x that often exists in the mind of the mathematical novice.

Such confusion is best dispelled using whole numbers, and ideally before e has made its august entrance, rather than afterward, when the door for even greater conceptual muddiness has been thrown wide open. In this blog, I will be zig-zagging repeatedly across the same concepts in different contexts, simply because what we are dealing with is a tad more abstract than usual.

Bases and Exponents

We have introduced the different types of numbers in the blog The Two Most Important Numbers: Zero and One. In that very same blog, we also introduced the idea of exponentiation, or raising (something) to a power, as repeated multiplication. That section is very important: do take a look at it again if it seems faint or foggy now, as some basic results from that blog are worth reviewing at this point.

Monomial power functions

At the very outset, it is important to clear up a possible source of confusion: monomial power functions and exponentials might look similar but are very different.

A monomial power function is a monomial ax^n, with the coefficient a equal to one, and the value of n being a non-negative integer, i.e.,

f(x) = x^n; n = 0, 1, 2, 3, …   (Equation 1)

Examples are x^0, x^1, x^2, x^3, etc., as shown by the graphs of these functions in Figure 1.

Figure 1: Monomial power functions of the form x^n where x is the variable and n is the power. The curve for n = e, which is not an integer, is an exception, shown as a dashed line. Its curve lies between those of n = 2 and n = 3. Note that all curves in this family, except that for n = 0, pass through the origin (0, 0).

The following qualitative points should be noted:

  1. In each case, x varies, but n is constant, as defined in Equation 1.

  2. When n is even, like x^2, x^4, etc., the graph of x^n is symmetrical about the y-axis. Such a function is called an even function, defined as f(−x) = f(x).

  3. When n is odd, like x^1, x^3, etc., the graph of x^n exhibits rotational symmetry about the origin (0, 0), i.e., if the graph is rotated 180° about the origin, the graph remains unchanged. Such a function is called an odd function, defined as f(−x) = −f(x).

  4. The graph of x^0 = 1 is constant and its behaviour is anomalous when compared to others in the family, as is apparent from Figure 1.

  5. For 0 < x < 1, the larger n is, the closer x^n is to 0.

  6. For x > 1, the larger n is, the steeper the graph climbs as x increases.

  7. Except for n = 0, the graphs of x^n pass through (0, 0) for all other values of n.

  8. The monomial power functions are a subset of the polynomials.

  9. As an exception, I have included in Figure 1 the special case of the positive non-integer power n = e, which is the subject of this blog. This was simply to show that since e lies between 2 and 3, its graph is sandwiched between the curves x^2 and x^3. It is shown as a dashed line in Figure 1. But there ends the similarity. In fact, x^e is not a monomial power function. Negative numbers cannot be raised to non-integer powers and still remain real numbers. So, the domain for x^e alone is restricted to x ≥ 0. If you find all this unhelpful or confusing, simply ignore it for now.

Exponentials

We now consider the second family of functions which might look like the monomial power functions but are really a bird of a different feather. The exponentials are generally defined as:

f(x) = n^x

Note that the value of n is constant whereas x varies. To keep matters simple, we will not consider the case of n ≤ 0 here. Moreover, for our purpose of comparing the behaviour of graphs of n^x, we have restricted the definition to be:

f(x) = n^x; n > 0, n ≠ 1   (Equation 2)

Graphs of this family of functions are shown in Figure 2.

Figure 2: The exponential functions of the form n^x for n = 1, 2, e, 3, 4. The special case of e^x is often called the exponential function or the natural exponential function and is the subject of our blog. It is shown as a dashed line. Note that all curves in this family pass through (0, 1), and the dashed curve passes through (1, e).

The following qualitative points are noteworthy:

  1. For each graph, the exponent x varies, but the base n is held constant for that graph.

  2. The graph for n = 1 is anomalous and constant in value. It is shown only for completeness and may be excluded from the definition of exponentials, as in Equation 2.

  3. All other graphs pass through the point (0, 1), which is characteristic of all exponentials.

  4. For the base n = e, when x = 1, n^x = e, i.e., the dashed graph passes through (1, e).

  5. For x < 0, the values of n^x are greater than 0, but less than 1, and approach the asymptote y = 0 as x → −∞.

  6. As x increases without bound, so does n^x.

  7. The larger n is, the steeper the rise of n^x for values of x > 0.

  8. The graph of e^x—shown as a dashed line—legitimately belongs to this class of curves and shares the same domain as the other exponentials. As x increases, its graph remains sandwiched between those of 2^x and 3^x, as would be expected.

  9. The exponentials are neither odd nor even functions, but their range is strictly positive.

  10. The roles of x and n have been interchanged between the monomial power functions and the exponentials.

  11. Note how the exponential functions increase exceedingly rapidly compared to the monomial power functions.

A tabular comparison of the values of x^n and n^x will better reveal the large-value behaviour of these two families of functions, as shown in Table 1.

Table 1: Exponential functions grow faster than power functions, as illustrated here for x^n and n^x, except for the anomalous case of n = 1.

Computational complexity theory

I am belabouring this distinction between the polynomials (or monomial power functions) and the exponentials because many students, especially of computer science, are usually clueless when they encounter the rather forbidding topic called Computational complexity theory in their university studies.

The exponential functions tend to increase extremely rapidly compared to the polynomial functions. Such distinctions become vital when evaluating the efficiency and execution times of algorithms in computer science, and indeed even their solvability in finite time. Keep this difference in mind as we navigate our way through the number in this and subsequent blogs.
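To make the contrast concrete, here is a small Python snippet of my own (in the same spirit as the scripts appearing later in this blog). It pits the power function x^10 against the exponential 2^x; the power function leads at first, but the exponential overtakes it near x = 59 and never looks back:

# Compare a power function x^10 with the exponential 2^x.
for x in [10, 20, 50, 59, 60, 100]:
    print(f"x = {x:3d}   x^10 = {x**10:.3e}   2^x = {2**x:.3e}")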

Introduction to the number e

We are now ready to make our formal acquaintance with the number e, which stands modestly behind π in fame, though not in ubiquity. It appears interwoven into the very fabric of Nature and is pivotal to mathematics, science, and engineering.

Unlike π, though, it is relatively unknown to the public at large. Indeed, it did not have its own symbol until relatively recently, when the Swiss mathematician Leonhard Euler assigned it the letter e around 1731. In fact, I wanted to call this blog, “Euler’s number e” before I realized that it was actually discovered by Jacob Bernoulli, and that there are several other candidates for Euler’s number besides e.

The number e is associated with logarithms, exponential growth, exponential decay, compound interest, the differential and integral calculus, the circular and hyperbolic functions, probability, queueing and reliability theories, the Fourier transform, and many other areas of mathematics. This linkage, across sub-disciplines, was not known initially, but only recognized gradually as “things fell into place” later on.

In this sense, the history of e is like that of, say, wavelets [3] in recent times, when it transpired that physicists, electrical engineers, and pure mathematicians had all approached the same idea from different standpoints and terminologies. A sound theory was only born after these diverse viewpoints had been integrated into a coherent body of knowledge.

Among the important numbers of mathematics, the linkage between e, π, and i is deeply entrenched. Here is an equation, which was raised to mystical status by an American professor of mathematics, Benjamin Peirce, who was photographed standing in front of a blackboard on which he had written:

i^{-i} = √(e^π)

He was quoted as saying, “Gentlemen, we have not the slightest idea what this equation means, but we may be sure that it means something very important [4,5].” We will re-visit this equation and de-mystify it later in another blog in this series.

While π is the ratio of the circumference of a circle to its diameter, what exactly is e? And, if it is so important, why is e not more widely known? What properties does e possess that make it so useful and pervasive? We shall attempt to answer these questions and more in this and related blogs.

The power of the exponent

Did you read that heading carefully? And did you get the pun in it?

We have already peeked into exponentiation in Table 1. Just as multiplication is a shorthand for repeated addition, so too is exponentiation a shorthand for repeated multiplication. It has been said that human beings are not very good when it comes to comprehending the very large and the very small.

If I gave you a stick that is one metre long and told you to divide it into one thousand equal parts, how long would each division be? If I now told you that the same stick represented one million divisions, and asked you to mark the first one thousandth part, where would you mark it? I am not going to tell you, because this one is easy enough for you to figure out for yourself. It will tell you how good or bad your ability to estimate is.

What happens if the scale is not linear but logarithmic? Let your mental cogwheels again start turning. If you find all this too exhausting, simply look at Figure 3 below.

Figure 3: A ruler with a linear scale on one side and a logarithmic scale on the other. Note that a logarithmic scale cannot have a zero, by definition. On the logarithmic side, the ruler spans more than a million, while it spans just eight units on the linear side. Try to get your head around this.

The power of two

There is a famous story about the person who invented the game of chess.² The monarch of the realm was so pleased with the game that he wanted to reward the inventor. Feeling very expansive, he said “Ask for anything and I will give it to you.” The inventor rather diffidently asked the king for one grain of rice on the first square of the chess board, double that number of grains on the second, double that number of grains again on the third, and so on till all the sixty-four squares had their quotas filled [6].

Figure 4: A chessboard in a state of play.

The king laughed and said, “Ask for something more. You deserve it.” The inventor quietly but persistently said, “Sire, kindly grant me what I have asked.” The king jovially asked his ministers to fulfil the inventor’s modest request, thinking all would be well. Little did he know that the entire granary of the kingdom would be emptied before each square received its quota of rice grains. Can you explain why?

Grains of rice on a chess board

Let us number the squares on the chess board from 1 to 64. The first square has one grain, which is 2^0. The second has two grains, which is 2^1. Likewise, the nth square will have 2^{n−1} grains of rice.

The total number of grains of rice will be given by the formula:

2^0 + 2^1 + 2^2 + ⋯ + 2^63

Recognizing this as the sum of a geometric series with first term a = 1, common ratio r = 2, and n = 64 terms, the sum is given by [7]:

S = a(r^n − 1)/(r − 1) = 2^64 − 1 = 18,446,744,073,709,551,615

Assuming that 50 grains of rice have a mass of one gram, the total mass of grains of rice in metric tonnes would be about 3.69 × 10^11 metric tonnes. India’s total annual rice production in 2023–2024 was about 1.4 × 10^8 metric tonnes. The inventor of chess in the seventh century asked for more than 2,500 times the rice produced in India in 2023–2024! He certainly knew about the power of the exponent.
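A three-line Python check of this arithmetic (using the assumed 50 grains per gram):

# Chessboard rice: total grains and mass, assuming 50 grains per gram.
grains = 2**64 - 1                # 2^0 + 2^1 + ... + 2^63
tonnes = grains / 50 / 1_000_000  # grains -> grams -> metric tonnes
print(grains, f"{tonnes:.3e}")    # ~3.689e+11 tonnes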

The moral of this story is that exponentials are beguilingly difficult for human beings to grasp. That is why logarithms and logarithmic scales, which linearize exponentials, were invented.

Napier and logarithms

Logarithms were developed by an eccentric³ Scottish laird called John Napier around 1614. He devoted twenty years of his life to achieving this. In these days of mobile phones with calculators, and computational packages on laptops, it is difficult to imagine a time when the tedium of calculations impelled people to seek methods to ease the burden.

It has been suggested that Napier got the idea for performing additions in place of multiplications from trigonometric identities such as

2 cos A cos B = cos(A − B) + cos(A + B)

He might just as well have gotten the idea from the geometric progression 2, 2^2, 2^3, 2^4, …, where each successive term is obtained by multiplying the previous one by 2: something which could equally well have been accomplished by adding the exponent of 2—which is 1—to that of the previous term. This idea, which may seem commonplace to us now, was profoundly significant in Napier’s time. The laws of indices, which we now know, form the basis of the idea of logarithms.

Therefore, logarithms eventually reduced multiplications to additions and exponentiations to multiplications.⁴ Likewise, divisions became subtractions, and taking roots was replaced by divisions. This reduction in the hierarchy of the arithmetic operations came with a commensurate reduction in computational complexity. Logarithms were indeed a great labour-saving device for arithmetic operations.
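The trick is easily demonstrated in Python (a toy illustration of the principle, not of Napier’s actual tables):

import math

# Multiplication via addition of logarithms: a * b = exp(ln a + ln b).
a, b = 1234.5, 6.789
print(a * b)                                 # direct product
print(math.exp(math.log(a) + math.log(b)))   # same value via logs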

Where does e fit into all this? In quite a roundabout way, really.

Napier coined the word logarithm, which means “ratio number”. The scheme he devised was to produce a table of numbers N against their logarithms L [8] where

N = 10^7 (1 − 10^{-7})^L   (Equation 7)

Comparing this with the modern notation introduced by the prolific Euler, that y = b^x means x = log_b y, we find that what might correspond to the base in Napier’s logarithms was (1 − 10^{-7}), which because it is less than 1 means that his logarithms decreased with increasing numbers. Moreover, because of the factor 10^7, setting L = 0 gives N = 10^7 in Napier’s scheme whereas in modern notation, x = 0 gives y = 1 regardless of the base.

The strange thing is that logarithms to the base e, now called natural logarithms, used to be called Napierian logarithms, although he did not use e as the base. How did this association then arise?

Let us manipulate Equation 7 step-by-step as shown below to achieve the form y = b^x:

N = 10^7 (1 − 10^{-7})^L
N/10^7 = (1 − 10^{-7})^L
N/10^7 = [(1 − 10^{-7})^{10^7}]^{L/10^7}

The interesting point of the above derivation is that the number (1 − 10^{-7})^{10^7}, which we may now associate with the base of Napier’s logarithms using modern convention, works out to 0.3678794…, which is very close to 1/e [9]. This means that Napier had unwittingly used 1/e as the base of his logarithms and was tantalizingly close to discovering e, or its reciprocal.⁵ The fact that 1/e may be the result of a limiting process sets the scene for the next stage in the dénouement.
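A quick numerical check of this claim (my own, in Python):

import math

# Napier's implicit base (1 - 10^-7)^(10^7) is very nearly 1/e.
print((1 - 1e-7) ** 10_000_000)  # 0.3678794...
print(1 / math.e)                # 0.3678794...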

Compounding of interest

Banks charge or pay compound interest on money borrowed or invested with them. Let us assume that a sum P is invested with a bank that pays compound interest at the rate of r per annum, where r is expressed, not as a percentage, but as a fraction between zero and one. Let this interest be paid annually. Then at the end of one year, the money would have grown to P(1 + r). At the end of two years, the money would have grown to P(1 + r)^2. Thus after t years, the money would have grown to P(1 + r)^t.

In point of fact, nowadays, banks do not compute interest on an annual basis. They do so on a daily basis. Let us assume that there are 365 days in a year. Then, the interest rate per period, which in this case is the rate per day, is r/365, and there are 365 periods of compounding in one year, giving a sum at the end of the year of P(1 + r/365)^{365}. Likewise, in t years, there are 365t periods of compounding and the sum A at the end of t years will be:

A = P(1 + r/365)^{365t}   (Equation 8)

Now, what happens when the number of compounding periods grows? What happens if banks do not compute interest daily but every hour, or every minute, or every second? Is there a possible “get rich quick scheme” that involves getting paid interest every millisecond, say, or every nanosecond?

Change in compounding period

We will write a simple program to investigate how money grows as the frequency of compounding keeps increasing. The equation we will use is

A = P(1 + r/n)^{nt}   (Equation 9)

where P is the principal, r is the annual interest rate expressed as a fraction, n is the number of compounding periods per annum, t is the number of years, and A is the sum or amount at the end of t years.

We assign P = 100, r = 0.05, t = 1, and allow n to vary across annual, semi-annual, quarterly, monthly, weekly, daily, and hourly compounding periods. These correspond to values of n equal to 1, 2, 4, 12, 52, 365, and 8760 respectively.

A is computed using Equation 9 and the values of n and A are tabulated in Table 2. Two scripts are provided, one in Julia, and the other in Python 3, that accomplish this. The results are shown below in Table 2, where the last row has been added manually, as explained later on.
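For the curious, here is a minimal Python sketch in the same spirit as those scripts (the linked originals are authoritative):

# Evaluate A = P(1 + r/n)^(n t) of Equation 9 for various compounding
# frequencies n, with P = 100, r = 0.05, and t = 1.
P, r, t = 100, 0.05, 1
for n in [1, 2, 4, 12, 52, 365, 8760]:
    A = P * (1 + r / n) ** (n * t)
    print(f"{n:5d}   {A:.6f}")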

Table 2: Sums A for principal P = 100, rate r = 0.05, and time t = 1 year, with interest compounded at intervals of once a year, twice a year, quarterly, monthly, weekly, daily, and hourly. The last row shows the upper bound, with continuous compounding, yielding A = Pe^{rt}, as proved below.

n        A
1        105.000000
2        105.062500
4        105.094534
12       105.116190
52       105.124584
365      105.126750
8760     105.127095
∞        105.127109

What do you find noteworthy about this? Regardless of how frequently the interest is compounded, the amount or sum is solidly stuck around 105.13 or thereabouts. One might be forgiven for thinking that if the interest were added with breathtaking rapidity, the sum would somehow multiply astronomically. But alas, that is not how it works.

There is one trend that is apparent from the figures in the above table, though. The digits after the decimal point do increase very modestly, even if they seem to be bounded from above by some number. The one way to find that number is to progress from periodic compounding to instantaneous compounding. We derive the exact value of A for instantaneous compounding later in this blog in What is the sum with instantaneous interest?.

With the word instantaneous, we are on thin ice. Instantaneous velocity gave us calculus, with its inbuilt inconsistencies of dividing by something that is close to but not quite zero. So, we may expect something along those lines here also. Whenever instantaneous makes its presence onstage, zero and infinity cannot be far away. 😉

The road to e

There are three variables apart from n in Equation 9 for A. Let us simplify it by setting P = 1, t = 1, and r = 1. Note that the last assignment means that the bank pays 100% interest per annum: something that is very unlikely, but mathematically expedient for us! The equation for A now becomes

A = (1 + 1/n)^n   (Equation 10)

A Python 3 script called steps_to_e.py evaluates Equation 10 at logarithmic intervals and its results are tabulated below:

n         (1 + 1/n)^n
-----------------------------
1         2.00000000000000000
10        2.59374246010000231
100       2.70481382942152848
1000      2.71692393223559359
10000     2.71814592682492551
100000    2.71826823719229749
1000000   2.71828046909575338
10000000  2.71828169413208176
100000000 2.71828179834735773
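For reference, here is a minimal sketch of what steps_to_e.py might look like (the linked script is the authoritative version):

# Evaluate (1 + 1/n)^n of Equation 10 at logarithmically spaced n.
for k in range(9):
    n = 10**k
    print(f"{n:<9d} {(1 + 1/n)**n:.17f}")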

The values are suggestive of convergence, but it is not rapid. The limit is the historically named number e. A check with Wolfram Alpha gives the value of e as 2.71828182845904524 to seventeen decimal places.

We can also countercheck with SymPy, the Python library for symbolic mathematics, by running the script below:

from sympy import *

n = symbols("n")
S = limit((1 + 1 / n) ** n, n, oo)
print(S)

to get the result E, which attests to the validity of the limit. The script is at limit_e.py.

The expression (1 + 1/n)^n does converge to a finite non-zero value, which is its limit. And the value of this limit is the profoundly important mathematical constant e:

e = lim_{n→∞} (1 + 1/n)^n   (Equation 12)

What is the sum with instantaneous interest?

Instantaneous compounding does not lead to unlimited growth. We have guessed as much from the results of evaluating Equation 9 for different values of n, as shown in Table 2.

Now that we have defined e, we may obtain a closed form solution for the amount from instantaneous compounding of interest [10]. In Equation 9, we retain P, r, and t and let n approach infinity. A purist would use m rather than n when moving from countable intervals to continuous compounding. So, let us re-state the equation with m:

A = lim_{m→∞} P(1 + r/m)^{mt}

A magician’s distraction is called for here. We want the expression within the parentheses to have a second term with one as the numerator so that it looks like the second term in Equation 12. Let k = m/r. Then, the above equation becomes

A = lim_{k→∞} P(1 + 1/k)^{krt} = P[lim_{k→∞} (1 + 1/k)^k]^{rt} = Pe^{rt}

We can now confidently augment Table 2 by adding the last row with a value of ∞ for n and an upper bound of A = Pe^{rt} = 100e^{0.05} = 105.1271096… .
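We can cross-check the closed form numerically in Python (a sketch of my own):

import math

# P(1 + r/n)^(n t) approaches P e^(r t) as n grows without bound.
P, r, t = 100, 0.05, 1
print(P * math.exp(r * t))          # 105.12710963760242
n = 10**9                           # a billion compounding periods
print(P * (1 + r / n) ** (n * t))   # very nearly the same value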

Thus far, we have distinguished between x^n and n^x, emphasized that exponential growth is truly phenomenal, and considered compound interest at ever-decreasing intervals between interest payments, which finally led to the definition of e.

We have also glancingly looked at logarithms and contrasted linear and logarithmic scales. Central to all this is the rather diminutive number e, lying between 2 and 3, that occupies a pivotal place in much of mathematics.

Hereafter, we will continue exploring e and slowly invest it with mathematical trappings that go beyond mere numberhood and allow fascinating insights to emerge between seemingly unrelated fields.

Logarithms and the hyperbola

Limits are at the heart of both the differential and integral calculus. You have just seen one application of limits in defining the important number e. We will now take a look at the use of limits in integral calculus and the use of the logarithm as a function rather than as a mere computational aid. Our journey takes us through the history of finding areas under curves before the calculus had been fully fleshed out.

The procedure of finding the area under a closed planar curve is called quadrature or squaring. This is because the area may be thought of as being composed of little squares, which when assembled together and summed, equal the area under the curve.

Pierre de Fermat in France had achieved great success in computing the areas under curves of the form y = x^n.⁶ His method was to use a series of rectangles whose bases formed a geometric progression with common ratio less than one, and which therefore converged to a finite sum which could be calculated. The one curve, though, that he could not handle was the rectangular hyperbola, which is really a pair of curves defined by xy = 1. If Fermat applied his formula, he faced the problem of division by zero when n = −1, and the method failed.

Computing the area

It was one of Fermat’s contemporaries, Grégoire Saint-Vincent, who was known as the “circle-squarer”, who found a way to solve this problem. He also used intervals that were in a geometric progression, but he made an important discovery in the case of a hyperbola like y = 1/x.

Figure 5: The area under a rectangular hyperbola. Grégoire Saint-Vincent estimated the area under the hyperbola y = \tfrac{1}{x} by summing the differently coloured strips under the curve, of intervals which were in a geometric progression. He found that the areas of all the strips were the same, i.e., A_1 = A_2 = A_3, etc. This led to the later discovery that the area under the hyperbola was related to the logarithm as a function. See the text for a full explanation.

Saint-Vincent started his integration at x = 1 and divided the area under the curve into intervals along the x-axis that were in a geometric progression, as shown in Figure 5. He estimated the areas of the differently coloured strips and found that they were all equal. This was Saint-Vincent’s profound and original contribution. How did he do this?

Figure 6: Estimating the area under the hyperbola using adjacent trapeziums. The areas A_1 and A_2 are equal. See the text for the full explanation.

The account of Saint-Vincent’s method, as described below, has been drawn from several sources [8,9,11,12]. It has been simplified to use modern methods and terminology, while remaining faithful to the original in spirit and conception.

Consider Figure 6, which is Figure 5 redrawn to show how the unknown areas A_1, A_2, etc., may be approximated by the known areas T_1, T_2, etc., of the respective trapeziums that are shown. Note that the areas A_1, A_2, etc., are contiguous and non-overlapping.

  1. Dashed lines, like the one connecting the points (1, 1) and (2, 1/2) on the arc of the hyperbola, are drawn corresponding to x = 1 and x = 2 respectively, to get a trapezium whose area, T_1, is known exactly. The area of that trapezium is used to estimate the area A_1, as explained below.

  2. The point (1, 1) lies on the rectangular hyperbola y = 1/x. Its x-coordinate represents the start of both the geometric progression and the interval of integration. In our case, the common ratio is 2 because we do not seek convergence. The initial x-value is shown as x = 1 on Figure 6.

  3. The point (2, 1/2) also lies on the hyperbola. The straight line joining (1, 1) and (2, 1/2) is an approximation to the arc of the hyperbola between them. The trapezium with parallel sides of 1 and 1/2 and width 1 represents a first approximation to the unknown area A_1 shown in Figure 5. The known area of the trapezium, T_1, is

T_1 = (1/2) × (1 + 1/2) × (2 − 1) = 3/4

  4. Moving to the next trapezium with base between x = 2 and x = 4, we have

T_2 = (1/2) × (1/2 + 1/4) × (4 − 2) = 3/4 = T_1

  5. This pattern of all the trapezium areas being the same was the remarkable observation of Saint-Vincent.

  6. By repeatedly subdividing the intervals, it may be shown that in the limit, the values of each of the A_i and T_i will become equal. We will henceforth use A to denote the single value shown as A_1, A_2, A_3, etc., in Figure 5 and Figure 6. Note that the lower limit of area summation is x = 1 in all cases. We may then tabulate the respective integrals, intervals of summation, and areas so [12]:

Table 3: Area under a hyperbola versus interval of integration. NB: A denotes the common area of a single strip.

Integral           Upper limit   Area
∫_1^1 (1/x) dx     1             0
∫_1^2 (1/x) dx     2             A
∫_1^4 (1/x) dx     4             2A
∫_1^8 (1/x) dx     8             3A
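Saint-Vincent’s equal-area observation is easy to verify numerically. The following Python sketch (my own, using simple midpoint-rule quadrature) shows that the strips over [1, 2], [2, 4], and [4, 8] all have the same area, which we now recognize as ln 2:

import math

def strip_area(a, b, steps=100_000):
    # Midpoint-rule estimate of the area under y = 1/x from a to b.
    h = (b - a) / steps
    return sum(h / (a + (i + 0.5) * h) for i in range(steps))

for a, b in [(1, 2), (2, 4), (4, 8)]:
    print(f"area over [{a}, {b}] = {strip_area(a, b):.6f}")
print(math.log(2))  # 0.693147..., the common value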

And this is where the matter rested, until Alphonse Antonio de Sarasa—a student and later a colleague of Grégoire Saint-Vincent—took a look at the results, and realized that it was a mapping between a geometric and an arithmetic series, which meant that logarithms were involved.

A logarithm is a continuous real-valued function L(x) with the following two properties [14]:

  1. L(1) = 0; and

  2. L(xy) = L(x) + L(y).

Let us see if the function A(x), the area under the hyperbola from 1 to x, satisfies these two properties. By the first row of Table 3, property 1 is satisfied. Again, we have from Table 3 that

A(8) = 3A = A + 2A = A(2) + A(4)

In other words, A(2 × 4) = A(2) + A(4), and property 2 is satisfied as well. So, we may assert that the area under a hyperbola gives rise to a logarithm function:

A(x) = ∫_1^x (1/t) dt = log_b x   (Equation 13)

The only question now is, what is the base b of the logarithm?

The function that equals its own derivative⁷

An exponential function to a base b is defined as

f(x) = b^x, b > 0

Let us investigate the derivative of f(x) = b^x using the definition of the derivative, so:

f′(x) = lim_{h→0} [b^{x+h} − b^x]/h = b^x · lim_{h→0} (b^h − 1)/h   (Equation 14)

The limit on the right hand side may or may not exist. Let us assume for now that it does. Then we may set it to a value k and we then have the important relationship

f′(x) = k b^x = k f(x)

which means that the derivative of an exponential function at any point is proportional to the value of the function itself, at that point.

The next question is this: is there any value of b for which the constant of proportionality k equals one? That would give us a function whose value at any point equals its derivative at that point. Let us investigate.

For finite h, we set the limit term on the RHS of Equation 14 to 1, i.e.,

lim_{h→0} (b^h − 1)/h = 1

If this expression were identically equal to 1, then we may assert that

(b^h − 1)/h = 1   (Equation 15)

Solving Equation 15 for b, we get

b^h = 1 + h

and taking “roots” on either side,

b = (1 + h)^{1/h}   (Equation 16)

Since Equation 16 has been “derived” from Equation 14, taking the limit as h tends to zero for either should give equivalent results. That is, the value of b that makes an exponential function its own derivative is also the value of b that results from

b = lim_{h→0} (1 + h)^{1/h}   (Equation 17)

If in Equation 17 we replace h by 1/n, and note that h → 0 is equivalent to n → ∞, we may re-write Equation 17 as

b = lim_{n→∞} (1 + 1/n)^n   (Equation 18)

But we know from Equation 12 that the limit in Equation 18 is by definition equal to e.
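A numerical probe (my own) of the limit in Equation 14 shows why b = e is special; for any other base the constant of proportionality k differs from 1 (it is, in fact, the natural logarithm of the base):

import math

# Estimate k = lim (b^h - 1)/h for several bases using a small h.
h = 1e-8
for b in [2, math.e, 3]:
    print(f"b = {b:.5f}   k = {(b**h - 1) / h:.6f}")
# Only b = e gives k = 1; bases 2 and 3 give ln 2 and ln 3.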

It has been a bit of a hard slog, but we can now confidently say that the unique function that is its own derivative and anti-derivative is the exponential function with base . Indeed, this function is sufficiently important for it to be called the natural exponential function or the exponential function, as we have already seen.

So, when we talk of the exponential function, we mean

f(x) = e^x = exp(x)

Let us see where the foregoing leads. Let y = e^x. Then

dy/dx = e^x = y   (Equation 19)

If one takes reciprocals on either side of Equation 19, one gets

dx/dy = 1/y   (Equation 20)

This appears similar to the equation for the area under the rectangular hyperbola given in Equation 13. But what is x in terms of y?

The natural exponential and logarithmic functions

The natural logarithm function is that logarithm function that has e as its base. In a generic fashion, one may write it as log_e x but the accepted convention is to refer to it as ln x.⁸ Now ln x is the inverse of the exponential function e^x, which means,

ln(e^x) = x and e^{ln x} = x

Note carefully that because e^x is strictly greater than zero for real x, the domain of the natural logarithm function (and indeed of all logarithm functions) is x > 0. Inverse functions are reflections of each other on the line y = x on the Cartesian plane. This is illustrated for e^x and ln x in Figure 7.

Figure 7: The exponential and natural logarithm functions shown as reflections of each other on the line y=x. This is a property of inverse function pairs, f and f^{-1}, because f(f^{-1}(x)) = f^{-1}(f(x)) = x. Notice that \exp(0) = 1 and that \ln(1) = 0. This is true for all exponential and logarithmic functions regardless of the chosen base. The relative shapes of the exponential and logarithmic functions remain similar for bases other than e.
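The inverse relationship is directly visible in Python, whose math.log is the natural logarithm (see footnote 8):

import math

# exp and ln undo one another.
x = 2.5
print(math.log(math.exp(x)))  # ~2.5, valid for all real x
print(math.exp(math.log(x)))  # ~2.5, valid only for x > 0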

We are now in a position to answer the question asked at the end of the section Computing the area about the base of the logarithm which gave the area under a hyperbola. The base of the logarithm is e and we may write:

∫_1^x (1/t) dt = ln x   (Equation 21)

One may then use Equation 12 to define the exp function and Equation 21 to define the ln function, with the knowledge that they are an inverse function pair.

One might wonder if there is a geometrical significance to the number e like there is for π as the ratio of the circumference to the diameter of a circle. Think about this for a while before reading on. As before, the conics hold the answer.

Substituting x = e in Equation 21, we get

∫_1^e (1/t) dt = ln e = 1

In words, e is that number for which the area under the rectangular hyperbola y = 1/x, from 1 to e, is exactly 1. This equation comes closest to stitching e to geometry, but it uses the thread of calculus! Mark how the number 1 also plays a prominent role here.
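A numerical confirmation (my own midpoint-rule sketch) that the area under y = 1/x from 1 to e is indeed 1:

import math

def area(a, b, steps=1_000_000):
    # Midpoint-rule estimate of the integral of 1/t from a to b.
    h = (b - a) / steps
    return sum(h / (a + (i + 0.5) * h) for i in range(steps))

print(area(1, math.e))  # ~1.0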

Logarithms and dynamic range compression

Our human senses of sight and hearing each have enormous dynamic ranges. The eye can respond to light intensities across 13 orders of magnitude.⁹ Likewise, our ears can hear sound intensities ranging from whispers to explosions, across 12 orders of magnitude.

If you think of a weighing scale, it usually has a scale that ranges from, say, 0 kg to perhaps 150 kg. Most instruments only have a limited range over which they can measure. To increase the range, you may have to switch the input to another scale before making the measurement. How then do our ears and eyes accommodate such large dynamic ranges without the need for any form of switching?

The answer lies with logarithms. Logarithms naturally compress a large linear range to a more compact one. This would be clear from the graph of the logarithm function plotted in Figure 7.

There is a “law”, first propounded by the German physiologist Ernst Heinrich Weber, that the “just noticeable difference” (JND) that human beings experience in response to any physiological stimulus is related by the differential equation

dp = k (dS/S)

where dp is the JND, S the stimulus already present, and dS the stimulus increase. The German physicist Gustav Theodor Fechner popularized Weber’s hypothesis, which leads to the solution

p = k ln(S/S_0)

where S_0 is the threshold stimulus below which nothing is perceived. This is referred to as the Weber-Fechner law, but it is really only a hypothesis that has not achieved the status of a theory, much less a law, especially because it has to do with subjective sensation and perception.

The important lesson for us is that logarithmic compression allows very large dynamic ranges to be accommodated, without input sensor switching. Logarithmic scales abound in the natural sciences and engineering: the pH scale for acidity, the Richter scale for measuring earthquake intensities, and the decibel scale for sound intensity, or for signal voltage, and power in electrical engineering, to name just a few.
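To see the compression at work, here is a short Python sketch using the decibel scale for sound intensity, with the conventional reference intensity of 10^-12 W/m^2 (the nominal threshold of hearing):

import math

# Twelve orders of magnitude of intensity compress into 0-120 dB.
I0 = 1e-12  # reference intensity, W/m^2
for I in [1e-12, 1e-9, 1e-6, 1e-3, 1.0]:
    dB = 10 * math.log10(I / I0)
    print(f"I = {I:.0e} W/m^2  ->  {dB:5.1f} dB")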

Why is e important?

We have now reached the stage where we can answer the question, “Why is e important?”

The number e’s claim to fame is because of the remarkable properties of the exponential function e^x, which has the unique distinction of being its own derivative and anti-derivative. Stated formally,

d/dx (e^x) = e^x

and,

∫ e^x dx = e^x + C

where C is an arbitrary constant of integration. Nature is full of systems that can be modelled using this property of exponentials. In addition, the exponential and logarithmic functions are a formidable inverse mathematical pair, as we have seen in this blog.

In a succeeding blog, we will see that e is the natural bridge between the real and complex domains—a connection that has given rise to some very powerful mathematics.

Acknowledgements

Thanks are due to Wolfram Alpha, Desmos, and the various AI bots, too numerous to mention, for assistance at various stages of preparation of this blog.

Feedback

Since I work independently and alone, there is every chance that unintentional mistakes have crept into this blog, due to ignorance or carelessness. Therefore, I especially appreciate your corrective and constructive feedback.

Please email me your comments and corrections.

A PDF version of this article is available for download here:

References

[1]
Robin Wilson. 2018. Euler’s Pioneering Equation: The most beautiful theorem in mathematics. Oxford University Press.
[2]
Paul Loya. 2017. Amazing and Aesthetic Aspects of Analysis. Springer.
[3]
Alexander Hellemans. 2021. How Wavelets Allow Researchers to Transform, and Understand, Data. QuantaMagazine. Retrieved 6 April 2025 from https://www.quantamagazine.org/how-wavelets-allow-researchers-to-transform-and-understand-data-20211013/
[4]
J L Coolidge. 1950. The Number e. American Mathematical Monthly 57, 9 (1950), 591–602. DOI:https://doi.org/10.2307/2308112
[5]
J J O’Connor and E F Robertson. 2001. The number e. MacTutor History of Mathematics. Retrieved 18 April 2025 from https://mathshistory.st-andrews.ac.uk/HistTopics/e/
[6]
K C Cole. 1998. The Universe and the Teacup: The Mathematics of Truth and Beauty. Houghton Mifflin Harcourt.
[7]
Jenny Olive. 2003. Maths: A student’s survival guide (2nd ed.). Cambridge University Press.
[8]
C H Edwards Jr. 1979. The historical development of the calculus (1st ed.). Springer.
[9]
Eli Maor. 1994. E: The story of a number. Princeton University Press.
[10]
Robert B Banks. 1999. Slicing pizzas, racing turtles, and further adventures in applied mathematics. Princeton University Press.
[11]
U G Mitchell and Mary Strain. 1936. The Number e. Osiris 1, (1936), 476–496.
[12]
David Perkins. 2012. Calculus and its origins. Mathematical Association of America (MAA).
[13]
J W Bradshaw. 1903. The logarithm as a direct function. Annals of Mathematics 4, (1903), 51–62. DOI:https://doi.org/10.2307/1967113
[14]
Serge Lang. 2001. Short calculus: The original edition of ‘A First Course in Calculus’ (Reprint of the 1st ed. Addison–Wesley, 1964. ed.). Springer.

  1. I later found that this link is a chapter from a draft of the book with the charmingly alliterative title Amazing and Aesthetic Aspects of Analysis [2] where it is now chapter 8.↩︎

  2. The precursor called chaturanga was invented in India around the 600s.↩︎

  3. This word has both a common and a mathematical meaning. Can you reconcile the two?↩︎

  4. We touched upon this idea in the blog Varieties of Multiplication.↩︎

  5. It is often erroneously believed that Napier used e as the base of his logarithms, but we know that his “base” was less than 1 and was indeed very nearly 1/e.↩︎

  6. These are our monomial power functions.↩︎

  7. Beginning with the heading, this section, more than others, is heavily borrowed from Eli Maor’s excellent text e: The Story of a Number [9].↩︎

  8. Mathematical conventions and practice might change. Programming languages might use log instead of ln. Beware! You have been forewarned.↩︎

  9. An order of magnitude conventionally means a power of ten. Two orders of magnitude thus refers to a ratio between two quantities that is either one hundred or one hundredth.↩︎

Copyright © 2006 – 2025, R (Chandra) Chandrasekhar. All rights reserved.