The math here is driven by what is by far my favorite interview question. In all the interviews that I’ve conducted as an engineer and engineering manager at various places, not one person has ever gotten it right. (So by reading this, you’re giving yourself a leg up.) It’s not a great interview question really, but I just find it interesting that this tidbit stuck with me, and no one else seems to remember it from engineering school.

First, a little background. A great deal of signal processing—and analysis of linear systems in general—relies on basis decomposition. This essentially means taking a signal and breaking it down into a sum of much simpler functions that are each easy to deal with. In Fourier analysis, the *basis* consists of the set of sinusoids of different frequencies. When adding together the constituent parts, you can vary the amplitude and phase (equivalently the complex amplitude) from one frequency to the next. This is what defines linear analysis/synthesis: basically you can only scale and add.

Further, the individual sines/cosines are called *orthogonal*—a term borrowed from vector math—because you cannot decompose a sine wave into a sum of sine waves of different frequencies:

$$A \sin(f_1 x) + B \sin(f_2 x) \ne \sin(f_3 x)$$

(For distinct frequencies and non-zero A and B, of course.) It’s just like having three mutually perpendicular (or *orthogonal*) vectors: in 3D space, no linear combination of a vector in the x-direction and one in the y-direction can result in a vector in the z-direction.

If, in addition, the sine waves forming the basis are scaled appropriately relative to each other, they are said to be *normalized*. All of this makes an *Orthonormal Basis*.
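This orthonormality is easy to check numerically. Here's a minimal sketch (NumPy with a simple trapezoidal integral; the $1/\pi$ scaling matches the coefficient formulas further down): the inner product of two sine basis functions over one period is 1 when the frequencies match and 0 when they don't.

```python
import numpy as np

# Sketch: inner product of sin(m*x) and sin(n*x) over [-pi, pi], scaled by
# 1/pi. Orthonormality means this is 1 for m == n and 0 for m != n.
x = np.linspace(-np.pi, np.pi, 200001)

def inner(m, n):
    g = np.sin(m * x) * np.sin(n * x) / np.pi
    dx = x[1] - x[0]
    return np.sum(g[:-1] + g[1:]) * dx / 2  # composite trapezoidal rule

print(inner(3, 3))  # ~1: same frequency ("normalized")
print(inner(3, 5))  # ~0: different frequencies ("orthogonal")
```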

Now with that basis—pun intended—if you take a periodic signal, you can find the amplitude of each constituent sine wave making up the signal by calculating the Fourier series. Wikipedia and countless other sources provide the formulas for using sines and cosines to analyze the periodic function f(x):

$$ f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n \cos(nx) + b_n \sin(nx)\right] $$

where:

$$a_n= \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos(nx) dx, \quad n \ge 0 $$

and:

$$b_n= \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin(nx) dx, \quad n \ge 1 $$

The coefficients $a_n$ and $b_n$ provide the amplitudes of cosines and sines with a discrete set of frequencies that are all multiples of the repetition rate of the original periodic signal. These are the *harmonics* of the original signal. (For non-periodic signals, you calculate the continuous Fourier Transform of the signal to get the amplitudes of a continuum of frequencies of sine waves. Likewise, you can use other basis functions, such as the set of Laguerre-Gaussians if you’re dealing in 2D with circular symmetry.)
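As a sanity check on the coefficient formulas, here's a sketch (NumPy, trapezoidal integration) that evaluates them for a ±1 square wave. Analytically, such a wave has $a_n = 0$ for all $n$ and $b_n = 4/(n\pi)$ for odd $n$ (zero for even $n$), which is exactly what the integrals return.

```python
import numpy as np

# Sketch: evaluate the Fourier coefficient integrals for a +/-1 square wave
# with a 50% duty cycle that crosses zero at the origin. Analytically,
# a_n = 0 for all n, and b_n = 4/(n*pi) for odd n (zero for even n).
x = np.linspace(-np.pi, np.pi, 100001)
f = np.sign(np.sin(x))  # the square wave
dx = x[1] - x[0]

def integrate(g):
    return np.sum(g[:-1] + g[1:]) * dx / 2  # trapezoidal rule

for n in range(1, 6):
    a_n = integrate(f * np.cos(n * x)) / np.pi
    b_n = integrate(f * np.sin(n * x)) / np.pi
    print(n, round(a_n, 3), round(b_n, 3))
```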

Again, the good people editing Wikipedia have some lovely graphics that you should look at. I will only show a couple of basic things here that get us back to the original point. Let’s look at the first three odd sine harmonics of a square wave. (As it happens, if the square wave crosses zero at the origin and has a 50% duty cycle, it has only odd, sine harmonics.) Superimposing these on the square wave, we see that the fundamental (red) has the same period as the square wave itself, and it “fills in” most of the square wave, like painting a wall with a big roller. The next two odd harmonics have three (green) and five (blue) times the frequency of the fundamental, and they serve to fill in the corners of the square wave, like painting with a trim brush and a detail brush.

Also note that the amplitudes of the harmonics decrease with increasing frequency. Think of it this way: only the sharp transitions benefit from the detail provided by higher and higher frequencies, and these transitions are a relatively small part of the whole signal. And note too that the harmonics and the square wave all cross y=0 together, and the phases of the harmonics alternate in the center of a square pulse (the red and blue traces have peaks where the green trace has a valley).

Now instead of looking at the harmonics individually, we will build the square wave by adding harmonics together. We don’t truly get a square wave until we have added in infinitely many sine waves of increasing frequency. Until infinity—which is never reached—we end up with varying degrees of overshoot and undershoot at the transitions and ripple in the “flats”. The next figure shows the square wave (black) and the sums of the first five (blue), ten (red), and twenty (green) non-zero harmonics.
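That partial-sum construction can be sketched directly (NumPy, using the $b_n = 4/(n\pi)$ amplitudes of the odd harmonics). Sampling the sum at $x = \pi/2$, the center of a “flat,” shows the ripple settling toward the true value of 1 as harmonics are added.

```python
import numpy as np

# Partial sums of the square wave's Fourier series: add the first N odd sine
# harmonics with amplitudes b_n = 4/(n*pi). Sampling at x = pi/2 (the middle
# of a "flat") shows the ripple oscillating around the true value of 1.
def square_partial(x, n_harmonics):
    total = np.zeros_like(x, dtype=float)
    for k in range(n_harmonics):
        n = 2 * k + 1  # odd harmonics only: 1, 3, 5, ...
        total += (4 / (np.pi * n)) * np.sin(n * x)
    return total

for N in (5, 10, 20):
    mid = square_partial(np.array([np.pi / 2]), N)[0]
    print(N, round(mid, 4))
```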

Now the answer to the interview question: Interestingly, the amplitude of the over/undershoot is constant at roughly 9% of the square wave’s amplitude as you add more and more constituent frequencies. (Rigorously it is not constant, but rather it approaches this finite limit.) This is the Gibbs Phenomenon.

*Gibbs Phenomenon*. Remember that.