Orthogonality Is the Key to Understanding Fourier Series Expansion



I would say that "orthogonality" is the key to understanding Fourier series expansion.

In engineering and science, we deal with waveforms. A waveform may represent a characteristic of a phenomenon with respect to a certain variable, or the variation of a quantity as a certain variable, for instance time, evolves.

In order to apply a mathematical method to a given waveform, the waveform must be represented by a simple mathematical expression. If a given waveform is represented by a complicated equation or just by numerical data, then applying a certain mathematical method would be quite complicated.


For representing a given waveform by a simple mathematical expression, note that there are basically three different categories. If you are interested in local behavior of a waveform, use Taylor series. If a given waveform is periodic and you want to capture the entire behavior of the waveform, then use Fourier series. If a given waveform is single-shot and you want to capture the entire behavior of the waveform, then use Fourier or Laplace transform.

So, when you want to capture the entire behavior of a periodic waveform, you should use Fourier series.


Fourier series expansion tries to approximate a periodic waveform by a sum of sinusoidal waves. Each sinusoidal wave must have a period whose integer multiple equals the period of the given waveform. In other words, the frequency of each sinusoidal wave must be an exact multiple of the given waveform's frequency.

I presume that you are familiar with describing a sinusoidal wave by an exponential function with an imaginary argument. If not, watch the video or read the text entitled "Oscillation Kernel."


Since Fourier series is often applied to time-periodic waveforms, let's consider a time-periodic function $f(t)$, where $t$ is time. When the period of $f(t)$ is $T$, its frequency is $f_0 = 1/T$ and its angular frequency is $\omega_0 = 2\pi f_0 = 2\pi/T$. Fourier series expansion approximates $f(t)$ by a sum of sinusoidal waves:

$$f(t) = \sum_{n=-\infty}^{\infty} c_n e^{j n \omega_0 t}$$

where $c_n$ for $n = \ldots, -2, -1, 0, 1, 2, \ldots$ are called Fourier series coefficients. Note that the angular frequency of each sinusoidal wave is $n\omega_0$, which is an exact multiple of the given waveform's angular frequency $\omega_0$.


Why are they exact multiples? This can be clearly explained by the following counterexample. What if a sinusoidal wave whose frequency is not an exact multiple of the given waveform's frequency were included in the series? Then that component, included in the summation, would destroy the periodicity of the given waveform! So, each sinusoidal component must have a frequency that is an exact multiple of the given waveform's frequency.

Fourier series expansion says that a periodic waveform can be approximated by a sum of sinusoidal waves having frequencies which are exact multiples of the frequency of the given periodic waveform. Those sinusoidal waves are often called harmonics.
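The periodicity argument above can be checked numerically. Here is a minimal sketch, assuming NumPy is available; the period and coefficient values are made up for illustration. Shifting a sum of exact harmonics by one period leaves it unchanged, while a component at a non-multiple frequency breaks that periodicity.

```python
import numpy as np

T = 2.0                      # fundamental period (illustrative value)
w0 = 2 * np.pi / T           # fundamental angular frequency
t = np.linspace(0.0, T, 1000)

# A sum of harmonics: every frequency is an exact multiple of w0.
f = 1.0 + 0.5 * np.cos(w0 * t) + 0.3 * np.cos(3 * w0 * t)

# Shifting by one period T leaves a harmonic sum unchanged ...
f_shift = 1.0 + 0.5 * np.cos(w0 * (t + T)) + 0.3 * np.cos(3 * w0 * (t + T))
print(np.allclose(f, f_shift))   # True: periodicity is preserved

# ... but adding a component at 1.5*w0 (not an exact multiple) breaks it:
# cos(1.5*w0*(t+T)) = cos(1.5*w0*t + 3*pi) = -cos(1.5*w0*t).
g = f + 0.3 * np.cos(1.5 * w0 * t)
g_shift = f_shift + 0.3 * np.cos(1.5 * w0 * (t + T))
print(np.allclose(g, g_shift))   # False: period-T periodicity is lost
```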


From now on, we call the complex sinusoidal wave $e^{j n \omega_0 t}$ an "oscillation kernel" in this text.

Before talking about the main topic of this text which is orthogonality, let's check the following two interesting points.


The Fourier series expansion formula sums not only over positive frequencies but also over negative frequencies, because the summation index starts from negative infinity. What is a negative frequency?

In our real world, causality is always satisfied. Causality means that a physical or engineering system does not produce an output before the corresponding input is received. Under this condition, an oscillation kernel with its coefficient, $c_n e^{j n \omega_0 t}$, which exists as part of a waveform in the real world, is always accompanied by its complex-conjugate counterpart

$$c_n^* e^{-j n \omega_0 t} = c_{-n} e^{j(-n)\omega_0 t}, \qquad c_{-n} = c_n^*.$$

Look at this equation! The accompanying complex-conjugate wave is actually the negative-frequency component. If these two components are superimposed, then applying Euler's formula gives

$$c_n e^{j n \omega_0 t} + c_n^* e^{-j n \omega_0 t} = 2\,\mathrm{Re}\left\{c_n e^{j n \omega_0 t}\right\} = 2|c_n|\cos(n\omega_0 t + \theta_n),$$

where $c_n = |c_n| e^{j\theta_n}$. Amazingly, the result is not a complex number but a real number! Causality, which is always satisfied in the real world, forces the complex-conjugate constraint mentioned above, always canceling out the imaginary parts and leading to the fact that waveforms observed in the real world are all real.
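This cancellation is easy to verify numerically. A small sketch, assuming NumPy; the coefficient, harmonic order, and period below are arbitrary illustrative values:

```python
import numpy as np

w0 = 2 * np.pi                 # angular frequency for period T = 1
n = 3                          # harmonic order (arbitrary choice)
c_n = 0.8 * np.exp(1j * 0.6)   # an arbitrary complex coefficient

t = np.linspace(0.0, 1.0, 500)
# Superimpose the kernel and its complex-conjugate counterpart.
pair = c_n * np.exp(1j * n * w0 * t) + np.conj(c_n) * np.exp(-1j * n * w0 * t)

# The imaginary parts cancel to floating-point noise ...
print(np.max(np.abs(pair.imag)))        # ~0

# ... and the real result equals 2|c_n| cos(n*w0*t + theta_n).
cosine = 2 * np.abs(c_n) * np.cos(n * w0 * t + np.angle(c_n))
print(np.allclose(pair.real, cosine))   # True
```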

The zeroth-order coefficient $c_0$ is an exception. It is always real and does not have a complex-conjugate counterpart. For electrical engineers, it is the dc (direct current) component.

If you are further interested in this topic, watch the videos or the texts entitled "Oscillation Kernel" and "Understanding Euler's Formula through its Derivation."


Another point to note is that the Fourier series coefficients are complex-valued, except for the zeroth-order coefficient. If a coefficient $c_n$, where $n \neq 0$, is written in the polar form

$$c_n = |c_n| e^{j\theta_n},$$

then the $n$th-order oscillation kernel with its coefficient can be written as

$$c_n e^{j n \omega_0 t} = |c_n| e^{j(n\omega_0 t + \theta_n)}.$$

So, the coefficient $c_n$ specifies not only the amplitude $|c_n|$ of the oscillation kernel but also its phase angle $\theta_n$. The use of a complex-valued coefficient makes it possible to specify both the amplitude and the phase angle with a single number.
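The amplitude and phase can be pulled back out of a single complex coefficient with the standard polar-form functions. A tiny sketch, assuming NumPy; the values are arbitrary:

```python
import numpy as np

# A single complex coefficient packs both amplitude and phase:
# polar form |c| * e^{j*theta} with |c| = 0.8 and theta = pi/3.
c = 0.8 * np.exp(1j * np.pi / 3)

print(np.abs(c))     # ~0.8:   the amplitude of the oscillation kernel
print(np.angle(c))   # ~pi/3:  the phase angle theta
```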


Ok, let's talk about the main topic, orthogonality.

If we are talking about vectors, orthogonality can be interpreted as perpendicularity. If the angle formed by two vectors is 90 degrees, they are orthogonal. In other words, if the inner product of two vectors is zero, they are orthogonal.

This concept can also be applied to two periodic functions. Let's define the inner product of two periodic functions $f(t)$ and $g(t)$: $f(t)$ is multiplied by the complex conjugate of $g(t)$, and the product is integrated over a common multiple of the period of $f(t)$ and that of $g(t)$:

$$\langle f, g \rangle = \frac{1}{T_c} \int_0^{T_c} f(t)\, g^*(t)\, dt,$$

where $T_c$ is the common multiple of the periods mentioned above.

In Fourier series, we use oscillation kernels for the expansion. So, let's calculate the inner product of $e^{j n \omega_0 t}$ and $e^{j m \omega_0 t}$:

$$\langle e^{j n \omega_0 t}, e^{j m \omega_0 t} \rangle = \frac{1}{T} \int_0^{T} e^{j n \omega_0 t}\, e^{-j m \omega_0 t}\, dt = \frac{1}{T} \int_0^{T} e^{j(n-m)\omega_0 t}\, dt$$

The period of $e^{j n \omega_0 t}$ is $T/|n|$, and that of $e^{j m \omega_0 t}$ is $T/|m|$. So, we can use $T$ as a common multiple.

In the case of $n = m$, the integrand becomes one. Thus, the inner product is $T$ multiplied by $1/T$, which is unity. This implies that $e^{j n \omega_0 t}$ and $e^{j m \omega_0 t}$ point in the same direction and thus are not orthogonal to each other, of course, because they are the same function.

In the case of $n \neq m$, on the other hand, the integrand is $e^{j(n-m)\omega_0 t}$, where $n - m$ is a nonzero integer. Since $T$ is a multiple of the period of $e^{j(n-m)\omega_0 t}$, integrating it over $T$ gives zero. Thus, the inner product is zero; in other words, $e^{j n \omega_0 t}$ and $e^{j m \omega_0 t}$ are perpendicular, and thus orthogonal, to each other.

Oscillation kernels of different orders are orthogonal to each other, and they form a set of orthogonal bases.
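This orthogonality can be confirmed numerically. A minimal sketch, assuming NumPy; the normalized integral over one period is approximated by a mean over uniform samples of that period:

```python
import numpy as np

T = 1.0                       # period (illustrative value)
w0 = 2 * np.pi / T            # fundamental angular frequency
N = 1024
t = np.arange(N) * T / N      # one period, endpoint excluded

def inner(n, m):
    # <e^{j n w0 t}, e^{j m w0 t}> = (1/T) * integral of the product with
    # the second kernel conjugated; the mean over uniform samples of one
    # period approximates (and here, for integer n, m, exactly equals)
    # the normalized integral.
    return np.mean(np.exp(1j * n * w0 * t) * np.conj(np.exp(1j * m * w0 * t)))

print(abs(inner(3, 3)))   # 1.0: same kernel, unit "length"
print(abs(inner(3, 5)))   # ~0:  different orders are orthogonal
```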


Let's look at the Fourier series expansion formula again:

$$f(t) = \sum_{n=-\infty}^{\infty} c_n e^{j n \omega_0 t}$$

Fourier series expansion represents a given periodic waveform $f(t)$ by the sum of the orthogonal bases $e^{j n \omega_0 t}$ for $n = \ldots, -2, -1, 0, 1, 2, \ldots$.

Especially when we use only $N$ positive frequencies, their negative counterparts, and the zeroth-order term to approximate a given periodic waveform $f(t)$, we may use

$$f(t) \approx \sum_{n=-N}^{N} c_n e^{j n \omega_0 t}.$$

This Fourier expansion tries to approximate $f(t)$ by the sum of $2N + 1$ orthogonal bases.


What is the inner product of two vectors? It is the projection of one vector onto the other. So, what does this mean for two functions? Consider

$$\langle f(t), e^{j n \omega_0 t} \rangle = \frac{1}{T} \int_0^{T} f(t)\, e^{-j n \omega_0 t}\, dt.$$

This is the projection of a periodic waveform $f(t)$ onto the basis $e^{j n \omega_0 t}$. Well, this means that we are calculating how much of the $e^{j n \omega_0 t}$ component is included in $f(t)$, which is actually the Fourier series coefficient $c_n$. So, we may be able to calculate $c_n$ by

$$c_n = \frac{1}{T} \int_0^{T} f(t)\, e^{-j n \omega_0 t}\, dt.$$

Let's call this the "extraction equation" in this text. But is this equation true? Ok, let's assume that a given periodic waveform $f(t)$ consists of the $n$th-order component with its coefficient, $c_n e^{j n \omega_0 t}$, and the remaining part $r(t)$:

$$f(t) = c_n e^{j n \omega_0 t} + r(t).$$

Substituting this into the extraction equation to get $c_n$ gives

$$\frac{1}{T} \int_0^{T} f(t)\, e^{-j n \omega_0 t}\, dt = \frac{1}{T} \int_0^{T} c_n e^{j n \omega_0 t}\, e^{-j n \omega_0 t}\, dt + \frac{1}{T} \int_0^{T} r(t)\, e^{-j n \omega_0 t}\, dt.$$

According to orthogonality, the first term is $c_n$. The second term is zero, since $r(t)$ does not include an $e^{j n \omega_0 t}$ component. So,

$$\frac{1}{T} \int_0^{T} f(t)\, e^{-j n \omega_0 t}\, dt = c_n.$$

We are able to extract $c_n$ from a given waveform using the extraction equation.
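The extraction argument can also be verified numerically. A sketch assuming NumPy; the waveform below is synthesized from made-up coefficients (a dc term and a 2nd-harmonic conjugate pair) so the extracted values can be compared against known ones:

```python
import numpy as np

T = 1.0                       # period (illustrative value)
w0 = 2 * np.pi / T
N = 1024
t = np.arange(N) * T / N      # one period, endpoint excluded

# Synthesize a waveform with known coefficients: c_0 = 1 and a
# conjugate pair c_2, c_{-2} = c_2* (so the waveform is real).
c2 = 0.5 * np.exp(1j * np.pi / 4)
f = 1.0 + c2 * np.exp(2j * w0 * t) + np.conj(c2) * np.exp(-2j * w0 * t)

def extract(n):
    # Extraction equation: c_n = (1/T) * integral of f(t) e^{-j n w0 t} dt,
    # approximated as a mean over uniform samples of one period.
    return np.mean(f * np.exp(-1j * n * w0 * t))

print(extract(0))   # ~1.0  (the dc component)
print(extract(2))   # ~c2   (the planted 2nd-harmonic coefficient)
print(extract(5))   # ~0    (no 5th harmonic in f)
```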


So, Fourier series expansion gives us a way to represent a given periodic waveform by a sum of oscillation kernels:

$$f(t) = \sum_{n=-\infty}^{\infty} c_n e^{j n \omega_0 t}.$$

Although it looks complex-valued, the imaginary parts are cancelled out due to causality, so it is real-valued. You can calculate the values of the coefficients $c_n$, which are the magnitudes of the oscillation kernels including their phase-angle information, using

$$c_n = \frac{1}{T} \int_0^{T} f(t)\, e^{-j n \omega_0 t}\, dt.$$

In practical applications, we use only up to the $N$th term: $N$ positive frequencies, their negative counterparts, and the zeroth-order term, altogether $2N + 1$ terms.


In other words, Fourier series expansion gives us a way to decompose a periodic function into weighted oscillation kernels. The weightings, which are complex-valued so as to include phase-angle information, are the Fourier coefficients, and they can be obtained as inner products with respect to the oscillation kernels.

The oscillation kernels form a set of orthogonal bases, so the inner product of different-order oscillation kernels is zero. Calculating a Fourier coefficient is equivalent to projecting the given periodic waveform onto that oscillation kernel using the inner product formula, the extraction equation in this text. The resultant Fourier coefficient is the weighting of that oscillation kernel.

When we use up to the $N$th term, we are representing a given periodic waveform by the sum of $2N + 1$ weighted oscillation kernels. Since the oscillation kernels form a set of orthogonal bases, they are like unit vectors in a $(2N + 1)$-dimensional space, and their weightings, in other words the Fourier coefficients, are the components along those unit vectors. So, Fourier series expansion can be viewed as a way to represent a periodic waveform by a sum of weighted vectors in different, or independent, directions.


Finally, let's apply Fourier series expansion to a periodic square waveform.

This figure shows a periodic square waveform whose period is $T$.

A periodic square waveform

So, its angular frequency is $\omega_0 = 2\pi/T$.

The Fourier coefficients of this waveform are calculated by the extraction equation

$$c_n = \frac{1}{T} \int_0^{T} f(t)\, e^{-j n \omega_0 t}\, dt.$$

Here, let's suspend this calculation and simplify

Ok, let's resume the calculation of .

The equation above is not applicable to $n = 0$, since there is $n$ in the denominator. To get $c_0$, we calculate it from scratch.

So, we get the Fourier coefficients of the periodic square wave.

It is interesting to superimpose the truncated series

$$f_N(t) = \sum_{n=-N}^{N} c_n e^{j n \omega_0 t}$$

on the given waveform.

When $N = 0$, so only with the constant term $c_0$, we get

0th-order approximation of a periodic square waveform

When $N = 1$, which means we additionally include $c_1 e^{j \omega_0 t}$ and $c_{-1} e^{-j \omega_0 t}$ only, we get

1st-order approximation of a periodic square waveform

When $N = 3$,

3rd-order approximation of a periodic square waveform

When $N = 5$,

5th-order approximation of a periodic square waveform

If we include up to the 19th order, then we get

19th-order approximation of a periodic square waveform

As we increase the number of terms included, the Fourier series expansion better approximates the given waveform.
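This convergence can be reproduced numerically. The figure's exact square wave is not given here, so the following sketch assumes a unit-amplitude square wave that is $+1$ on the first half period and $-1$ on the second (NumPy assumed); the coefficients come from the extraction equation and the error of the truncated series shrinks as $N$ grows:

```python
import numpy as np

T = 1.0                       # period (illustrative value)
w0 = 2 * np.pi / T
M = 4096
t = np.arange(M) * T / M      # one period, endpoint excluded

# Assumed waveform: +1 on the first half period, -1 on the second.
f = np.where(t < T / 2, 1.0, -1.0)

def c(n):
    # Extraction equation, approximated by a mean over one sampled period.
    return np.mean(f * np.exp(-1j * n * w0 * t))

def partial_sum(Nmax):
    # Truncated series f_N(t) = sum over n = -Nmax .. Nmax.
    return sum(c(n) * np.exp(1j * n * w0 * t) for n in range(-Nmax, Nmax + 1))

for Nmax in (1, 3, 5, 19):
    err = np.mean((f - partial_sum(Nmax).real) ** 2)
    print(Nmax, float(err))   # mean squared error shrinks as N grows
```

Because each added conjugate pair is orthogonal to the terms already present, the mean squared error can only decrease as more terms are kept.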


I hope you have reached a real understanding of Fourier series expansion. Thank you.
