# An Interactive Guide To The Fourier Transform

The Fourier Transform is one of the deepest insights ever made. Unfortunately, the meaning is buried within dense equations:

This is a companion discussion topic for the original entry at http://betterexplained.com/articles/an-interactive-guide-to-the-fourier-transform/

Those animations are AMAZING. Mad mad props. But I don’t really get how the time points are spaced…
“The time points are spaced at the fastest frequency. A 1Hz signal needs 2 time points for a start and stop (a single data point doesn’t have a frequency). The time values [1 -1] shows the amplitude at these equally-spaced intervals.”

Like for a 1 Hz signal why are you measuring at 2 points, for a 2 Hz signal at 3 points, for a 3 Hz signal at 4 points and so on? Does this have something to do with the Nyquist-Shannon sampling theorem?
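The quoted rule is easy to check numerically. A minimal sketch (assuming numpy is available; not part of the original thread): the two samples [1, -1] are exactly enough for the DFT to report a 1 Hz component, since N samples give N frequency bins (0 through N-1 cycles per interval), which is the discrete cousin of the Nyquist idea in the question.

```python
import numpy as np

# Sample a 1 Hz signal at the 2 points the article describes: values [1, -1].
# N samples resolve frequencies 0 .. N-1 cycles over the interval, which is
# why a 2 Hz component needs 3 points, a 3 Hz component needs 4, and so on.
samples = np.array([1.0, -1.0])
spectrum = np.fft.fft(samples) / len(samples)  # divide by N to get amplitudes

print(np.round(spectrum.real, 6))  # [0. 1.]: no DC, amplitude 1 at the 1 Hz bin
```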

Thanks Waldir, just fixed.

FINALLY THIS MAKES SENSE

A couple examples of using the Fourier integrals and series on a known signal (e.g. a simple one like 1+sin(t)) and showing how the ‘parts’ are extracted would be helpful.

I tried and failed to get what I expected (e.g. 1 for the DC component for this little signal above at w=0) when I attempted to evaluate the integral form, which probably just means I forgot how to integrate properly (but Maxima blew up on it too).

Kalid - As always, thanks so much for your efforts! I look forward to you tackling the Laplace transform one of these days:)

The analogy is rather out of place, confusing, and seems like an unnecessary detour. Most important of all, the author seems to lack primary insight on the subject himself. A smoothie is a whole and its ingredients are parts; it is not the same with signals at all, which are each whole in their own domain. The analogy is totally misplaced and the information provided only partly correct. The key idea of the FT, a change of variable, is not emphasized at all. Only well-informed people should be allowed to author such articles.
That said, there are a couple of good insights for new learners: (1) the decomposition of a signal into other signals, though that hardly warrants such a confusing and misplaced analogy, and (2) the representation of complex numbers on the z-plane, though this may not be the best place for such a long discussion of that.

Hi Genius,
Thanks for the article! It was awesome as always, but I have a doubt. What do negative values in time mean, like -1? The graph is always positive in x, and the y value is just the projection of the point onto x, so I can’t get the meaning of negative time.

I think I can answer that question also. When the frequency domain is discrete, it makes sense to talk about non-infinite amounts of each component, as you describe. The trouble is, when the frequency domain is continuous, it makes less sense to talk about the amount at each frequency, and an “amount density” is more applicable. After all, in the real world there is no such thing as a pure sine tone; everything has some width in frequency space. But if a finite width has noninfinitesimal “amounts” at every frequency, then the total amount is infinite, which makes no sense. It is the same principle by which discussing the mass of an object in detail requires a notion of local mass density: if every POINT in the object had its own noninfinitesimal mass, the whole thing would have infinite mass. The Dirac delta is a formal way to revive the concept of a pure sine frequency even after we have moved to the notion of densities: it is the representation of a “point mass” in the “density” way of thinking about things.

I think what you said about the height of the transform being equal to the amplitude is correct in the discrete frequency interpretation, because there densities DON’T make sense, while “amounts” do.
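The “point mass vs. density” picture above can be sketched numerically: unit-area bumps of shrinking width keep the same total “amount” while the peak height blows up, which is the limit the Dirac delta formalizes. A minimal illustration (assuming numpy; the 5 Hz center and the widths are arbitrary choices for the demo):

```python
import numpy as np

# Unit-area rectangular "density bumps" of shrinking width around f = 5 Hz.
# The area (total "amount") stays ~1 while the peak height blows up:
# the limiting object is the Dirac delta, a "point mass" in frequency.
freqs = np.linspace(0.0, 10.0, 100001)
df = freqs[1] - freqs[0]

for width in [1.0, 0.1, 0.01]:
    bump = np.where(np.abs(freqs - 5.0) < width / 2, 1.0 / width, 0.0)
    area = bump.sum() * df  # total "amount" of signal across all frequencies
    print(f"width={width}: peak height={bump.max():.1f}, area={area:.2f}")
```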

Is there any way to visualize orthogonal signals, or to understand them intuitively, instead of just saying that their dot product is zero? I’m studying the trigonometric Fourier series and having a hard time grasping how each multiple of the fundamental frequency acts like an orthogonal vector: when you add them together they don’t interfere, so you can add frequency components with the right amplitudes to the sum without correcting the already-computed amplitudes when a new term is added.
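One way to see the orthogonality numerically, as a minimal sketch (assuming numpy; the sampled dot product stands in for the integral over one period):

```python
import numpy as np

# Sample sin(t), sin(2t), sin(3t) over one fundamental period [0, 2*pi).
N = 1000
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
harmonics = [np.sin(k * t) for k in (1, 2, 3)]

# Different harmonics are orthogonal: their dot product is ~0,
# so each one carries information the others cannot "see".
print(abs(np.dot(harmonics[0], harmonics[1])))  # ~0

# Build a signal and recover each amplitude independently by projection,
# exactly like reading off components along orthogonal axes.
signal = 2.0 * harmonics[0] + 0.5 * harmonics[2]
for k, h in zip((1, 2, 3), harmonics):
    amp = 2.0 * np.dot(signal, h) / N  # projection / (energy of sin per period)
    print(f"sin({k}t) amplitude: {amp:.3f}")

# Adding a new term leaves the already-computed amplitudes untouched:
signal2 = signal + 0.25 * harmonics[1]
print(round(2.0 * np.dot(signal2, harmonics[0]) / N, 3))  # still 2.0
```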

Great explanation for something that is so seemingly complicated!

Worth the wait - only 22 years for me!

Keep it coming! Thanks!

It must have been hard work, and fun, putting in a lot of imagination to get this done. Great work.

Stephen,
Thanks for replying. I actually did recognize that e^0 is one. The integral becomes integral(1+sin(t)dt), which evaluates to t-cos(t). But, evaluating that from -inf to +inf blows up, which is no big surprise.

Dividing by ‘t’, as you mentioned, doesn’t quite get me there either.

The formula in the first bullet of section 2.4 in http://www.civilized.com/files/newfourier.pdf (see Gary Knott’s post above - thanks Gary) does work (divide by 1/p (the fundamental frequency) and adjust the limits of integration to be one cycle), but I don’t follow (yet, anyway, but I’m not giving up) how we can just replace the +/- infinities in the formal definition of the integral.

At least I finally got the answer I was looking for, but needless to say I would have failed the exam (which is why I like this blog in the first place ).

Glenn
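Glenn’s one-cycle recipe can be checked numerically. A minimal sketch (assuming numpy; the discrete mean over uniform samples stands in for the (1/T) integral over one period):

```python
import numpy as np

# Replace the +/- infinity limits with a single period and divide by the
# period T: the Fourier-series form of the coefficients for 1 + sin(t).
T = 2.0 * np.pi
t = np.linspace(0.0, T, 10000, endpoint=False)  # uniform samples of one cycle
f = 1.0 + np.sin(t)

a0 = np.mean(f)                    # (1/T) * integral of f(t) over one cycle
b1 = 2.0 * np.mean(f * np.sin(t))  # (2/T) * integral of f(t) sin(t)

print(round(a0, 6), round(b1, 6))  # 1.0 1.0: DC component and sin(t) amplitude
```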

I love the smoothie metaphor!

Hah, just a curious learner here. Negative values in time, for the signal you mean? In this case, if it’s an infinitely-repeating signal (that’s what the Fourier Transform assumes), then -1 means “1 second before the current cycle started, i.e. near the end of the previous cycle”. Sort of like “-1AM” might mean 11PM of the night before.

Hi Jose, thanks – that’s a really neat interpretation. Yep, it’s almost like a “dot product” where you are seeing how big an overlap you can get (and different phases will have different overlaps).

Hey, man, this is really great. I like to use DFTs for stochastic processes and look at their underlying trend (this is similar to harmonic regression, which is kinda’ cool). Thanks for this!
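Pulling a periodic trend out of a noisy record with the DFT, as described above, might look like this minimal sketch (assuming numpy; the 3-cycle trend and noise level are made up for illustration):

```python
import numpy as np

# A noisy record hiding a 3-cycles-per-record periodic trend.
rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
x = np.sin(2.0 * np.pi * 3 * t / n) + 0.3 * rng.standard_normal(n)

# The DFT concentrates the trend into one bin, far above the noise floor.
spectrum = np.abs(np.fft.rfft(x))
dominant = int(np.argmax(spectrum[1:]) + 1)  # skip the DC bin

print(f"dominant component: {dominant} cycles over the record")
```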

Wonderful!

Daniel,
Don’t get me wrong. It is not at all that I disagreed with your assertions, especially since they are formally correct in every way. They are also really cool and insightful, and your clarification is very adequate. The confusion centers around the traditional use of the delta function in the time domain as an infinitely high pulse of zero duration whose integral = 1. Also, invoking this interpretation involves understanding a lot of really difficult math involving distributions, etc. I am also a little perplexed by the use of “s”, usually reserved for the complex variable of the Laplace transform. My only intent is to help clarify the workings of the Fourier concept using simple, approachable math.