Hi, thanks for your videos. I have two questions; it would be nice if you could help me out with them.
1. When you explain the similarity of two functions, your drawings suggest that your similarity measure is based on the points the functions have in common, which I don't think is a good measure. Why not measure similarity by the number of operations required? E.g., in one of your examples you drew the exact same function but negated, and the similarity was almost zero, even though the only difference was multiplying by -1. I think of this like the edit-distance algorithm in computer science: given a word, how many operations does it take to turn it into a correct word (this works for misspellings). Isn't that a better similarity measure?
2. I can't stop thinking about this: can we consider the set of frequencies approximating a function to be like a set of prime numbers whose product generates a number? Is there some relation between those domains? Thanks again for your videos.
The Shannon-Nyquist explanation is pretty misleading here, I think. You only need a bit more than 2 samples per period to capture a 7 Hz (or any other) wave; it's about the sampling rate, not the absolute number of points. The only reason you need 15 points here is the one-second length of the waveform shown.
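The 14-vs-15 point situation can be reproduced in a few lines of numpy (the 7 Hz wave and one-second window are taken from the video's example; stretching the window while keeping 15 samples would bring the ambiguity right back, which supports the "rate, not count" reading):

```python
import numpy as np

f = 7.0                           # the video's 7 Hz example

# 14 evenly spaced samples over one second: the sine hits every zero
# crossing, so the samples are indistinguishable from a constant 0 signal
t14 = np.arange(14) / 14.0
print(np.allclose(np.sin(2 * np.pi * f * t14), 0.0))    # True

# 15 samples over the same second: the wave shows up again
t15 = np.arange(15) / 15.0
print(np.abs(np.sin(2 * np.pi * f * t15)).max() > 0.9)  # True
```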
To be honest, it was very difficult for me to understand everything the author was saying. I understood 3blue1brown's video on the FT on the first try; this video, by contrast, I had to rewatch about 4 times in total, and I'm still not totally getting it. The visualizations are not explained enough. For example, in the animation at the end (the part called "defining the true DFT"), why are there 2 points on the circle? Why does multiplying the base frequency samples by the analysis frequency matrix give one unit circle, and not a vector like before, when we were talking about pure cosine analysis signals (see the 10:10 timecode)? Why does the matrix have to be orthogonal in order to satisfy the first two properties of the Fourier transform we are looking for (namely, f is nonzero if y = a, else f = 0; see the timecode at 11:20)? I'm not a physics student, so what is angular frequency (the author mentions it at the end when explaining winding around the unit circle)? Many things in this video are not intuitive to me, although I tend to think I have the prerequisites for understanding this algorithm. The main thing I took away is the dot-product perspective on the part of the Fourier transform under the integral, which contrasts with 3b1b's perspective of thinking about it as finding a center of mass. You already made the video 30 minutes long; would it be worse to make it 45 but explain all the details?
Everyone in the comments is saying "Wow, it's really simple". I wonder how many of them actually understood everything the author was saying. I'm not trying to hate, I'm just disappointed.
It looks like at the end you just got lazy and stopped animating, and just read text, which is why the explanation gets harder to follow toward the end. What does it mean that "the second peak corresponds to the complex exponential that has an underlying frequency that is moving in the opposite direction and perfectly cancels out the imaginary component of the first complex exponential"? What does it mean for a frequency to be moving? How does it cancel out the imaginary component? I don't understand it.
I think I have an answer. In the last part of the video, the author combines the sampled analysis frequency points for both sine and cosine into a single matrix, so each element [i,j] of the matrix is a pair of the form (cos(j*i), sin(j*i)). NOTE: here 'i' is what the author refers to as the "frequency"; it is called that because its value determines how frequently the function repeats. Observe how, as we move down the rows, the number of cycles of the sine and cosine keeps increasing in the fixed range under consideration. Now, for all real numbers 'j', plotting pairs of this form as points on the Cartesian coordinate plane gives a circle. Depending on the points sampled in the matrix, we get the different bold points on this circle (sometimes 1, sometimes 2, etc.).
For the other question, first consider the following definition of orthogonality: two vectors A and B are orthogonal iff A·B = 0 (their dot product is 0). For the matrix to be orthogonal, the dot product of any two distinct rows must be zero. This follows from properties 1 and 2: take different rows of the analysis frequency matrix (i.e., different analysis frequencies) as the "base frequency" column and evaluate the equation using the properties we assumed for our transform.
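The orthogonality claim is easy to sanity-check in numpy (the 8-point size below is arbitrary; for complex rows the dot product conjugates one side):

```python
import numpy as np

N = 8                                   # small illustrative size
n = np.arange(N)
# DFT matrix: row k holds samples of analysis frequency k, i.e. e^{-2*pi*i*k*n/N}
F = np.exp(-2j * np.pi * np.outer(n, n) / N)

# Distinct rows are orthogonal; each row has squared norm N
print(abs(np.vdot(F[2], F[5])) < 1e-9)  # True: orthogonal
print(abs(np.vdot(F[3], F[3])))         # ~N = 8
```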
because in signal processing you often want to _change_ the signal in some way. You convert time domain to frequency domain, then do something to the frequencies (e.g. strengthen or weaken certain frequencies to your liking), but then how do you turn that new frequency information back into a time-domain signal?
You are right. For pure analysis this is not important, but for the DFT it is, as it makes sense mathematically and also represents the use cases, as JL pointed out, much better. Thanks for making me think about that (no irony!) 🤗
Imagine tasting a dish and being able to tell all of its ingredients. Now try to keep the same taste with only 5% of the ingredients available. That is roughly how JPEG works, using the DCT (a close relative of the DFT).
So if the DFT is a matrix multiplication, and the FFT is a quick way to evaluate the DFT, then can some form of divide-and-conquer algorithm be used to multiply general matrices? I would be interested to see how an FFT works in the context of this matrix representation of DFT. I've tried watching this channel's other video on FFT, but the polynomials lost me. I'd love to see a combination of this DFT video and that FFT video, focusing on signal processing and maybe showing the DFT matrix being partitioned into smaller and smaller pieces.
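Interestingly, divide-and-conquer does exist for general matrix multiplication (Strassen's algorithm), but the FFT's much bigger speedup comes from structure specific to the DFT matrix: splitting the input into even and odd samples relates the two halves by "twiddle factors". A minimal radix-2 sketch, assuming a power-of-2 length:

```python
import numpy as np

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of 2."""
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    even = fft(x[0::2])            # DFT of even-indexed samples
    odd = fft(x[1::2])             # DFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    # Combine the two half-size DFTs into the full one
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

x = np.random.rand(16)
print(np.allclose(fft(x), np.fft.fft(x)))  # True
```

Each level of recursion halves the problem, giving the familiar O(N log N) instead of the O(N²) of a plain matrix-vector multiply.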
@@martinkunev9911 I know, I mean that compared to previous videos this one seems more polished. For example, I think it's the first time I've seen that intro sequence.
Check out nordvpn.com/reducible to get the two year plan with an exclusive deal PLUS 1 bonus month free! It’s risk free with NordVPN’s 30 day money back guarantee!
PSA: NordVPN is all well and good the first 2-3 years, but if you don't cancel then and your credit card information is stored, you'll get billed for another year: $99 per year for the standard plan, which is far more expensive than the original price (I paid $107 for 3 years).
Hey, I really love your video, and the effort you put into it is truly heroic! I wanted to say thank you. Also, correct me if I'm wrong, but at 10:45, in the second "requirement", it should be Fj, not Fk, right?
How can I contact you? Email
I love the way you explain things please do a video on wavelets!
Why no new videos? Please make more videos
This video is worth more than just a like - both for the subject matter and the enthralling presentation.
I'd love to see a similar explanation for the Laplace or Z-transform. I've yet to see a bottom up explanation of these transforms from first principles.
agree
so true, that would be awesome!
Awesome
MATLAB actually did release a video about a similar way of deriving the DFT, and another video on how the Z-transform arises from the DFT. The animations are not as good as the ones here, but it is very informative and clear.
Unbelievable. I spent all day reading about DFT and thought this video popped up because of my search history. Seeing that it was released 5 mins ago blew my mind!
One of the best videos on this channel so far, concise and deliberate, very well done!
This is fantastic. While going through my CS curriculum at university, I felt like I got a good grasp of what the DFT accomplishes and how it's useful - I even used the fft functions in numpy like you showed. But I was always confused about why there were complex numbers in the outputs of that function, and nobody ever bothered to explain it to me. I never really grasped that the potential for signals to be out of phase with each other introduces an ambiguity that needs a solution. The way you walked through that made it all click for me after years of not fully understanding.
This is very well explained. As someone who studied computer science at university, I must admit it's a real shame they don't explain this topic as clearly as you do.
blah blah blah computer science blah blah blah. i am very smart.
@@theastuteangler Someone's jealous 😂
@@dl1083 as someone who is not jealous, I must admit, that I can speak authoritatively on jealousy. I am very smart.
@@theastuteangler oh yeah, smart guy? If Bob has 36 candy bars and eats 29 of them, what does he have left?
@@racefan7616 as a fat ass, I can confirm that Bob has a stomach ache. I am very smart.
Remarkable video. It’s hard enough to create animations and lectures that clearly explain a topic. You have managed to combine both and presented a clear picture of an algorithm that is quite complex to understand from the procedures alone. The harmony of precise animation and a trial-and-error approach to solving the problem has resulted in quite possibly the best video on the DFT.
I have had a mental block on the DFT for nine years, it is now lifted. Thank you oh so much!
Keep up the good work!
Wow, this video is pure gold! I've tried to wrap my head around the Discrete Fourier Transform many times, and this video made it so much clearer. Seriously, thanks a ton for this!
I love these videos about Fourier transforms
Nah yt algo did this guy dirty this video is so good
The best video ever for DFT, you earned a lifetime subscriber
1:35 into the video, he said " from first principles" 😢😢😢😢😢😢😢.
Just wonderful!!!😢😢
You just made me understand the DFT in like 5 minutes, when two semesters at a sandstone university could not.
Grandpa, your favourite youtuber uploaded a video!
I'm not that old xD
It's been so goddamn long
I don't care if it takes time to make such awesome quality.
That's what time travel is for! 🤣
“Just eat the damn orange already!”
Got to love the Computer Modern font that is used in the presentation!
This video was fantastic for helping me understand what a DFT matrix is in a visual way. Understanding it from the perspective of using the dot product to compare similarity between the analysis frequency signal and the target signal is really cool.
The sequel we all needed!
Absolutely amazing video! My BSc Maths project was about the Shannon-Nyquist theorem.
Never stop making videos my man you rock!
The transform-as-matrix-multiply makes sense now. I have been considering a transform to figure out the notes of music, and have always wondered if I could do a frequency analyzer that was spaced along the musical scale, rather than evenly in frequency. Your explanation makes that easy: just put samples of the frequencies I care about in the rows, and skip the rest. I can put in any frequency I want, not just ones that fit evenly into the time range of the input samples. So for instance, if analyzing a signal sampled at 100 Hz for 1 s, I would have 100 evenly spaced time samples, and the normal Fourier transform would do waves from -50 Hz to +49 Hz. I could instead put in any logarithmically scaled waves I wanted on the rows, like all the powers of the 12th root of 2.
It also shows why no one does that -- first, if the matrix isn't square, it isn't invertible, and therefore there is no inverse transform. I have to have as many frequencies as there are samples, or else information is lost. Second, I don't think that the rows would be orthogonal in this case, meaning that a pure tone, even at one of the frequencies I was selecting for, would show a nonzero coefficient in the other frequencies.
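Both points above check out numerically. A sketch of the non-square, log-spaced version (the 12 specific frequencies are made-up illustrative values, kept below the 50 Hz Nyquist limit):

```python
import numpy as np

fs, N = 100, 100                     # 1 s of signal at 100 Hz, as above
t = np.arange(N) / fs

# 12 log-spaced "musical" analysis frequencies (illustrative values only)
freqs = np.geomspace(5.0, 40.0, 12)
A = np.exp(-2j * np.pi * freqs[:, None] * t[None, :])   # 12 x 100: not square

# A pure tone at row 5's exact frequency still produces nonzero
# coefficients in every other row: the rows are not orthogonal
tone = np.cos(2 * np.pi * freqs[5] * t)
coeffs = np.abs(A @ tone)
```

Row 5 dominates, but every other row picks up leakage, and with only 12 rows for 100 samples no inverse transform exists; both effects are exactly what the comment predicts.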
25:00 shouldn't it be unitary (not orthogonal)?
Thanks
I've been working on a DIY audio workstation thing in Pure Data lately, and the one piece of black magic it's using so far is a noise cancellation patch from one of the example files. I know enough about that visual programming language to work the mono example into a stereo version, and so I'm using it to clean up the input on a stereo delay/looper. But yeah, I could not build that process from scratch.
Yesterday I was into algorithmic geometry and thought about the four quadrants in a coordinate system - the simple sign relations in each quadrant and how sine and cosine behave when your points are located in a given quadrant. I'm also a big fan of audio processing, so watching these videos about discrete points and their inverse relation in the time and frequency domains, and seeing similar patterns, is pure joy, happiness, and thankfulness. I love your videos because they are art. The art of describing things simply on the one hand and exactly on the other, without any need for interpretation, is so valuable. For me it opens the possibility for cross-thinking: how to apply this concept in quantum mechanics to transfer Newton's physical relations to a wave core moving in space relative to a constant observer, so as to change position without moving while moving. And now I see in your video that it could work via probability phase shifting to invent a quantum drive, in a relative position to a constant observer, reducing the error while moving to a constant zero in a linear way. Thanks a lot for these insights. More of that, please.
You user name sounds like you are also a fan of Ben Krasnow! 😄
@@harriehausenman8623 😁😁😁😁
Outstanding presentation.
Love this video! (As well as the complex pun 😂😂)
Although I'm a year 12 student, I found it simple enough to understand the whole video, while having enough places to stop and think on my own - for example, why did the matrix representation 'break'?
Maybe you could try to make a video on CQT as an extension to this video?🤔
True - this video has explained to me, a 9th grader, how to perform a DFT. It's just so simply described.
Great content - I wish I had access to this when I was in graduate school. It would have made learning DSP so much more enjoyable.
It is such a beautiful and elegant explanation!
22:25 a bit convoluted... I see what you did there :)
Unrelated, but at 21:50, if you look at the cos x and sin x graphs from the side, they look like sec x and csc x respectively.
Beautiful explanation and video! 🎉😊
My question is about your thought experiment at around 3:30. You say that we cannot use 14 evenly spaced sample points because they can be arranged as a constant signal (on the line y=0). But we can also arrange 15 evenly spaced points to give a constant signal (also on y=0), so why isn't 15 insufficient as well? Am I missing something here?
Another doubt: if we can represent a signal as multiple frequencies, why can't we represent sin(x) by multiple frequencies other than its own? Why must sin(x) be represented as all 0's except at its own frequency, instead of 0 there and some non-zero value at other frequencies?
Get on Nebula! Love your work
Finally another amazing video. I love this channel's videos. Keep the good work up. Thanks for your efforts.
Just a beautiful exposition. *chef's kiss*
I still can't imagine how much time you need to draw all these awesome animations.
Maybe you could consider making a simple video about how you make your videos?
I believe they are using manim, the Mathematical Animation Engine created and used by 3Blue1Brown.
The video is sooo cool!! Congrats! By the way, I'm wondering: at the beginning it is specified that the matrix should be invertible, but in fact the only requirement is that it be left-invertible. So does a similar process/algorithm exist for non-square matrices using the pseudo-inverse?
Thanks again for the amazing content!
Thanks!! Suggestion for a video: a meta video explaining how you code your videos. I find it incredibly impressive how you get the visual effects synchronized with the signals. I believe you must have programmed it, right???
I look forward to comprehending and grasping more concepts. Your explanation is super amazing; I really enjoyed learning. Visual memory is what makes it easy for us to remember and apply things.
That's a practical and theoretical description of the FT. Beautifully explained 👏
Thank you
Why do we use the Fourier transform in communications and the Laplace transform in control systems??
Thanks
This is pure quality content. I don't understand why it doesn't get more viewers 😅
It would be interesting to learn how this works 'in real time' - that is, how software manages to split different frequencies in a piece of audio that isn't just a stable set of sine waves. That's how it becomes useful day to day, since pretty much nothing in the real world is a stable set of sine waves. Does the software just split the audio into tiny chunks and do a simple FFT on each segment? If it's something more complicated, I'm sure it's very interesting, though I guess it also starts becoming more a problem of audio engineering than CS, and drifts away from the focus of this channel.
Yes - in real-time audio processing, you buffer your input signal into chunks. The length of these chunks corresponds to the time window you define in your planning stage, and it depends on how fast your hardware can process one chunk. Luckily, with the Fast Fourier Transform and its descendants, we have algorithms with good runtimes. This matters because the buffering time needs to be longer than the guaranteed processing time of the previous chunk. Also, since (as we saw in the video) the input and output vectors of the DFT have the same length, the resolution of your DFT corresponds directly to the length of your time signal. This can be mitigated with so-called "zero padding" of the input vector and calculating a longer DFT (some FFT lengths are faster to compute than others; in most algorithms, these are power-of-2 lengths).
YES!! THE LEGEND IS BACK!!!
Thanks!
One of the best videos I have ever seen on this kind of topic. Thanks a lot!
Great work man, we really appreciate it!
I have a question: what do you mean by the "fake" Fourier transform? The primary stage of the discrete method, right? I mean, without Wn?
Damn, what a brilliaaaaaant presentation - a complex concept made concrete!!
It is amazing! I can't believe how we process these signals in our brains.
Best explanation I could have wished for, thank you!
this is absolute art
This video is worth a few million views 💪🏻😎
I don't understand what you say about the imaginary parts canceling out. Why would we add the complex numbers of multiple frequencies together? Why would we want to cancel out the imaginary part, if it's used to get the magnitude of each frequency?
Is this still made with manim, or are you using new stuff? It looks beyond great, by the way! You've become one of my favorite channels.
Nice video, very informative.
Bro the video hasn't even been out for 3 minutes yet
@@zyansheep I know; I previewed the video and all its parts, and it has a lot of information.
@@bereck7735 ah ok
Nice comment, very informative.
😄
I have a question:
Why do we always use Fourier in communications and Laplace in control systems??
I was literally thinking of coming up with Fourier stuff myself just an hour ago. Miracles you love to see.
That sponsorship integration was slick. Great video!
A perfect Ed (Educational Ad) 😄
Really excellent presentation!
He's alive!!!!
Your voice has a pretty strong echo in this video. It sounds quite different from your previous videos.
Nice video, but it sounds like the audio is off; it's a little muffled and hard to hear on my headphones.
Great video! It's taken me an hour to get to minute 10:38 ^^ I think there's a small mistake here: shouldn't there be an arrow above a_j, since it's a vector?
I'm still somewhat lost on how a DFT can be used to analyze and e.g. equalize music, since we're not dealing with constant frequencies here. How do you expand this to a dynamically changing frequency domain?
Remember that each frequency bin has its own amplitude once you have a DFT. An equalizer is just a scale factor on each frequency bin - a "weight". So an equalizer can be made by doing an FFT, applying your weights, and doing an IFFT to get back to the data as voltage-vs-time samples. That can be run in real time using dedicated hardware or in software.
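That FFT → weights → IFFT recipe, sketched in numpy with a made-up two-tone signal and a made-up EQ curve:

```python
import numpy as np

fs = 8000                                    # assumed sample rate (Hz)
t = np.arange(fs) / fs                       # one second of samples
# Made-up test signal: a 200 Hz tone plus an unwanted 2000 Hz tone
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

X = np.fft.rfft(x)                           # forward FFT
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
weights = np.where(freqs < 1000, 1.0, 0.1)   # "EQ curve": cut highs to 10%
y = np.fft.irfft(X * weights, n=len(x))      # IFFT back to the time domain
```

In the output `y`, the 2000 Hz component comes out attenuated by the 0.1 weight while the 200 Hz tone passes through untouched.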
Very intuitive! Thanks!
Alternatively: What if instead of using pairs of cosines and sines at a particular frequency, you used a single sinusoid with 45-degree phase, so that it has a non-zero dot product with both sines and cosines which are matched with its frequency? The result is the discrete Hartley transform.
Ultimately it has the same problem. The cosine wave projects the result onto a 0° phase; the sine wave projects it onto a 90° phase, and your suggestion projects it onto a 45° phase. As it turns out, real waveforms have phases other than those specific 3 (or 6 if you include their opposites). Besides, sine and cosine are just two halves of a whole anyways. Just use the whole circle.
@@angeldude101 FYI, I didn't invent the discrete Hartley transform. It has the nice properties that for real-valued signals, you get real-valued output, and there is no redundancy in the results (unlike for the Fourier transform, where for real-valued signals the negative-frequency components are simply the complex conjugate of the positive-frequency components). Fast algorithms to calculate it generally lean on the FFT, though, so practically speaking it's more of a curiosity than anything else.
@@rherydrevins The Hartley transform being completely ℝeal actually made it very useful for what I was just doing, which was applying Fourier to a 2D image in-place with a shader, so the standard Fourier transform would've needed 6 components per pixel (2 per color channel) when I only have 4. (A quaternion Fourier transform on the other hand...)
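For anyone curious, the DHT is short enough to sketch directly. This assumes the usual cas-kernel definition from the literature (it's not anything shown in the video):

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform: H[k] = sum_n x[n] * cas(2*pi*k*n/N),
    where cas(t) = cos(t) + sin(t)."""
    n = len(x)
    k = np.arange(n)
    theta = 2 * np.pi * np.outer(k, k) / n
    return (np.cos(theta) + np.sin(theta)) @ x

x = np.random.default_rng(0).normal(size=8)   # any real signal
h = dht(x)                                    # real in, real out

# It relates to the DFT by H = Re(F) - Im(F), and it is its own inverse
# up to a factor of N: dht(dht(x)) / N recovers x.
```

The self-inverse property is what makes it attractive for in-place use: the same routine goes both directions.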
This is a great tutorial. My question is: how do you make such great animations? Do you use Python manim?
Oooo! Excellent. Very well done! Will send to my colleagues and students. Liked and Subscribed. Request for next time: STFT, windowing, and the MDCT! ;-)
Hi, thanks for your videos.
I have a doubt; it would be nice if you could help me out with it.
1. When you explain the similarity of two functions, your drawings make me realize that your similarity function is based on the points the functions have in common, which I don't think is a good measure. Why not measure similarity by the number of operations required to turn one function into the other? E.g., in one of your examples you drew the exact same function but negated, and the similarity was almost zero, even though the only difference was multiplying by -1. I think of this like the edit-distance algorithm in computer science: given a word, how many operations does it take to turn it into a correct word (which works for misspellings)? Wouldn't that be a better similarity measure?
2. I can't stop thinking about this: can we consider the set of frequencies used to approximate a function like a set of prime numbers whose product generates a number?
Is there some relation between those domains?
Thanks again for your videos.
didnt you go over this when you talked about the fft?
Such a shame this video doesn't have millions of views. I'm not kidding, we're looking at a masterpiece.
"is best understood through the lens of music"
me: synthwave lessgooooo
The Shannon-Nyquist explanation is pretty misleading here, I think. You only need slightly more than 2 samples per cycle to capture a 7 Hz (or any other frequency) wave. It's about the speed of sampling, not the number of points. The only reason you need 15 points here is the length of the waveform shown.
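A quick numpy sketch of that point (the numbers here are made up, not from the video): 16 samples per second over one second — only 16 points total — already captures a 7 Hz wave, because 16 Hz exceeds the 14 Hz Nyquist rate, while 12 samples per second does not:

```python
import numpy as np

fs = 16                           # sampling rate: anything above 2*7 = 14 Hz
n = 16                            # one second of samples -> 1 Hz bin spacing
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 7 * t)     # the 7 Hz wave

spectrum = np.abs(np.fft.rfft(x))
peak_hz = int(np.argmax(spectrum))   # the spectral peak lands on 7 Hz

# Undersample the same wave at 12 Hz and it aliases down to 12 - 7 = 5 Hz.
fs2, n2 = 12, 12
t2 = np.arange(n2) / fs2
alias_hz = int(np.argmax(np.abs(np.fft.rfft(np.cos(2 * np.pi * 7 * t2)))))
```

So the point count only matters insofar as it fixes the sampling rate over the observation window.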
why does the cosine have k/N?
To be honest, it was very difficult for me to understand everything the author was saying. I understood 3blue1brown's video on the FT on the first try; this video, on the contrary, I had to rewatch about 4 times in total, and I'm still not totally getting it. The visualizations are not explained enough. For example, in the animation at the end (the part called "defining the true DFT"), why are there 2 points on the circle? Why does multiplying the base frequency samples by the analysis frequency matrix result in one unit circle and not a vector, like it was before when we were talking about pure cosine analysis signals (see the 10:10 timecode)? Why does the matrix have to be orthogonal in order to satisfy the first two properties of the Fourier transform we are looking for (namely, f is nonzero if y = a, else f = 0; see the 11:20 timecode)? I'm not a physics student, so what is angular frequency (the author mentions it at the end when explaining winding around the unit circle)? Many things in this video are not intuitive to me, although I tend to think I have the prerequisites for understanding this algorithm, yeah..... The main thing I took away is the dot-product perspective on the part of the Fourier transform under the integral, which contrasts with 3b1b's perspective of thinking about it as finding a center of mass. You already made the video 30 minutes long. Would it be worse to make it 45 but explain all the details???
Everyone in the comments is saying "Wow, it's really simple." I wonder how many of them actually understood everything the author was saying. I'm not trying to hate, I'm just disappointed.
It looks like at the end you just got lazy, stopped animating, and just read text, and that's why understanding of what you are explaining decreases closer to the end. What does it mean that "the second peak corresponds to the complex exponential that has an underlying frequency that is moving in the opposite direction and perfectly cancels out the imaginary component of the first complex exponential"? What does it mean for a frequency to be moving? How does it cancel out the imaginary component? I don't understand it.
I think I have an answer.
In the last part of the video, the author combines the sampled analysis frequency points for both sine and cosine into a single matrix.
So each element [i, j] of the matrix is a pair of the form (cos(2πij/N), sin(2πij/N)).
NOTE: Here 'i' is what the author refers to as the "frequency". It is called that because its value determines how frequently the function repeats itself: observe how, as we move down the rows, the number of cycles of the sine and cosine keeps increasing within the fixed range under consideration.
Now, for all real numbers 'j', if we plot pairs of this form as points on the Cartesian coordinate plane, we get the circle.
Depending on the points sampled in the matrix, we will get different bold points on this circle (sometimes 1, sometimes 2, etc.).
For the other question, first consider the following definition of orthogonality:
two vectors A and B are orthogonal iff A·B = 0 (their dot product is 0).
Now, for the matrix to be orthogonal, the dot product of any 2 distinct rows must be zero.
This fact follows from properties 1 and 2.
It can be observed by taking different rows of the analysis frequency matrix (i.e., different analysis frequencies) as the "base frequency" column and then evaluating the equation using the assumed properties of our transform.
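That orthogonality claim is easy to check numerically. Here's a small sketch (my own, not from the video) showing that the combined (cos, sin) row pairs of an N = 8 analysis frequency matrix have zero dot product for distinct frequencies and dot product N for matching ones:

```python
import numpy as np

n = 8
k = np.arange(n)
theta = 2 * np.pi * np.outer(k, k) / n   # theta[f, j] = 2*pi*f*j / n

C = np.cos(theta)                        # row f: sampled cosine at frequency f
S = np.sin(theta)                        # row f: sampled sine at frequency f

# Pairwise dot products of the (cos, sin) pairs across rows:
# entry [a, b] = sum_j cos(2*pi*(a-b)*j/n), which is n if a == b, else 0.
gram = C @ C.T + S @ S.T
```

The Gram matrix coming out as N times the identity is exactly the orthogonality that makes the transform invertible.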
Wow, that was actually really simple
Why is it so important that it's invertible? We already have the inverse, it's the signal we're analysing
because in signal processing you often want to _change_ the signal in some way. You convert time domain to frequency domain, then do something to the frequencies (e.g. strengthen or weaken certain frequencies to your liking), but then how do you turn that new frequency information back into a time-domain signal?
@@japanada11 Ahh of course. That makes a lot of sense, thank you!
You are right. For pure analysis this is not important, but for the DFT it is, as it makes mathematical sense and also represents the use cases, as JL pointed out, much better. Thanks for making me think about that (no irony!) 🤗
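For completeness, the round trip really is exact up to floating-point error, which is what makes "edit in the frequency domain, then come back" workable. A tiny numpy sketch (the signal here is arbitrary):

```python
import numpy as np

x = np.random.default_rng(1).normal(size=64)   # any real signal

X = np.fft.fft(x)        # forward: time domain -> frequency domain
x_back = np.fft.ifft(X)  # inverse: frequency domain -> time domain

# x_back equals x up to floating-point error; the leftover imaginary
# part is pure numerical noise on the order of 1e-16.
```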
Dude... it's superb!
Bravo!
Sorry! But I'm going to download all your videos to watch offline, without being disturbed by ads. Forgive me 💜💜
Sir, great explanation!
Great work!
lol, now one of the little science youtube channels does a video on the DFT. thanks buddies.
Perfect
Brilliant
Maaaaan !
I love you
Now just cover windowing functions and my life will be complete.
*pretty please* 🥺
NTT when
like the FFT video about nuclear testing
loved this video!!!
Imagine tasting a dish and being able to tell all of its ingredients. Now try to keep the same taste with only 5% of the ingredients available.
That is roughly how JPEG works — using the DCT, a close relative of the DFT.
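The flavor of that analogy can be sketched in a few lines of numpy. This is a 1D toy using the FFT rather than JPEG's blockwise 2D DCT, with made-up frequencies, but the principle — keep only the strongest 5% of coefficients — is the same:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
t = np.arange(n) / n
# The "dish": a few strong ingredients plus a pinch of noise.
x = np.sin(2 * np.pi * 3 * t) + 0.4 * np.sin(2 * np.pi * 17 * t)
x += 0.05 * rng.normal(size=n)

X = np.fft.fft(x)
keep = int(0.05 * n)                      # keep only 5% of the coefficients
drop = np.argsort(np.abs(X))[:-keep]      # indices of everything else
X[drop] = 0                               # throw the rest away

x_approx = np.fft.ifft(X).real            # still "tastes" almost the same
rel_error = np.linalg.norm(x - x_approx) / np.linalg.norm(x)
```

Because the signal's energy is concentrated in a handful of frequency bins, discarding 95% of the coefficients costs only a few percent of relative error.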
So if the DFT is a matrix multiplication, and the FFT is a quick way to evaluate the DFT, then can some form of divide-and-conquer algorithm be used to multiply general matrices? I would be interested to see how an FFT works in the context of this matrix representation of DFT. I've tried watching this channel's other video on FFT, but the polynomials lost me. I'd love to see a combination of this DFT video and that FFT video, focusing on signal processing and maybe showing the DFT matrix being partitioned into smaller and smaller pieces.
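As a partial answer (my own sketch, not from the video): the FFT's divide-and-conquer works because the DFT matrix has special structure — permuting its columns into even and odd samples yields two half-size DFT matrices plus diagonal "twiddle" factors. General matrices lack that structure, so the trick does not speed up arbitrary matrix multiplication (Strassen-style algorithms do that by a different route). Here is the recursion next to the plain matrix form:

```python
import numpy as np

def dft_matrix(n):
    """The n x n DFT matrix: F[k, m] = exp(-2j*pi*k*m / n)."""
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n)

def fft_recursive(x):
    """Radix-2 Cooley-Tukey: split into even/odd samples, recurse, then
    recombine the two half-size DFTs with twiddle factors."""
    n = len(x)
    if n == 1:
        return x.astype(complex)
    even = fft_recursive(x[0::2])
    odd = fft_recursive(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd, even - twiddle * odd])

x = np.random.default_rng(3).normal(size=16)   # length must be a power of 2
```

Multiplying by `dft_matrix(16)` costs O(n²); the recursion computes the same 16 numbers in O(n log n) by reusing the two half-size results.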
Thank you so much. You did me such a huge favor by explaining these concepts intuitively. Keep up the great work!!!
+1 subscribe
Nice video !
thanks
Is this video particularly well animated or is it just me?
Not sure, but it looks like it uses 3blue1brown's manim library.
@@martinkunev9911 i know i mean compared to previous videos this one seems more polished. For example i think it's the first time i see that intro sequence
@@kngod5337 Yeah, that intro was sick! BESTAGONS FTW 😄
@Reducible You should like (❤) some comments. The algorithm really seems to 'like' that. 😉
To say it with Abed's words: "coolcoolcool"
this is magical
Really amazing ✴️