2 minutes in, I can already say: I like your style! Just one remark: I think it would be better to clarify that we can best describe nature with the help of differential equations, instead of saying that nature is governed by differential equations (even if it were true that we live in a simulation, this would be outside the scope of scientific thinking). This reminds me of talking about charged particles "feeling" a force and thereby intentionally reacting to it, or explaining Darwinism by active adaptation to a changing environment, or even "lonely atoms which want to form bonds to share electrons" to fulfill some godly given octet rule so that all of them can live a long and happy life, and every other thing our teachers taught us "although we should keep in mind that this is just a simplification" - although the concepts you are about to explain (from my point of view at this moment in time, at minute 2 in your video) exactly oppose these views, of course. But keep in mind: now the maths begins, and view counts will only drop from here on. To be clear: I really like your video! Narration, animation, general style: wonderful! And I had a look at your channel, and I will watch a few more of your videos.
Thank you for the feedback, yes I should have been more clear about that. I was mostly focused on the mathematics here, but of course, everything physical we describe with mathematics is just a model. I'll keep this mentality in mind for future videos.
I think metaphysics is a part of science. Many great scientific insights came from studying metaphysics, i.e. what is true, how much can we know of truth, and what is that truth composed of? And there may be a correspondence between our models and reality, i.e. reality = diff equations. It's interesting enough for you to bring up, and I think it helps strengthen our minds to talk about this... and it can give us insights into math from the point of view of physics and vice versa.
Nice video! However, at 9:38 it should be noted that the "order" of a method does not refer to the number of terms/stages (k1, k2), but rather to the truncation of the Taylor series. This means that a 2nd order method will exactly match the Taylor series up to the x^2/2! term within each time step, while the following terms (x^3/3!, ...) are not exact or are missing. For some fully implicit methods (Gauss-Legendre), the order can be two times the number of stages. (They're computationally expensive and I wouldn't recommend using them a lot, but they provide impressive results for large time steps.)
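One way to see that distinction concretely is to halve the step size and watch how the global error shrinks: a p-th order method's error drops by about 2^p, regardless of how many stages it uses. A small sketch (the test problem y' = -y and all parameters are my own choices):

```python
import math

def integrate(f, y0, t_end, dt, method):
    """Integrate y' = f(y) from t=0 to t_end with a fixed step dt."""
    y = y0
    for _ in range(round(t_end / dt)):
        if method == "euler":        # 1 stage, 1st order
            y += dt * f(y)
        else:                        # midpoint rule: 2 stages (k1, k2), 2nd order
            k1 = f(y)
            k2 = f(y + 0.5 * dt * k1)
            y += dt * k2
    return y

f = lambda y: -y                     # exact solution of y' = -y is e^(-t)
exact = math.exp(-1.0)
err = {m: [abs(integrate(f, 1.0, 1.0, dt, m) - exact) for dt in (0.1, 0.05)]
       for m in ("euler", "midpoint")}

# Halving dt cuts the 1st-order error by ~2 and the 2nd-order error by ~4:
print(err["euler"][0] / err["euler"][1])
print(err["midpoint"][0] / err["midpoint"][1])
```

The ratios, not the raw errors, are what identify the order.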
Thank you for mentioning this. I considered including it; however, since I didn't go through the Taylor series derivation, I thought it would just confuse most viewers.
Great video and VPython code! Thanks for sharing. Just one heads-up: I think there's a tiny blunder in one of the equations around 2:20. The velocity goes with sin(), not cos(). And I also vote for modelled, not governed ;)
Another method (or sub-method) that maybe deserved a mention is the so-called leapfrog integration, where the derivative for x_i is taken as an average of the previous tick's acceleration and a value extrapolated halfway towards the next one. It's sort of similar to RK2, but the samples are offset back by half a tick. It's relatively stable, and unlike RK2, you don't actually need to compute the derivative twice for each tick, as the first one is carried in from the previous iteration.
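A minimal sketch of the staggered scheme described above, for a unit spring (k = m = 1; the constants and variable names are my own choices):

```python
def accel(x):
    return -x                        # unit spring: a = -(k/m) x with k = m = 1

dt, x, v = 0.05, 1.0, 0.0

# Kick the velocity back by half a tick so x and v leapfrog each other:
v_half = v + 0.5 * dt * accel(x)
for _ in range(10_000):
    x += dt * v_half                 # drift: x uses the mid-tick velocity
    v_half += dt * accel(x)          # kick: one force evaluation per tick

# Resynchronise v with x before measuring the energy:
v_sync = v_half - 0.5 * dt * accel(x)
energy = 0.5 * v_sync**2 + 0.5 * x**2
print(abs(energy - 0.5))             # small and bounded: no secular drift
```

The velocity lives on the half-ticks, so it is resynchronised only to check the energy; note the single force evaluation per tick.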
Yes, this is also a good method. I just wanted to keep the video simple. Perhaps I should have teased the sequel video at the end. I'm getting a lot of responses about this, so I think I'm going to make a sequel to this video covering Symplectic Integrators, along with some others like leapfrog
Amazing video ! I loved your pace and your little jokes, it really helps staying engaged with your presentation. The visualisations are of course really good too. :)
Oh, this video is unfortunately a little bit late for me; I just had my numerical simulation exam last month ;) Still watching since it got recommended to me, and it's so fascinating how people came up with such things decades ago!!!
Very clear! Implicit Gaussian Collocation for the win though! (For numerically fully conserving skew-symmetric use cases.) EDIT: But you already mentioned Symplectic integrators in one of your responses.
@@copywright5635 Thanks! Subbed, as I'm looking forward to hearing more about (symplectic) integrators. I've used Gaussian Collocation for simulating with conservation accuracy down to machine precision (with the help of a skew-symmetric PDE form), which is cool, but I can't say I ever really understood what symplecticity means, nor how to derive, e.g., exponentially symplectic variants.
Nice! I coded RK4 and other methods in Python for a 3-body problem simulation. RK4 and Velocity Verlet were way more stable than Euler or even a 2nd order Taylor series when we consider conservation of energy. Thank you for the video!
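For reference, a generic step of the classical RK4 scheme mentioned here might look like this (the function names and the y' = y sanity check are my own choices):

```python
import math

def rk4_step(f, t, y, dt):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Sanity check on y' = y, whose exact solution at t = 1 is e:
t, y, dt = 0.0, 1.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: y, t, y, dt)
    t += dt
print(abs(y - math.e))               # tiny global error at this step size
```

The same step function works unchanged for vector-valued states (e.g. with numpy arrays), which is how it would be applied to a 3-body problem.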
1:09 this sound is equivalent to undertaker's unexpected entry during an ongoing wrestling match😅 haven't watched the complete video, I'll leave a 'review' comment after watching it but it seems video will be awesome and informative.
0:21 everything in nature isn't governed by differential equations (DEs); DEs describe nature, they don't govern it. I know it can be seen as a nitpick, but I felt that the semantic difference between 'governing' and 'describing' was big enough to warrant the comment. The rest of the video was great!
This is great. You seem to be getting a bit of flak from the very knowledgeable on the subject, but I just feel the video is directed at those like me, who *in theory* can follow the equations but have trouble with the 'where are we going with this' part. And in that respect the video succeeds with flying colours. So, thank you!
I saw one comment about how differential equations only "model reality and not govern it", and thought "hey, sometimes one or two random philosophical comments are good" and upvoted it. Then I realised half of this comment section is obsessed with that one phrasing for some reason 😄 Working with scientists, I've gotten used to the "this system is governed by" phrasing so much that to me it seems a weird thing to get hung up on. But I guess it's never a bad thing to get a reminder that the map is not the territory, or even a bunch of redundant reminders 😂
From my days in college, when I took Numerical Analysis, I had an idea. Do the iterative equation that, with each iteration, produces Output values relative to increasing Time… BUT… for each Output value, use it as a STARTING VALUE for the iterative equation method known as *Picard’s Method.* When Picard’s Method iterates, it STAYS ON a single Time value (i.e. as you iterate, Picard’s Method doesn’t “move you” along the Time axis.) I never got to try my idea, but I always wanted to. Overall, the idea is that you keep switching back ‘n forth between an Increasing Time iterative equation AND a Picard’s Method iterative equation. For each “invocation” of Picard’s Method, you perform enough iterations until a suitable degree of convergence is achieved.
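One way to realize that back-and-forth, as I read it: take an Euler step to move forward in time, then hold the new time fixed and let Picard-style fixed-point iterations refine the value there, here against the trapezoidal rule. This sketch is my own interpretation of the idea, not a standard named method:

```python
import math

def refined_step(f, t, y, dt, iters=8):
    """Move forward in time with an Euler guess, then freeze t+dt and run
    Picard-style fixed-point iterations against the trapezoidal rule:
        y1 <- y + dt/2 * (f(t, y) + f(t + dt, y1))"""
    f0 = f(t, y)
    y1 = y + dt * f0                 # the Increasing-Time part
    for _ in range(iters):           # the Picard part: time stays put
        y1 = y + 0.5 * dt * (f0 + f(t + dt, y1))
    return y1

f = lambda t, y: -y
y_euler = y_ref = 1.0
t, dt = 0.0, 0.1
for _ in range(10):
    y_euler += dt * f(t, y_euler)    # plain Euler, for comparison
    y_ref = refined_step(f, t, y_ref, dt)
    t += dt

exact = math.exp(-1.0)
print(abs(y_euler - exact))
print(abs(y_ref - exact))            # much smaller: the refinement pays off
```

The fixed-point loop converges here because dt/2 times the Lipschitz constant of f is well below 1; for stiff problems the inner solve would need Newton iterations instead.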
I recall Euler's method being introduced to me explicitly as a tool that does not produce good approximations, but rather convergent ones, which is useful for proving existence of actual solutions.
For some reason, there are a lot of weird comments only trying to correct something here and there, without recognizing the relevance of what you're doing with this video. After a semester of tedious differential equations classes without any proper visualization or illustrations to follow from my professor, or even the connection with physics, this video just recovered my passion for the subject. Thanks a lot for the video and for the effort!
I think I learned some things. sqrt(k/m) is an angle. And because it uses regular trig functions, and not hyperbolic functions, then it has an imaginary component to it. See, e^t is hyperbolic, e^(i*t) is trig. That runs counter to what I was thinking about massless stuff being trig. So, there must be a situation where both are true. Maybe the spring constant is the angular part.
Thanks for sharing your videos. I love to see how everyone has a different perspective. Would love to see more animations / videos on computational / numerical methods - difference equations, Runge-Kutta (regardless of pronunciation : ), Fourier transform. Check out GoldPlatedGoof ‘Fourier for the rest of us’. I bet you could do a very interesting video on his Dot Product / Fourier relationship. The ability to represent any curve with Fourier epicycles is truly mindblowing! Thanks, keep it up!
Hey, a minor correction: Matlab's ode45 uses a 4- and 5-STAGE Runge-Kutta method (as it uses 5 k_i), but the 5-stage method is still an order-4 method, because (I think) it can trace polynomials of degree 4 exactly, but not of degree 5.
Thank you so much! I hear about Runge-Kutta so often at the lab but never understood it until now! But it bothers me that pretty much the same math (at least in my brain) has so many different names: finite difference, Euler, Runge-Kutta, Taylor expansion... I am bad with names :')
It's similar, but there are differences, as I outlined in the video. The point of this was mostly to show why we use "RK4", and what it is, since that term is often thrown around without actually understanding how it works
@@copywright5635 Yeah a lot of people in my field just say "we use Runge-Kutta" then use ODE45 without thinking about what's behind the scene. The video is great!
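What's behind the scenes in an adaptive pair like ode45 is an embedded error estimate: each step is computed at two accuracies, and the disagreement drives the step size. ode45's real pair is Dormand-Prince; the sketch below imitates only the idea, using plain Euler with step doubling (all names and tolerances are my own choices):

```python
def euler(f, t, y, dt):
    return y + dt * f(t, y)

def adaptive_euler(f, t0, y0, t_end, tol=1e-6):
    """Crude adaptive stepping: compare one full step with two half steps,
    accept when they agree to tol, and grow/shrink dt accordingly."""
    t, y, dt = t0, y0, 1e-2
    while t < t_end:
        dt = min(dt, t_end - t)
        big = euler(f, t, y, dt)
        mid = euler(f, t, y, 0.5 * dt)
        half = euler(f, t + 0.5 * dt, mid, 0.5 * dt)
        if abs(half - big) < tol:    # error estimate from the disagreement
            t, y = t + dt, half      # accept the more accurate value
            dt *= 1.5                # and try a slightly bigger step
        else:
            dt *= 0.5                # reject and retry with a smaller step
    return y

result = adaptive_euler(lambda t, y: -y, 0.0, 1.0, 1.0)
print(result)                        # close to e^-1 = 0.3678794...
```

A real embedded pair gets the second estimate almost for free by reusing the same k_i, which is exactly why ode45 carries the extra stages.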
You might think my comment is mean or unsupportive of your work. But actually I really enjoy watching your video. The problem is you didn't answer your title. You didn't explain why the method is better than Euler. You didn't derive it or show any reason why it is more stable. You just showed graphically how it plays out but never actually proved anything about it
Thank you for the feedback. I think your concern comes from a differing opinion on what "why" means. I do concede that the title is not true in a strict mathematical sense (I didn't derive RK after all). However, I did provide reasoning for why RK2 (and by proxy RK4) seem to conserve energy better than an Euler method. I could have mentioned Symplectic Integrators, or even the Euler-Cromer method (which only changes one term, yet conserves energy for these problems). RK methods also don't inherently conserve energy, they simply converge much faster. I approached this video with the notion in mind that it is unclear why for certain systems, RK methods, even RK2, seem to simulate so much better than standard Euler Methods. I wanted to provide a sort of intuitive motivation, and I think I accomplished this. You are correct in saying I did not mathematically prove anything about Runge-Kutta methods. This was never the intention. I apologize if you found the title misleading. TL;DR, The "why" in the title is not the "why" of a mathematician. It's the "why" of an engineer or experimental physicist.
holy fuck. I hate math, and i hate physics. I dont get any enjoyment out of it, but i get enjoyment out of self improvement. I have a weird obsession with the term "level up" and anything related to it. You just earned my subscribe because this is good stuff, could help me level up
If one of the problems is unwanted gain or loss of energy as the approximation proceeds, are there methods that calculate the total energy initially and after each step and compensates for energy gain or loss as it goes?
It's clear to see that the higher order algorithms are more exact per timestep, but they're also more computationally expensive because they calculate multiple derivatives per timestep. It would've been nice to see how exact each algorithm is per derivative evaluation, because it might be computationally more efficient to use a smaller time interval with a lower order algorithm than to use a higher order algorithm.
Yes! I really wanted to incorporate this into the video, but I wanted to get it out before SoMEπ ended, so I ended up not incorporating it. I'm not sure where I'll do this, but maybe I'll make a video on my patreon or a second channel demonstrating this. Otherwise, a sequel covering symplectic integrators will be coming at some point!
I found this video fascinating, and very cool overall. [Subscribed] It is surprising how rare it is to see the words "Runge-Kutta" compared to Euler tho. However, with deep respect re: @0:20, NOTHING in nature is "governed" by differential equations, rather differential equations allow us to see how nature is governed. (nice bonus at the end!)
I would point out that even the "exact" answer is an approximation, because you have to approximate the value of sine or cosine in order to draw the graph or get a numerical result for the position of the object on the spring. Now I know that you can easily calculate the value of the trigonometric functions to far more accuracy than you need -- but those numbers are still calculated by an approximation algorithm.
Of course this is correct. I figured including this would be a bit off topic, as the approximation we're concerned with in the video is of the "initial value problem" type rather than for function values. Thank you for the comment though
For any linear system (such as the one modeled here) a discrete state-space model can be accurate even with a coarse time step. If you model a single iteration accurately, then you have a template that can be applied simply at each iteration.
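For the spring this template can even be written down exactly: one step of length dt is a rotation in scaled phase space, so the update below is an exact discrete state-space model (k = m = 1 and the deliberately coarse dt are my own choices):

```python
import math

k, m, dt = 1.0, 1.0, 0.5             # deliberately coarse time step
w = math.sqrt(k / m)
c, s = math.cos(w * dt), math.sin(w * dt)

# Exact one-step propagator for x'' = -(k/m) x:
#   x_new = c*x + (s/w)*v
#   v_new = -w*s*x + c*v
x, v = 1.0, 0.0
for _ in range(10_000):
    x, v = c * x + (s / w) * v, -w * s * x + c * v

energy = 0.5 * m * v * v + 0.5 * k * x * x
print(abs(energy - 0.5))             # round-off only, despite dt = 0.5
```

This only works because the system is linear: the matrix is computed once (effectively a matrix exponential) and then applied every step.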
Yes, this is of course true. I'm covering other systems in another video that should be out within ~1 month (maybe longer). That one will focus on symplectic integrators.
So, one gripe... Nature isn't GOVERNED by Differential Equations... it can be modeled by them... and really only a small, tiny portion of nature at that. It's not like Nature was consulting a Math text and decided hey... that sounds fun...
Couldn't you just make energy conservation explicit? That is, calculate the total kinetic + potential energy in the system at t0 and then adjust velocity or velocities at every subsequent step to force the total energy to match?
No, because energy conservation is fundamentally at odds with "velocity adjustments" (i.e. impulses.) In other words, they are what got you there in the first place. By the time you've noticed a physically incorrect circumstance, it's already too late to "fix" it in a physically correct way. For some very simple and often frictionless contexts, we actually do have exact solutions in terms of energy conservation, but for almost all Lagrangians we have only approximation methods, and we can only improve their accuracy by really including the higher-order terms.
I've been wondering for some time how to extend the time step in Velocity Verlet. Could you extend Velocity Verlet using the same logic as here? How about a VV extension which is higher order in force (VV kind of assumes that the force doesn't change during the time step)?
Another integration method that you can consider is the Verlet one: third-order error for position and second-order error for velocity. It is highly used in games, since we also care about object interactions, and with Verlet this is really easy. We can enforce non-penetration constraints without necessarily applying a force on those objects, just displacing their positions, and still not completely break the system. Obviously not physically correct, but robust and somewhat believable.
The "easiest" step up from explicit Euler and implicit Euler is "semi-implicit Euler", because you just need to swap a line of code and get 10x better results than both methods. Runge-Kutta 2+ is the step after that.
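A sketch of that one-line swap on a unit spring (the setup is my own choice): explicit Euler advances x with the old v, while semi-implicit Euler updates v first and then advances x with the new v.

```python
def final_energy(dt, steps, semi_implicit):
    x, v = 1.0, 0.0                  # unit spring: a = -x, true energy = 0.5
    for _ in range(steps):
        if semi_implicit:
            v += dt * -x             # update v first...
            x += dt * v              # ...then move x with the NEW v  <- the swap
        else:
            x_old = x
            x += dt * v              # explicit Euler: both updates
            v += dt * -x_old         # use only the OLD state
    return 0.5 * v * v + 0.5 * x * x

print(final_energy(0.05, 2000, False))  # explodes far above the true 0.5
print(final_energy(0.05, 2000, True))   # stays near 0.5
```

The swapped version is symplectic, which is why its energy error stays bounded instead of growing every step.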
Hm, could be a good topic, maybe as a sort of sequel to this video? I'm trying to not present topics in a super dry manner. I'd rather motivate them first, so perhaps continuing the conservation of Energy throughline (or Hamiltonian ig) would be good for that. Thanks for the suggestion.
Does the error of the 4th order Runge-Kutta always stay below 0.09 in this case? I notice that it never seems to go above that value, which seems quite surprising.
when i was in high school i asked my math teacher how to find the square root of a number and she told me to use a calculator. i told her i didn't have a calculator. she said to look in the back of the book. i realized she had no idea how to do more than basic math. i looked up how square roots are devised and i realized no one knows how tf to devise square roots. like, literally, there's no formula for it. all we can do is factor it closer and closer to the desired proximity, but we can never formulate an exact number. to me, this means that the universe is in no way mathematical, and that we are just approximating as best we can with the limited thought processing we have 🤔🤷♂️
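For what it's worth, that "closer and closer" procedure does have a compact form: Newton's (Heron's) iteration, which roughly doubles the number of correct digits per pass. A sketch (the function name and tolerance are my own choices):

```python
def my_sqrt(a, tol=1e-12):
    """Heron's / Newton's method for x^2 - a = 0, assuming a > 0:
    repeatedly average the guess x with a/x."""
    x = a if a > 1 else 1.0          # any positive starting guess works
    while abs(x * x - a) > tol * a:
        x = 0.5 * (x + a / x)        # one Newton step
    return x

print(my_sqrt(2.0))                  # 1.414213562..., to round-off precision
```

The result is still an approximation for irrational roots, as the comment says; the point is just that the approximating procedure is a well-understood formula, not guesswork.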
During university I did a project on halo orbits and used an RK of order 10. During the exam the professor asked why I didn't use a symplectic method (one that preserves the energy): RK still had an energy error on the order of the machine's precision and was much faster.
Yeah, RK is really good for a lot of things. Also, symplectic integrators aren't 100% accurate either. Though Velocity Verlet is faster than RK4 and is quite good as well.
Nope, though it does sound similar, I agree. All the music I use in this video is in the description. Even with classical music, I'm trying to only use stuff that's either public domain or Creative Commons licensed.
So is it correct to say that the Runge-Kutta method is essentially repeating the process of following a curve's tangent until another point on that curve, and taking that tangent too, repeat?
I think you have the right idea. If you want a more rigorous definition, here's an MIT article on that: web.mit.edu/10.001/Web/Course_Notes/Differential_Equations_Notes/node5.html A lot of approximation involves taking tangent lines (linearization), so it's a bit hard to distinguish between them if you think of it that way.
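The "follow the tangent, re-aim, repeat" picture is exactly Euler's method; what an RK method adds is blending several trial tangents inside a single step. A side-by-side sketch using Heun's RK2 (the y' = y comparison is my own choice):

```python
import math

def euler_step(f, t, y, dt):
    return y + dt * f(t, y)              # follow one tangent

def heun_step(f, t, y, dt):
    k1 = f(t, y)                         # tangent at the start
    k2 = f(t + dt, y + dt * k1)          # tangent where Euler would land
    return y + 0.5 * dt * (k1 + k2)      # follow their average instead

f = lambda t, y: y                       # exact solution at t = 1 is e
y_e = y_h = 1.0
for n in range(10):
    y_e = euler_step(f, n * 0.1, y_e, 0.1)
    y_h = heun_step(f, n * 0.1, y_h, 0.1)
print(abs(y_e - math.e))
print(abs(y_h - math.e))                 # far smaller: re-aiming helps a lot
```

So yes: both are "tangent following"; RK methods just sample extra trial tangents within the step before committing to a direction.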
please learn how to pronounce Runge. You don't have to be perfect, but at least try.
You know what's crazy. I was pronouncing it the correct way for my whole life. But I watched like one video in research that pronounced it the wrong way and for some reason decided that they were right and I was wrong. So I decided to intentionally change how I pronounced it to the wrong way.
I'm actually so dumb lmao
I've heard people pronounce it "rungie", so it's at least better than that. Haha
@@copywright5635 that's crazy; on a similar note I'm convinced that Italian physicists can't pronounce "Dirac" Correctly; no professor at my Uni called the poor guy right.
(Hasty generalization I know, kinda funny tho)
@@GeodesicBruh let me guess, they say it like dee-rac?
@@copywright5635 English: /ˈrʊŋəˈkʊtɑː/
My favorite quote
from engineering courses
is: everything is linear
if you watch it really closely.
valid
Chaos Theory: "Am I a joke to you?"
@@alexandervorgias4812 also valid
100% everything has a linear relationship. If it weren't for the additive identity, 1+0 = 1, no other field within mathematics would be possible.
@@alexandervorgias4812 Nope, even within Chaos Theory there are linear relationships, even if we don't have the ability to recognize them.
Runge-Kutta still leaks energy.
For the equations of motion, the one integration scheme to rule them all is Velocity-Verlet. That conserves energy _exactly_ (apart from floating-point round-off), is 2nd order, and is just as computationally cheap as Euler.
To make it easy for others: en.wikipedia.org/wiki/Verlet_integration#Velocity_Verlet
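A sketch of the linked scheme for a unit spring (the constants are my own choice); note that it needs only one new force evaluation per step, so it really is about as cheap as Euler. In practice the energy error oscillates in a small bounded band rather than being exactly zero:

```python
def accel(x):
    return -x                        # unit spring: k = m = 1, true energy = 0.5

dt, x, v = 0.05, 1.0, 0.0
a = accel(x)
for _ in range(10_000):
    x += dt * v + 0.5 * dt * dt * a  # position update
    a_new = accel(x)                 # the ONLY new force evaluation per step
    v += 0.5 * dt * (a + a_new)      # velocity sees old and new acceleration
    a = a_new

energy = 0.5 * v * v + 0.5 * x * x
print(abs(energy - 0.5))             # stays in a small band, no secular drift
```

Caching `a` between iterations is what keeps the cost at one force evaluation per step.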
Yes, I decided to leave off Symplectic Integrators. But of course, Verlet and Velocity Verlet would have been better options.
Seeing a lot of comments about this, I will probably do a sequel to this video in the future
I'm a big fan of Verlet systems too! So simple and so insanely stable.
@@akaHarvesteR Verlet was my professor at university for one year.
And RKF78 is nearly symplectic ;)
Great video! It's funny that 3Blue1Brown's manim environment became the official animation framework of RUclips math videos. Every time I see some video made by manim, before watching it, I know that it is gonna be a good one. It never disappoints.
I agree
This is a great intro to approximation schemes! Really well explained and I love how you included all the animations to visualize what happens for each method. It was very helpful to see how drastic an effect the lack of energy conservation can have.
Idk man seems youtube glitch or something but it's missing 'k' after 898 in your sub count hope youtube fix the issue soon, once again thanks for a great video
there’s a k now
It's fixed
16.9K, can’t wait to see it grow more!
22.3K on oct 12th 2024.
We can model the hidden "subscriber function". 🤓
You've really made numerical methods an interesting area for me.
Nature is not governed by equations. It is modelled in equations.
It's governed by equations.
@@Yuri_alphq I don't think this is a debate that we'll be able to settle in a RUclips comment section lol
@@Yuri_alphq prove it
I'd say equations are logical abstractions, they don't exist outside of human understanding, nature does.
Equations describe natural interactions in simple notation.
In short, it's too abstract to be considered above nature.
Actually nature doesn't give a fk about what we think. It has some rules/constants that cannot be crossed, and axioms such as discreteness and continuity, and it happens to be explained by differential equations because we cannot imagine a stopped frame of time, and we need to measure change in variables (differentials) to understand the universe
I really didn't expect to hear Gaspard de la Nuit :)
I do try to have good music lol
Nothing in the universe is "powered by differential equations"!
Differential equations CAN DESCRIBE a lot of things.
This is very important to understand.
this is true. the universe is powered by weed.
I think you meant to write a double negative sentence. Check again
I don't think it was meant literally.
CAN DESCRIBE imperfectly, I might add
Like he said, it's the language of nature
I love this kind of videos. I work on driveshafts, which are rotational mass-spring-damper systems to some degree. I loved doing the Taylor expansion and wrote some homework about how much accuracy you gain for increased compute, as you increased the order.
Hahaha, thanks for the Vsauce callback at 1:08
Manchurian candidate activation signal @ 1:08
❤
Wonderful perception ❤
Music choice is splendid. Love Ravel.
The RK algorithms are a very fascinating topic, and I've even implemented a few of them in a C++ application before specifically RK4. Yet, I still feel or think that FFTs and their inverses are some of the most interesting algorithms out there. Complex Vector and Field analysis, the Hamiltonians, especially the quaternions and octonions, and so much more are all interesting topics. We stand on the shoulder of giants! I truly enjoy videos like these, keep up the great work!
Thanks! FFTs are a topic that I think deserves a longer and more dedicated video, and I'm considering doing one on them.
Even for this, I barely scratched the surface of what RK is (didn't even derive RK4 lol); I just wanted to provide some intuition for it, both for the motivation and for why it would be more accurate than a simple Euler-Cromer method
@@copywright5635 Not exactly but similar to why Quaternions prove to work better than Euler Angles when performing rotations in 3D about the independent axes.
When using Euler Angles there is the phenomenon of producing Gimbal Lock within the use of the rotation matrices. This where one axis ends up being rotated onto another where they become coincidental and from there you end up losing a degree of freedom as the two axes are locked and you can no longer differentiate between them.
Quaternions helps to prevent this. Also, quaternions, even though the mathematical notations and expressions are fairly complex, implementing them in software is fairly trivial and they also have a very nice added benefit of being able to be calculated against other vectors and matrices as well as being converted to them. Because of this, they are also computationally cheap, very efficient and quite effective.
It's not exactly the same, but it goes to show where a variety of Euler methods although is simpler to digest and work out by hand, also has their shortcomings.
FFTs just provide a very good and efficient way to transpose from one system to another especially when working with wave patterns or anything with a frequency domain.
Without FFTs, audio processing either being a wave file, a midi file, or even an MP3 file wouldn't be as efficient as they are today. Audio even when compressed requires a lot of information and can be fairly computationally expensive. FFTs reduces that by a couple orders of magnitude. Instead of trying to perform 20 thousand sine or cosine function calls per second for a 20kHz frequency. We can just sample it and use the sample rate to reconstruct a good enough approximation of the individual waves. Well sort of, as that's the abridged version.
But yeah, I find it all to be very interesting and intriguing. I'm not just intrigued with this type of stuff either. I'm also intrigued by 3D Graphics Rendering, Game Engine - Physics Simulations, as well as actual CPU Hardware Design (ISA design). Then again, this gets into physics when you go beyond the logical device level and get into the actual structure of the transistors, resistors, etc. that are designed to manipulate electricity. And here we are again, with wave propagation. Right back to the use of wave functions and the power of FFTs lol!
I just like things related to engineering. Factorio, Satisfactory, Dyson Sphere Program, Oxygen Not Included, Planet Crafter, Mindustry, Turing Complete, etc. are all in my Steam Library and are my hobbies. And I'm no stranger to music, as I played the trumpet for close to 10 years back in my school days.
If you like this and Fourier methods, you should check out dispersion and dissipation analysis (sometimes referred to as "Fourier analysis") for ode solvers (and pde solvers too, but that's a bit more complex). It essentially allows someone to understand how a solver will respond to any initial condition of a linear problem.
@@jameswright4732 I've written a couple of simple ODEs.
You need to look into Clifford Algebras and Geometric Calculus. Try Macdonald's two books "Linear and Geometric Algebra" and "Vector and Geometric Calculus". Books are small and concise. He and a couple other people really blew the field open not long ago. Tensors and quaternions are subsets. Clifford makes them much simpler. Computer graphics is using this now, especially sims and games.
Omg, amazing. I'm a CFD engineer, and in my masters I learned all of this; we use it every day, but I never had a comprehensive intuition for the topic.
Excellent video! The visuals and the voice over were spot on :) you’ve made a great addition to the set of SoME videos.
amazing song choice, ondine is beautiful
Wow, I honestly had no idea these methods were even connected! Thank you for the straight-forward explanation and visualizations. Top notch content, Sir.... 👍
That honestly is a great video, keep up the good work! (and so cool there's ondine in the beginning of the video)
Thanks! [ And Scarbo at the end :) ]
dude i just found out about you i just want to say i loved this video ABSOLUTELY!!!
perfect timing of you to post this video this semester
Very nice and informative video, Sir !!! Looking forward for more of such content.
i always learn something new and interesting from your content!
2 minutes in, I can already say: I like your style! Just one remark: I think it would be better to clarify that we can best describe nature with the help of differential equations, instead of saying that nature is governed by differential equations (even if it were true that we live in a simulation, this would be outside the scope of scientific thinking). This reminds me of talking about charged particles "feeling" a force and thereby intentionally reacting to it, or explaining Darwinism by active adaptation to a changing environment, or even "lonely atoms which want to form bonds to share electrons" to fulfill some god-given octet rule so that all of them can live a long and happy life, and every other thing our teachers taught us "although we should keep in mind that this is just a simplification". The concepts you are about to explain (from my point of view at this moment in time, at minute 2 of your video) exactly oppose these views, of course. But keep in mind: now the maths begins, and view counts will only drop from here on.
To be clear: I really like your video! Narration, animation, general style: wonderful! And I had a look at your channel, and I will watch a few more of your videos.
Thank you for the feedback, yes I should have been more clear about that. I was mostly focused on the mathematics here, but of course, everything physical we describe with mathematics is just a model. I'll keep this mentality in mind for future videos.
i think metaphysics is a part of science. many great scientific insights came from studying metaphysics. i.e. what is true , how much can we know of truth, and what is that truth composed of?
and there may be a correspondence between our models and reality, i.e. reality = diff equations; it's interesting enough for you to bring up, and I think talking about this helps strengthen our minds...
and it can give us insights into math from a point of view of physics and vice versa.
Nice video!
However, at 9:38 it should be noted that the "order" of a method does not refer to the number of terms/stages (k1, k2), but rather to the truncation of the Taylor series. This means that a 2nd-order method will exactly match the Taylor series up to the x^2/2! term within each time step, while the following terms (x^3/3!, ...) are not exact or are missing.
For some fully implicit methods (Gauss-Legendre), the order can be twice the number of stages. (They're computationally expensive and I wouldn't recommend using them often, but they provide impressive results for large time steps.)
Thank you for mentioning this. I could have included it; however, since I didn't go through the Taylor series derivation, I thought it would just confuse most viewers.
Loving Gaspard as the background❤
Great video and VPython code! Thanks for sharing. Just one heads-up: I think there's a tiny blunder in one of the equations around 2:20. The velocity goes with sin(), not cos(). And I also vote for modelled, not governed ;)
Another method (or sub-method) that maybe deserved a mention is so-called leapfrog integration, where the average derivative for x is taken from an average of the previous tick's acceleration and a value extrapolated halfway towards the next one.
It's sort of similar to RK2, but the samples are offset back by one(half) of a tick.
It's relatively stable, and unlike RK2, you don't actually need to compute the derivative twice for each tick, as the first one is carried in from the previous iteration.
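A minimal sketch of the kick-drift-kick (synchronized) form of leapfrog, using the video's spring system F = -kx as the example (function and variable names are mine):

```python
def leapfrog(x, v, dt, steps, k=1.0, m=1.0):
    """Kick-drift-kick leapfrog for a spring force F = -k*x.
    Only ONE new force evaluation per step: the acceleration
    computed at the end of a step is reused at the start of the next."""
    a = -k * x / m
    for _ in range(steps):
        v_half = v + 0.5 * dt * a    # half kick
        x = x + dt * v_half          # full drift
        a = -k * x / m               # the single force evaluation per step
        v = v_half + 0.5 * dt * a    # half kick
    return x, v

# Unit oscillator started at x = 1, v = 0: total energy should stay near 0.5
x, v = leapfrog(x=1.0, v=0.0, dt=0.1, steps=1000)
```

As the thread notes, the energy error stays bounded and oscillates rather than drifting, which is the symplectic property.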
Yes, this is also a good method. I just wanted to keep the video simple. Perhaps I should have teased the sequel video at the end. I'm getting a lot of responses about this, so I think I'm going to make a sequel to this video covering Symplectic Integrators, along with some others like leapfrog
@@copywright5635 nice! Looking forward to that one! 😄
I wish you had made this video during my computational physics class 😅. Nevertheless, thanks for your clear explanation. Deserves more subs.👍
Amazing video ! I loved your pace and your little jokes, it really helps staying engaged with your presentation. The visualisations are of course really good too. :)
Somehow you managed to squeeze half of my Computational Physics 1 exam in a 13 minutes video
Oh this video is unfortunately a little bit late for me, just had my numerical simulation exam last month ;) still watching this video since it got recommended to me and its so fascinating how people came up with such things decades ago!!!
Thanks man, fantastic explanation! Looking forward to more videos of yours.
I loved the video. Numerical methods are a super interesting topic.
Very clear! Implicit Gaussian Collocation for the win though! (For numerically fully conserving skew-symmetric use cases.) EDIT: But you already mentioned Symplectic integrators in one of your responses.
Mhm, sequel video covering them coming soon! There's so many though haha, we'll see if I do end up including Gaussian Collocation
@@copywright5635 Thanks! Subbed, as I'm looking forward to hearing more about (symplectic) integrators. I've used Gaussian Collocation for simulating with conservation accuracy down to machine precision (with the help of a skew-symmetric PDE form), which is cool, but I can't say I ever really understood what symplecticity means, nor how to derive, e.g., exponentially symplectic variants.
3 mins in and it's already getting interesting; never disappoints
My favourite astrophysics professor taught us to use symplectic integrators for orbital mechanics because they explicitly conserve energy.
Leapfrog 🔛🔝
Thank you for making me sleepy goodnight
Great video! Just now I'm (trying to be) learning some numerical methods to solve math problems in the C language. Thanks!
Nice! I coded RK4 and other methods in Python for a 3-body problem simulation. RK4 and Velocity Verlet were way more stable than Euler or even a 2nd-order Taylor series when we consider conservation of energy. Thank you for the video!
As a Physics student, these videos are great motivators.
Glad to hear it!
1:09 this sound is equivalent to undertaker's unexpected entry during an ongoing wrestling match😅 haven't watched the complete video, I'll leave a 'review' comment after watching it but it seems video will be awesome and informative.
Incredible video!
Hey Vsauce, reference here!!!
Thank you for this beautiful explanation
0:21 everything in nature isn't governed by differential equations (DEs); DEs describe nature, they don't govern it.
I know it can be seen as a nitpick, but I felt that the semantic difference between 'governing' and 'describing' was big enough to warrant the comment. The rest of the video was great!
The treat was a flashbang right in the face at the end of the video
This is great. You seem to be getting a bit of flak from the very knowledgeable on the subject, but I just feel the video is directed at those like me, who *in theory* can follow the equations but have trouble with the 'where are we going with this' part. And in that respect the video succeeds with flying colours. So, thank you!
I saw one comment about how differential equations only "model reality and not govern it", and thought "hey, sometimes one or two random philosophical comments are good" and upvoted it. Then I realised half of this comment section is obsessed with that one phrasing for some reason 😄
Working with scientists, I've gotten used to the "this system is governed by" phrasing so much that to me it seems a weird thing to get hung up on. But I guess it's never a bad thing to get a reminder that the map is not the territory, or even a bunch of redundant reminders 😂
From my days in college, when I took Numerical Analysis, I had an idea. Do the iterative equation that, with each iteration, produces Output values relative to increasing Time… BUT… for each Output value, use it as a STARTING VALUE for the iterative equation method known as *Picard’s Method.* When Picard’s Method iterates, it STAYS ON a single Time value (i.e. as you iterate, Picard’s Method doesn’t “move you” along the Time axis.) I never got to try my idea, but I always wanted to. Overall, the idea is that you keep switching back ‘n forth between an Increasing Time iterative equation AND a Picard’s Method iterative equation. For each “invocation” of Picard’s Method, you perform enough iterations until a suitable degree of convergence is achieved.
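For what it's worth, alternating a forward time step with Picard-style fixed-point iteration that stays at the new time value is roughly how iterated implicit methods work. A hedged sketch of the idea (my own naming, using the trapezoidal rule as the equation being iterated; not something from the video):

```python
import math

def step_with_refinement(f, t, x, dt, iters=5):
    """One forward (Euler) step, then Picard-style fixed-point
    refinement of the trapezoidal rule. Time does NOT advance
    during the refinement iterations, as described above."""
    x_new = x + dt * f(t, x)              # increasing-time step (predictor)
    for _ in range(iters):                # stay at t + dt and iterate
        x_new = x + 0.5 * dt * (f(t, x) + f(t + dt, x_new))
    return x_new

# Example: dx/dt = -x, whose exact solution is exp(-t)
x, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    x = step_with_refinement(lambda t, x: -x, t, x, dt)
    t += dt
```

With enough inner iterations this converges to the implicit trapezoidal step, so the idea does buy accuracy over plain Euler.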
I recall Euler's method being introduced to me explicitly as a tool that does not produce good approximations, but rather convergent ones, which is useful for proving existence of actual solutions.
hm, well Euler's method can be convergent. However, as I showed in the video, for many systems errors will cause it to diverge quickly.
I independently reinvented RK2 in high school. Very simple idea.
Clearly explained. Thanks.
Make more videos like this, please!
For some reason, there are a lot of weird comments only trying to correct something here and there, without recognizing the relevance of what you're doing with this video.
After a semester of tedious differential equations classes without any proper visualizations or illustrations from my professor, or even the connection with physics, this video just recovered my passion for the subject. Thanks a lot for the video and for the effort!
@@carlosaquino4001 thank you for the kind comment!
0:28 and the way i wander through the vastness of space
I think I learned some things. sqrt(k/m) is an angle. And because it uses regular trig functions, and not hyperbolic functions, then it has an imaginary component to it. See, e^t is hyperbolic, e^(i*t) is trig. That runs counter to what I was thinking about massless stuff being trig. So, there must be a situation where both are true. Maybe the spring constant is the angular part.
The other end of the spring is fastened to a Hooke.
Thanks for sharing your videos. I love to see how everyone has a different perspective. Would love to see more animations / videos on computational / numerical methods - difference equations, Runge-Kutta (regardless of pronunciation : ), Fourier transform. Check out GoldPlatedGoof ‘Fourier for the rest of us’.
I bet you could do a very interesting video on his Dot Product / Fourier relationship. The ability to represent any curve with Fourier epicycles is truly mindblowing! Thanks, keep it up!
Hey, a minor correction, Matlab's ode45 uses a 4- and 5-STAGE Runge Kutta method (as it uses 5 k_i), but the 5-stage method is still an order 4 method, due to I think it being able to trace exactly polynomials of degree 4 but not 5.
Carl Runge was a German mathematician and not the founder of Kurt Cobain's music genre.
Thank you so much! I hear about Runge-Kutta so often at the lab but never understood it until now! But it bothers me that pretty much the same math (at least in my brain) has so many different names: finite difference, Euler, Runge-Kutta, Taylor expansion... I am bad with names :')
It's similar, but there are differences, as I outlined in the video. The point of this was mostly to show why we use "RK4", and what it is, since that term is often thrown around without actually understanding how it works
@@copywright5635 Yeah a lot of people in my field just say "we use Runge-Kutta" then use ODE45 without thinking about what's behind the scene. The video is great!
I would love to see similar coverage of simplectic and stiff methods. Say leapfrog and adams bash.
Check out the semi-implicit Euler method. It's especially important because it preserves energy very well for small enough regular time steps.
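A sketch of what that one-line difference looks like for the video's spring system (function names are mine):

```python
def explicit_euler_step(x, v, dt, k=1.0, m=1.0):
    x_new = x + dt * v                  # position uses the OLD velocity
    v_new = v + dt * (-k * x / m)
    return x_new, v_new

def semi_implicit_euler_step(x, v, dt, k=1.0, m=1.0):
    v_new = v + dt * (-k * x / m)       # update the velocity first...
    x_new = x + dt * v_new              # ...then use the NEW velocity
    return x_new, v_new

# Compare energies after many steps: explicit Euler's energy grows
# without bound, while semi-implicit Euler's oscillates near the truth.
x1, v1 = x2, v2 = 1.0, 0.0
for _ in range(1000):
    x1, v1 = explicit_euler_step(x1, v1, 0.05)
    x2, v2 = semi_implicit_euler_step(x2, v2, 0.05)

def energy(x, v):
    return 0.5 * v * v + 0.5 * x * x
```

The only change is which velocity the position update reads, yet semi-implicit Euler is symplectic and its energy error stays bounded.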
Runge = Run-ge.
Can you make a video on symplectic integrators?
You might think my comment is mean or unsupportive of your work, but actually I really enjoyed watching your video. The problem is you didn't answer your title: you didn't explain why the method is better than Euler. You didn't derive it or show any reason why it is more stable. You just showed graphically how it plays out, but never actually proved anything about it.
Thank you for the feedback.
I think your concern comes from a differing opinion on what "why" means. I do concede that the title is not true in a strict mathematical sense (I didn't derive RK after all).
However, I did provide reasoning for why RK2 (and by proxy RK4) seem to conserve energy better than an Euler method. I could have mentioned Symplectic Integrators, or even the Euler-Cromer method (which only changes one term, yet conserves energy for these problems). RK methods also don't inherently conserve energy, they simply converge much faster.
I approached this video with the notion in mind that it is unclear why for certain systems, RK methods, even RK2, seem to simulate so much better than standard Euler Methods. I wanted to provide a sort of intuitive motivation, and I think I accomplished this.
You are correct in saying I did not mathematically prove anything about Runge-Kutta methods. This was never the intention. I apologize if you found the title misleading.
TL;DR, The "why" in the title is not the "why" of a mathematician. It's the "why" of an engineer or experimental physicist.
holy fuck. I hate math, and I hate physics. I don't get any enjoyment out of them, but I do get enjoyment out of self-improvement. I have a weird obsession with the term "level up" and anything related to it.
You just earned my subscribe because this is good stuff, could help me level up
If one of the problems is unwanted gain or loss of energy as the approximation proceeds, are there methods that calculate the total energy initially and after each step and compensates for energy gain or loss as it goes?
Wish your channel existed more than a decade ago
It's clear to see that the higher-order algorithms are more exact per timestep, but they're also more computationally expensive because of calculating multiple derivatives per timestep. It would've been nice to see how exact each algorithm is per derivative evaluation, because it might be more computationally efficient to use a smaller time interval with a lower-order algorithm than to use a higher-order algorithm.
Hm, well I didn't show this directly. But notice that the Euler time step with dt = 0.02 is still worse than RK4 with a 0.1 time step
one nifty thing is you can use the difference between k2 and k3 to estimate the error and adapt your timestep dynamically.
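A sketch of that idea using the simplest embedded pair I know of (Heun-Euler: an order-2 result with an order-1 result computed from the same stages; the 0.9 safety factor and all names are my choices, not from the video):

```python
def adaptive_step(f, t, x, dt, tol=1e-6):
    """One adaptive step of the Heun-Euler embedded pair: the gap
    between the 1st- and 2nd-order results estimates the local
    error, which then drives the next step size."""
    k1 = f(t, x)
    k2 = f(t + dt, x + dt * k1)
    x_low = x + dt * k1                    # Euler (order 1)
    x_high = x + 0.5 * dt * (k1 + k2)      # Heun (order 2)
    err = abs(x_high - x_low)              # local error estimate
    # shrink the step when the error is large, grow it when small
    dt_next = 0.9 * dt * (tol / err) ** 0.5 if err > 0 else 2.0 * dt
    return x_high, dt_next, err

# dx/dt = -x from x = 1: the error estimate requests a smaller step
x_new, dt_next, err = adaptive_step(lambda t, x: -x, 0.0, 1.0, 0.1)
```

Production solvers like RK45 do exactly this with a higher-order pair and a richer step-size controller, but the mechanism is the same.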
Yes! I really wanted to incorporate this into the video, but I wanted to get it out before SoMEπ ended, so I ended up leaving it out. I'm not sure where I'll do this, but maybe I'll make a video on my Patreon or a second channel demonstrating this.
Otherwise, a sequel covering symplectic integrators will be coming at some point!
thank you!
I think the presentation went too fast in a few key moments, like when you define the implicit scheme and how to actually calculate it
@@sheevys thanks for the feedback. I’ll be more careful about that next time!
I found this video fascinating, and very cool overall. [Subscribed]
It is surprising how rare it is to see the words "Runge-Kutta" compared to Euler tho.
However, with deep respect re: @0:20, NOTHING in nature is "governed" by differential equations, rather differential equations allow us to see how nature is governed.
(nice bonus at the end!)
I would point out that even the "exact" answer is an approximation, because you have to approximate the value of sine or cosine in order to draw the graph or get a numerical result for the position of the object on the spring. Now I know that you can easily calculate the value of the trigonometric functions to far more accuracy than you need, but those numbers are still calculated by an approximation algorithm.
Of course this is correct. I figured including this would be a bit off topic, as the approximation we're concerned with in the video is of the "initial value problem" type rather than for function values. Thank you for the comment though
For any linear system (such as the one modeled here), a discrete state-space model can be accurate even with a coarse time step. If you model a single iteration accurately, then you have a template that can be applied simply at each iteration.
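For the spring system in the video, that one-iteration template is just a rotation in phase space; a sketch (my function name, with omega = sqrt(k/m)):

```python
import math

def exact_spring_step(x, v, dt, k=1.0, m=1.0):
    """Exact discrete state-space update for x'' = -(k/m) * x.
    Because one step is modeled exactly (a rotation of (x, v/w)
    in phase space), repeating it is exact for ANY step size,
    not just small ones."""
    w = math.sqrt(k / m)
    c, s = math.cos(w * dt), math.sin(w * dt)
    return c * x + (s / w) * v, -w * s * x + c * v

x, v = exact_spring_step(1.0, 0.0, 0.5)   # one coarse step of dt = 0.5
```

This matches the analytic solution x(t) = cos(wt) term for term, so the "error per step" is zero up to round-off, exactly as the comment suggests.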
yes this is of course true. I'm covering other systems in another video that should be out within ~1 month (maybe longer). That one will focus on symplectic integrators
So, one gripe... Nature isn't GOVERNED by differential equations... it can be modeled by them... and really only a small, tiny portion of nature. It's not like Nature was consulting a math text and decided, hey... that sounds fun...
ODE45 for life!
Would have been interesting to speak about leapfrog
I would say the language of Nature is conservation, expressed in mathematical terms with differential equations.
Couldn't you just make energy conservation explicit? That is, calculate the total kinetic + potential energy in the system at t0 and then adjust velocity or velocities at every subsequent step to force the total energy to match?
No, because energy conservation is fundamentally at odds with "velocity adjustments" (i.e. impulses.) In other words, they are what got you there in the first place. By the time you've noticed a physically incorrect circumstance, it's already too late to "fix" it in a physically correct way.
For some very simple and often frictionless contexts, we actually do have exact solutions in terms of energy conservation, but for almost all Lagrangians we have only approximation methods, and we can only improve their accuracy by really including the higher-order terms.
I've been wondering for some time how to extend time step in Velocity Verlet.
Could you extend Velocity Verlet using the same logic as here?
How about a VV extension which is higher order in force (VV kinda assumes that the force doesn't change during the time step)?
Another integration method that you can consider is the Verlet one: third-order error for position and second-order error for velocity. It is highly used in games, since we also care about object interactions, and with Verlet this is really easy. We can enforce non-penetration constraints without necessarily applying a force on those objects, just by displacing their positions, and still not completely break the system. Obviously not physically correct, but robust and somewhat believable.
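A toy sketch of that position-displacement trick (a ball dropped onto a floor; names and numbers are mine, not from the video):

```python
def verlet_with_floor(x, x_prev, dt, steps, g=-9.8, floor=0.0):
    """Position (Stormer) Verlet under gravity. Velocity is implicit
    in (x - x_prev), so a non-penetration constraint can be enforced
    by directly displacing the position, without applying a force."""
    for _ in range(steps):
        x_next = 2.0 * x - x_prev + g * dt * dt   # no explicit velocity
        x_prev, x = x, x_next
        if x < floor:        # constraint violated: just project the
            x = floor        # position back out (not physically exact)
    return x

x_final = verlet_with_floor(x=1.0, x_prev=1.0, dt=0.01, steps=2000)
```

Because velocity is stored implicitly in the last two positions, projecting the position also implicitly adjusts the velocity, which is why the system doesn't blow up, only loses a bit of energy at contacts.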
Thank you yes, I'm going to do a video soon on symplectic integrators
Edit: We were closer than OP
Runge-Cutta? We have been calling it Roongay Kootta damn thanks
The "easiest" step up from explicit Euler and implicit Euler is "semi-implicit Euler", because you just need to swap a line of code and you get 10x better results than with either method. Runge-Kutta 2+ is the step after that.
great!
How about symplectic integrators?
Hm, could be a good topic, maybe as a sort of sequel to this video?
I'm trying to not present topics in a super dry manner. I'd rather motivate them first, so perhaps continuing the conservation of Energy throughline (or Hamiltonian ig) would be good for that. Thanks for the suggestion.
Does the error of the 4th-order Runge-Kutta always stay below 0.09 in this case? I notice that it never seems to go above that value, which seems quite surprising.
when i was in high school i asked my math teacher how to find the square root of a number and she told me to use a calculator. i told her i didn't have a calculator. she said to look in the back of the book. i realized she had no idea how to do more than basic math. i looked up how square roots are devised and i realized no one knows how tf to devise square roots. like, literally, there's no formula for it. all we can do is factor it closer and closer to the desired proximity, but we can never formulate an exact number. to me, this means that the universe is in no way mathematical, and that we are just approximating as best we can with the limited thought processing we have 🤔🤷♂️
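Small caveat on the square-root part: there is a classic iteration for it (the Babylonian method, a special case of Newton's method), though it actually supports the commenter's larger point, since it too only approximates to a desired precision rather than producing an exact closed form:

```python
def my_sqrt(a, tol=1e-12):
    """Babylonian/Newton iteration for sqrt(a), a > 0: repeatedly
    average a guess x with a/x. Each pass roughly doubles the
    number of correct digits (quadratic convergence)."""
    x = a if a > 1 else 1.0              # any positive starting guess works
    while abs(x * x - a) > tol * a:
        x = 0.5 * (x + a / x)
    return x
```

A handful of iterations already reaches machine precision, which is essentially how calculators and math libraries do it.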
During university I did a project on halo orbits and used an RK of order 10. During the exam the professor asked why I didn't use a symplectic method (one that preserves energy): RK still had an energy error on the order of machine precision and was much faster.
yeah RK is really good for a lot of things. Also, symplectic integrators are also not 100% accurate anyways. Though, Velocity Verlet is faster than RK4 and is quite good as well
@@copywright5635 The symplectic method I tried was leapfrog, but second-derivative computations for gravity in a rotating system were quite heavy.
where'd you find that background song from dawg?
missing a delta t in the formula for k2 at 8:50?
Is that Rousseau’s piano? I swear it sounds just like his piano, I’m so used to his tuning
Nope. Though, it does sound similar I agree.
All the music I use in this video is in the description. Even with classical music I'm trying to only use stuff that's either public domain or Creative Commons licensed
Oof... if that pronunciation was engagement bait.... well, it worked..!! 😂
Runge -- rung uh.
You make awesome videos, like 3Blue1Brown's manim videos
Leapfrog KDK (or DKD) is generally a better overall pick in application, I think
So is it correct to say that the Runge-Kutta method is essentially repeating the process of following a curve's tangent until another point on that curve, taking that tangent too, and so on?
I think you have the right idea. If you want a more rigorous definition, here's an MIT article on that.
web.mit.edu/10.001/Web/Course_Notes/Differential_Equations_Notes/node5.html
A lot of approximation involves taking tangent lines (linearization), so it's a bit hard to distinguish between them if you think of it that way