Fully understand it now. Math shouldn't be about memorizing formulas. It should be about connecting the dots, observing patterns, and summarizing patterns into shortcuts called formulas, and enjoying the thinking that goes into that process.
Believe it or not, I have forgotten this method! 😆
Unbelievable for someone like you
I know right
Ah he's human😂😂
I was about to ask you to make video on this topic but I found your comment here..😂
You can really tell that there's a lot of thought put into these videos about what might be the best way of explaining something. I actually find your way of explaining stuff really helpful.
What a great video! Short and explains everything clearly. I cannot thank you enough my guy!
Having taken both differential equations and linear algebra a long time ago, I can see the link between the Wronskian and vectors. How I wish this young man had been my professor (in his previous life). We have the advantage of the internet, so it's much easier to learn this stuff.
I found myself confused by MIT OCW's explanation, but this cleared it right up. Lovely work!
This was super helpful! It made the connection between eigenvalue/eigenvector problems and solving differential equations much clearer.
what a god, you summarised a 30 min lecture in under 10 mins
This is so informative. Thank you for a comprehensive explanation.
This is exactly the video I was in need of. Thank you for a clear and concise explanation!
This is the only video I have watched that explains the concept behind solving systems of equations 😩. Thank you, Sir!
That was really clear. Good stuff.
This explanation is fantastic!!
Watching the eigenvalue formula show up was the craziest cameo.
Insanely well explained. Thank you!
This video is really helpful for me. I am so impressed by your way of getting to the solution.
This helps me so much. I have a test on linear algebra next week about eigenvalues and systems of differential equations. Thanks a bunch!
Man, I forgot my differential equations classes. Very cool method.
Sir, I really love you. I hope you will contribute to mathematics in a big way.
EXCELLENT VIDEO
Exactly what I was looking for. 👏
Bro has the best handwriting ever
This is really good. Just about to take my differential equations matrix methods portion test and this helped a lot with explaining the why and how. Thanks!
Whattttttt... you are really good at your math. I like your personality and the way you pause on the board and write on it. I have fully understood the concept. Thanks so much!
it is most definitely E I G E N V A LU E T I M E
Amazing and clear explanation! Thank you.
Thank you very Much
Great video , thank you
thank you so much
if my professor ever did this, i'd say "whoa... an easy day"
Loved it.
Does the superposition principle only work for homogeneous differential equations?
Yes. If we add two solutions to a nonhomogeneous differential equation, the sum will not be a solution. For the nonhomogeneous case, we first find a particular solution, then add the general solution to the corresponding homogeneous equation!
@@MuPrimeMath ah right. So it’s the same general solution principle thing as for differential equations that aren’t systems
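A quick numeric check of this point, using the scalar equation x' = x + 1 as a stand-in (my own example, not from the video): its solutions are x = C·e^t − 1, and the sum of two solutions fails the equation by a constant.

```python
import math

# scalar nonhomogeneous example: x' = x + 1, with solutions x = C*e^t - 1
def sol(C, t):
    return C * math.exp(t) - 1

t = 0.7
s = sol(1, t) + sol(2, t)       # sum of two solutions
ds = (1 + 2) * math.exp(t)      # exact derivative of that sum
print(ds - (s + 1))             # residual is 1, not 0: the sum is NOT a solution
```

The residual is exactly 1 here because each extra particular solution contributes one extra copy of the forcing term, which is why the nonhomogeneous case needs one particular solution plus the homogeneous general solution.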
Is there any further simplification than this? Hats off, bro!
Yes. To find the eigenvalues, you can use lambda = m +/- sqrt(m^2 - p), to more directly find them. This only works for a 2x2 matrix, and unfortunately no such equivalent trick works for a 3x3 or anything beyond.
The m is the mean of the two entries on the main diagonal, i.e. half the trace.
The p is the determinant, which equals the product of the two eigenvalues.
So for us:
m = 1.5
p = 2*1 - 3*4 = -10
1.5 +/- sqrt(1.5^2 - (-10)) = 5 and -2
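This shortcut is easy to sanity-check in a few lines of plain Python (my own sketch; the function name `eig2x2` is just for illustration):

```python
import math

def eig2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via lambda = m +/- sqrt(m^2 - p)."""
    m = (a + d) / 2           # mean of the main diagonal (half the trace)
    p = a * d - b * c         # determinant = product of the eigenvalues
    s = math.sqrt(m * m - p)  # assumes real eigenvalues, i.e. m^2 >= p
    return m + s, m - s

print(eig2x2(2, 3, 4, 1))  # (5.0, -2.0) for the matrix in the video
```

For complex eigenvalues (m² < p), the same formula works with `cmath.sqrt` in place of `math.sqrt`.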
That was amazingly easy to understand. Thank you very much!
I got the eigenvector associated with the eigenvalue -2 equal to (1, -4/3). Is this okay?
Yes. Any scalar multiple of an eigenvector is also an eigenvector with the same eigenvalue.
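This can be verified directly: multiplying A = [[2, 3], [4, 1]] by (1, -4/3), or by any scalar multiple of it, returns -2 times the same vector (a quick sketch in plain Python):

```python
# Check that (1, -4/3) and its scalar multiples are eigenvectors
# of A = [[2, 3], [4, 1]] with eigenvalue -2.
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[2, 3], [4, 1]]
for c in (1, -3, 0.5):                  # try several scalar multiples
    v = [c * 1, c * (-4 / 3)]
    Av = matvec(A, v)
    assert all(abs(Av[i] - (-2) * v[i]) < 1e-12 for i in range(2))
print("all scalar multiples satisfy A v = -2 v")
```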
3:48 I don't understand why you say that x' = e^(rt) but not x' = e^(At). Why does r = A? Or rather, why does e*I = A?
What's the name of this technique?
Great explanation!
Hi... I want you to add a video on solving systems of differential equations having complex roots using the matrix exponential form.
There will be a video where I solve a system with complex roots soon!
@@MuPrimeMath ...keep on going...👍👍👍
Thanks man
thank you
i love you thank you so much
But 'r' is actually 'A', so both are the same. How did you arrive at the eigenvalue equation?
Couldn't you just Laplace transform both equations at the start and get a regular system of equations?
Yes, that's another method for solving systems!
He specified the method in use.
Both are good, but the reason this can be more powerful is that computers can perform vector math easily (e.g., MATLAB), and that can be run on superclusters, graphics cards, or specialized processors like ASICs. Solving with Laplace can be done on a computer too, but it costs more CPU cycles and works less often, especially for complicated problems.
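As a sketch of the numeric route (my own hand-rolled RK4 in plain Python; real code would use a library integrator such as MATLAB's `ode45` or SciPy's `solve_ivp`): starting on the lambda = 5 eigenvector (1, 1), the integrated solution of this system should grow like e^{5t}.

```python
import math

# x' = 2x + 3y, y' = 4x + y, integrated with a classic 4th-order Runge-Kutta step
def f(s):
    x, y = s
    return (2 * x + 3 * y, 4 * x + y)

def rk4_step(s, h):
    x, y = s
    k1 = f(s)
    k2 = f((x + h / 2 * k1[0], y + h / 2 * k1[1]))
    k3 = f((x + h / 2 * k2[0], y + h / 2 * k2[1]))
    k4 = f((x + h * k3[0], y + h * k3[1]))
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# start on the lambda = 5 eigenvector (1, 1): exact solution is (e^{5t}, e^{5t})
s, h = (1.0, 1.0), 0.001
for _ in range(1000):                  # integrate to t = 1
    s = rk4_step(s, h)
print(s[0] / math.exp(5), s[1] / math.exp(5))  # both ratios very close to 1
```

The fact that the state stays on the eigenvector direction while growing by the factor e^{5t} is exactly the behavior the eigenvalue method predicts.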
awesome!!
wow sir WOOOOW !
interesting...
Nice!
👍👍👍👍...nice way...
It will take me a year to connect the dots😂🔫
❤️❤️❤️❤️❤️❤️❤️❤️
gas vid
In a Russian textbook I found the following method:
x' = 2x + 3y
y' = 4x + y
x' = 2x + 3y
ky' = 4kx + ky
x' + ky' = (2+4k)x + (3+k)y
x' + ky' = (2+4k)(x+(3+k)/(2+4k)y)
(3+k)/(2+4k) = k
3 + k = k(2 + 4k)
3 + k = 2k + 4k^2
4k^2 + k - 3 = 0
(4k - 3)(k + 1) = 0
x' - y' = -2x +2y
d(x - y)/dt = -2(x-y)
d(x - y)/(x-y) = -2dt
ln(x - y) = -2t+ln(C_{1})
x - y = C_{1}exp(-2t)
x' + 3/4y' = 5x + 15/4y
d(x + 3/4y)/dt = 5(x+3/4y)
d(x + 3/4y)/(x+3/4y) = 5dt
ln(x + 3/4y) = 5t + ln(C_{2})
x + 3/4y = C_{2}exp(5t)
x - y = C_{1}exp(-2t)
x + 3/4y = C_{2}exp(5t)
3/4x - 3/4y = 3/4C_{1}exp(-2t)
x + 3/4y = C_{2}exp(5t)
7/4x = 3/4C_{1}exp(-2t) + C_{2}exp(5t)
x = 3/7C_{1}exp(-2t) + 4/7C_{2}exp(5t)
x - y = C_{1}exp(-2t)
-(x + 3/4y = C_{2}exp(5t))
-7/4y = C_{1}exp(-2t) - C_{2}exp(5t)
y = -4/7C_{1}exp(-2t) + 4/7C_{2}exp(5t)
x = 3/7C_{1}exp(-2t) + 4/7C_{2}exp(5t)
y = -4/7C_{1}exp(-2t) + 4/7C_{2}exp(5t)
To generalize it to more equations we need k1, k2, ..., k_{n-1},
but problems may appear for repeated eigenvalues.
I don't speak Russian (when I went to school it wasn't taught, and my mother didn't want to teach me),
but this approach probably has something to do with eigenvalues and eigenvectors.
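The final answer above can be checked by substituting it back into the original system; a quick numeric check of the closed-form solution (my own sketch, with arbitrary constants):

```python
import math

C1, C2 = 1.3, -0.7  # arbitrary integration constants

def x(t):  return (3/7) * C1 * math.exp(-2*t) + (4/7) * C2 * math.exp(5*t)
def y(t):  return (-4/7) * C1 * math.exp(-2*t) + (4/7) * C2 * math.exp(5*t)

# exact derivatives of the closed-form solution
def xp(t): return -2 * (3/7) * C1 * math.exp(-2*t) + 5 * (4/7) * C2 * math.exp(5*t)
def yp(t): return -2 * (-4/7) * C1 * math.exp(-2*t) + 5 * (4/7) * C2 * math.exp(5*t)

# residuals of x' = 2x + 3y and y' = 4x + y should vanish at every t
for t in (0.0, 0.5, 1.0):
    assert abs(xp(t) - (2 * x(t) + 3 * y(t))) < 1e-9
    assert abs(yp(t) - (4 * x(t) + y(t))) < 1e-9
print("closed-form solution satisfies the system")
```

Note that -2 and 5, the growth rates that the k-substitution produced, are exactly the eigenvalues of [[2, 3], [4, 1]], which supports the guess that this method is eigendecomposition in disguise.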
great explanation!
thanks man