I think the simplest and most intuitive way to go about this is to convert the equation into a linear system of first-order equations. I say the simplest and most intuitive because first-order linear equations can be solved systematically, without guessing. Notice, for example, that y'' - y' - y = y'' - [φ + (-1/φ)]·y' + [φ·(-1/φ)]·y, which you already get from the characteristic equation. Then you can simply let y' - φ·y = z, hence z' + (1/φ)·z = 0. These are super straightforward to solve using the method of integrating factors. There is no need to guess the functions and start checking whether the roots are complex or repeated or whatever nonsense. It is just a straightforward algorithm. This is applicable to higher-order linear differential equations as well: as long as the characteristic polynomial can be factored, you can always turn the equation into a linear system, making the equation very straightforward to solve. You never have to guess anything. And if the polynomial cannot be factored because its roots are not expressible in radicals (as tends to be the case with polynomials of degree 5 or above), then you cannot write a closed-form formula for the solutions of the differential equation anyway, so none of the methods matter. This is why this method is so good: it works in every scenario where the equation can be solved, and it always works the same way. No need for a case-by-case analysis.
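For what it's worth, the two-step reduction described above can be sketched with sympy (a minimal illustration; the constant names A and B are mine):

```python
import sympy as sp

x, A, B = sp.symbols('x A B')
phi = (1 + sp.sqrt(5)) / 2  # golden ratio, root of the characteristic equation

# Step 1: z' + z/phi = 0. Integrating factor exp(x/phi) gives
# (z * exp(x/phi))' = 0, hence z = A * exp(-x/phi).
z = A * sp.exp(-x / phi)

# Step 2: y' - phi*y = z. Integrating factor exp(-phi*x) gives
# (y * exp(-phi*x))' = z * exp(-phi*x); integrate and solve for y.
y = sp.exp(phi * x) * (sp.integrate(z * sp.exp(-phi * x), x) + B)

# Check: y satisfies the original equation y'' - y' - y = 0
residual = sp.simplify(y.diff(x, 2) - y.diff(x) - y)
print(residual)
```

The residual simplifies to zero, confirming that two passes of the integrating-factor method recover the general solution.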
Nice!!!
Wow! This trick with y' - φ·y = z is really cool. It really is an art; I'd never have come up with it this way. But you're saying this is the straightforward solution, aren't you? I don't see how you can split the coefficients like this in the general case. In the general case, of course, an n-th order equation can easily be converted into a system of n first-order equations. But then you would need to diagonalize the matrix to "split" the system into separate equations, and you'd have to find the eigenvalues and eigenvectors for that. You see, you're back to the same "nonsense". In some cases, when the matrix cannot be diagonalized, you may need to compute the Jordan normal form and work with that thing... Do you really think that's simpler? To me, it's one order of complexity higher. And after this hard work you'll still find that the solutions are exponentials of x times your eigenvalues (or linear combinations of those); for an n-times repeated root, you get terms like x^(n-1) * exp(s*x). Well, actually, people like Euler observed all of this long ago and realized that the solution has this form regardless of the coefficients. So why reinvent the wheel? Just plug the exponential in and find the roots of the characteristic polynomial. Standard procedure, five minutes' work...
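The matrix route this comment describes can be sketched in a few lines (my own illustration, assuming numpy): rewrite y'' = y' + y as a first-order system v' = Mv with v = (y, y'), and the eigenvalues of the companion matrix M are exactly the characteristic roots.

```python
import numpy as np

# State vector v = (y, y'); then v' = M v, where the second row
# encodes y'' = 1*y + 1*y'.
M = np.array([[0.0, 1.0],
              [1.0, 1.0]])

eigenvalues = np.sort(np.linalg.eigvals(M))
print(eigenvalues)  # approximately [-0.618, 1.618], i.e. -1/phi and phi
```

The eigenvalues are (1 ± √5)/2, the same roots the characteristic polynomial r² - r - 1 = 0 gives directly, which is the commenter's point: the matrix machinery lands in the same place.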
Hey, what a coincidence, today Michael Penn also released a video that involved the use of this same differential equation.
I thought exactly the same
I don’t see it. Do you have a link or the name of the video?
@@SyberMath ruclips.net/video/KL1CX_OeL6M/видео.html
@@SyberMath there you go: ruclips.net/video/KL1CX_OeL6M/видео.html
I'm sort of relieved that, for once, you didn't approach this from some oddball direction that makes me go "now how did he even come up with that?" Mind you, I like the innovative approaches you take on these equations; I just don't always have it in me to be surprised and baffled.
Thank you!
Hey Mr. Syber, when are you gonna do a viwer-suggested series? I woud really look forward!
Good idea!
First case: y is exponential.
Let y = e^(ax).
So y' = a·e^(ax) and y'' = a²·e^(ax).
After substituting, you get the quadratic a² = a + 1,
so a = (1 ± √5)/2.
Second case: y is a polynomial.
Let d be the degree of y.
Then the degree of y' is d-1 and the degree of y'' is d-2.
So we'd need d-2 = deg(y' + y). But deg(y' + y) = max(d, d-1) = d, so we get -2 = 0, which is impossible. So y can't be a polynomial.
3rd case: y is logarithmic.
Let's assume that y = ln(ax^b)/ln(c).
So y' = [(abx^(b-1))·(1/(ax^b))]/ln(c) = (b/ln c)·x^(-1) = b/(ln c) · 1/x.
So y'' = -b/(ln c) · 1/x².
Let's call b/ln(c) = k.
Then -k/x² = k/x + ln(ax^b)/ln(c).
Multiplying through by ln(c): -b/x² = b/x + ln(ax^b),
so -b(x^(-2) + x^(-1)) = ln(ax^b).
But the left side is a rational function of x while the right side is logarithmic, so they can't be equal for all x. So y can't be logarithmic.
Last case: y is trigonometric.
Part 1: Let y = a·tan(bx^c) = a·sin(bx^c)/cos(bx^c).
Part 2: Let y = a·sin(bx^c).
Part 3: Let y = a·cos(bx^c).
I'll just give you the idea, because if I worked these out myself I'd probably make some errors.
Wow! Thanks!
I make mistakes, too! 😁
Solution:
y'' = y'+y |-y'-y ⟹
y''-y'-y = 0
This is a linear homogeneous differential equation of 2nd order.
Solution approach:
Inserting y=C*e^(rx) y’=r*C*e^(rx) y’’=r²*C*e^(rx) into the differential equation yields:
r²*C*e^(rx)-r*C*e^(rx)-C*e^(rx) = 0 |/(C*e^(rx)) ⇒
r²-r-1 = 0 |p-q formula ⇒
r1/2 = 1/2±√(1/4+1) = 1/2±1/2*√5 = (1±√5)/2 ⇒
r1 = (1+√5)/2 and r2 = (1-√5)/2 ⇒
y = C1*e^[(1+√5)/2*x] is a solution, and y = C2*e^[(1-√5)/2*x] with a different constant of integration is also a solution. And since the equation is linear and homogeneous, the sum of the two solutions is also a solution:
y = C1*e^[(1+√5)/2*x]+C2*e^[(1-√5)/2*x]
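The general solution above can be checked symbolically (a quick illustration with sympy):

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')
r1 = (1 + sp.sqrt(5)) / 2  # root from the p-q formula
r2 = (1 - sp.sqrt(5)) / 2  # the other root

# General solution as a linear combination of the two exponentials
y = C1 * sp.exp(r1 * x) + C2 * sp.exp(r2 * x)

# Substitute into y'' - y' - y; it should vanish identically
residual = sp.simplify(y.diff(x, 2) - y.diff(x) - y)
print(residual)
```

The residual is identically zero for any C1, C2, since each root satisfies r² - r - 1 = 0.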
Thanks for the video. Well... there's no magic here :) Linear differential equations with constant coefficients (both homogeneous and non-homogeneous) have well-known methods and theories for solving them. This one is the exponential ansatz (Euler's method), I guess. Just plug C*exp(s*x) (C ≠ 0, s complex) into y(x), group the terms, divide by C*exp(s*x) (this factor is never zero), and find the roots of the polynomial. If the roots are complex conjugates, use a linear combination corresponding to Euler's formula to arrive at sines and cosines as the real and imaginary parts. Multiple roots are more interesting, but there's a way to address that case too. I'd say it's trivial - no mathematical arts needed here :)
Np. That's right!
There is no need to even make it this complicated. Since this is a linear equation, it can be converted into two linear differential equations of order 1, each solvable using the method of integrating factors.
This man reeeally likes the golden ratio...
😜
Why not just take the Laplace transform of the entire ODE? It feels like the D-operator is doing something similar 😉
Absolutely!
@@SyberMath But we can't solve it using Laplace transform. Do you know why?
@@kumardigvijaymishra5945 Why not?
@@angelmendez-rivera351 Because we don't know the initial conditions. Though I believe you can assume the ICs to be zero, and then the method of solution is similar to the D-operator. 😀
@@kumardigvijaymishra5945 We don't need to know the initial conditions, we can just leave them as is, without plugging anything in
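The point in this reply can be sketched with symbolic initial conditions (my own illustration; the names y0 and yp0 stand for the unknown y(0) and y'(0)):

```python
import sympy as sp

s = sp.symbols('s')
y0, yp0, Y = sp.symbols('y0 yp0 Y')  # unknown ICs kept symbolic

# Transform y'' = y' + y term by term:
#   L{y''} = s^2*Y - s*y0 - yp0,   L{y'} = s*Y - y0,   L{y} = Y
eq = sp.Eq(s**2 * Y - s * y0 - yp0, (s * Y - y0) + Y)
Y_sol = sp.solve(eq, Y)[0]
print(Y_sol)

# The poles of Y(s) are the characteristic roots, so inverting the
# transform gives the same exponentials as the other methods.
poles = sp.solve(s**2 - s - 1, s)
print(poles)
```

Y(s) comes out as (s·y0 - y0 + yp0)/(s² - s - 1), with poles at (1 ± √5)/2, so the transform works fine with the initial conditions left as free parameters.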
Thanks
Np
Succinct and clear.
Sterling and unadulterated.
Thanks!
Good choice of words! I had to look up sterling
Sir, it's been months since your last calculus video :((
e^(phi x)
More generally, any linear combination of e^(phi x) and e^(x/phi).
@@bjornfeuerbacher5514 in fact e^(-x/phi) instead of e^(x/phi)
@@pageegap Thanks, I indeed forgot to consider that the second root is negative.
Not clear