Your knowledge of mathematical methods is good, but your sense of humor is the best!
Thank you! Glad you like it!
@@FacultyofKhan yoo, he has more humor than Kevin Hart
This is a great video! I appreciate that you explain seemingly simple ideas, just in case students are lost. You're awesome!
Thank you so much! Glad you liked my video!
this is a family show? yeah, my 8-year-old niece is interested in how Frobenius differs from Bessel
Hey, it's never too early for her to get started :P
Why can't she be interested?
@@jaredjones6570 because she would rather have a tea party with her stuffed animals
@@jaredjones6570 u
Man, 8 is too early to learn power series.
I liked the pace of this video. When I needed to double-check, I simply paused the video -- which is far easier than trying to skip ahead when the pace is too slow.
I really appreciate you taking the time to make that comment at 11:00
Thank you! Glad you found it useful. I usually try to address things that I think would be confusing to students beforehand so I don't get as many questions later on.
Yeah, that comment answered a question I didn't even know I had. You da man, Faculty Khan!
@@ozzyfromspace Exactly!
Best explanation I have ever seen of this method.
Wow, thanks!
Great video. After pausing the video and letting my brain catch up with you a couple times, this makes sense now!
Fantastic video! Much more detail than my math methods book provides.
Thank you!
Well explained. I could actually follow it, and it's been many years since I've studied this.
Underrated video. I wish I could like this another thousand times.
6:37 rap god
at 2x
@@abdulbasithashraf5480 lol, Eminem mode
Excellent video! If you can make this intuitive to my sleep-deprived brain, then you can teach anything. You cut right to the point and explain just enough background information to understand the process very well. Awesome video!
I appreciate your confidence in solving these mathematics problems. Thanks for your effort!
at 8:56, shouldn't the coefficient be 3 for r=1 rather than 1?
Right?? and -1 for r = 0.5
@@abelmedina7879 yes but for r=0.5, the coefficient is 1 methinks
@@abelmedina7879 also check the description, he has corrected that mistake there!
Great videos, I really find your videos more useful than my college profs'.
not mispronouncing or misspelling Fuch's Theorem because "it's a family show" lmao
anyone spending quality time with their family watching this video is a maniac and the situation should be investigated.
Anyways, great vid, I love the explanation and the bits of humor, you're helping make this degree marginally less painful!
Good sir, you are an absolute legend.
Hi Khan! Great vid. thanks so much. One question, when you said "the radius of convergence for the power series is at least 1" at 2:38, don't you mean "at most 1"?
Shoot. I was hoping part 2 would be up! Really good explanation, great job! One super-minor note to add: at 8:45, you say that substituting r=1 gets a coefficient of 0, and r=.5 also gives 0. It should be that r=1 --> coefficient of 3. You may want to add an annotation for other viewers; I got confused thinking I was looking at the wrong coefficient.
Thank you for the feedback! Part 2 (more on the Frobenius method + an intro to Bessel functions) should be up tomorrow. My schedule has been a bit more hectic than anticipated this term lol.
Also, you are correct about the coefficient of 3; thanks for pointing that out! I'll add an annotation, but it shouldn't change my main point that the coefficients of the later terms in the series will be non-zero. So luckily, the mistake wasn't huge haha.
Agree
Correct me if I am wrong, but you solved this question with Euler's method.
*Basically, if you can't write y in power series form, our point is singular.
*If you can also write down the indicial polynomial and solve it, the point is regular singular.
*You could put y = x^r and you could get the same solution. To use Frobenius, you should've got a recurrence relation.
Not exactly. In Euler's method, you're concerned with a particular class of ODEs (even more specific than the type you deal with via the Frobenius method). In fact, Euler's method is just a special case of the Frobenius method; the only difference is that you don't have an infinite series. In Euler's, you let y = (x-a)^r, where r is some power to be determined. In the Frobenius method, however, you let y = (x-a)^r*sum a_n*(x-a)^n.
I *could* have solved this example problem using Euler's method (in which case I would have simply omitted the summation term in y); you're correct. However, that wasn't the subject of this video. Now why didn't I choose a Frobenius method problem which did involve a recurrence relation (i.e. one which can't directly be solved using Euler's method)? Because it would have been more complicated algebraically so I chose a simpler example. Hope that helps!
P.S. In my Bessel function video, I apply the Frobenius method to an ODE which *does* involve recurrence relations. Link: ruclips.net/video/uLORiAWe63A/видео.html
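That relationship can be checked symbolically. Below is a minimal sketch (assuming Python with sympy; the generic Euler equation x^2 y'' + b x y' + c y = 0 is illustrative, not the video's example) showing that the Euler substitution y = x^r reduces the ODE to a polynomial in r, which is exactly the indicial equation the Frobenius ansatz would also produce:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
r, b, c = sp.symbols('r b c')

# Generic Euler (equidimensional) equation: x^2 y'' + b x y' + c y = 0
y = x**r
lhs = x**2 * sp.diff(y, x, 2) + b * x * sp.diff(y, x) + c * y

# Dividing out x^r leaves a pure polynomial in r: the indicial equation
indicial = sp.expand(sp.simplify(lhs / x**r))
print(indicial)  # r(r-1) + b*r + c, expanded
```

Solving that quadratic for r gives the exponents directly, with no recurrence relation, which is why the Euler case is the "series-free" special case of Frobenius.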
At 11:09, isn't a_1 raised to the power r-1, not r+1? Does that change anything in the results?
I didn't understand: at 9:11, if we choose the coefficient for n = 1 rather than n = 0, we will have different values of r. Then we can say a_0, a_2, ... are zero rather than a_1, a_2, ... being zero.
I'm new to this topic and I'm still lost on the later part at 7:50. I guess the point here is to determine the roots, right? Then recognize which case it is and obtain the general solution based on that case number.
This is an amazing explanation.
But for me:
First time I heard it: could not follow or get it at all!
then, I solved a few examples...
then, I listened to this again: now it's the best explanation that I can find anywhere!
Don't know if it is the lack of sleep but that family show joke got me good
Thanks for the video.
When taking derivatives of y, should the sum start from n=1 for y' and n=2 for y''?
It doesn't matter, since the n = 0 term is 0 for y' while the n=1 and n=0 terms are zero for y''. I just kept the n = 0 as my starting index to keep things easy for myself.
Love This Family SHOW!!
Learning how to solve series solutions about a regular point was an extra credit assignment because it wasn't covered in my class. Thank you, this was helpful. However, one difference between SS-OP and SS-RP that I couldn't explain was when we took the derivatives of the respective series substitutions (i.e. y = sum(n=0) c_n x...). Why did n increase in SS-OP but stay 0 in SS-RP?
Example:
SS-OP
y= Sum(n=0)...
y'= Sum(n=1)...
y''= Sum(n=2)...
SS-RP
y= Sum(n=0)...
y'= Sum(n=0)...
y''= Sum(n=0)...
Hi David,
Thank you for the kind feedback!
As for your question, the reason the index is increased in your SS-OP method is that when you take the first derivative, the index n=0 just gives you 0 (the derivative of a constant is zero). The convenient thing to do in this case is just have the series start at n=1. The same idea applies to the second derivative, since the second derivative of a1*x is zero, so we'd rather start the series at n=2.
For the SS-RP method that you mention, we don't change the starting point of the index n since the extra 'r' in x^(n+r) means that when we take derivatives, we don't just end up with a constant term whose derivative becomes zero due to the power 'r' there. So your original series, which was a0*x^r + a1*x^(r+1) + ... has a derivative of r*a0*x^(r-1)+(r+1)a1*x^r+..., which doesn't necessarily have a zero a0 term.
To keep things consistent, I usually start my index at zero even when I take derivatives in the SS-OP case. The first term or two are going to be zero anyway so why should I make things confusing? You'll see this if you take a look at my series solution method video: ruclips.net/video/c3XtwTsE7QY/видео.html
Hope that helped!
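The contrast in that reply is easy to verify term-by-term. A small sketch (assuming Python with sympy; a0 is just a symbolic coefficient) differentiating the n = 0 term of each kind of series:

```python
import sympy as sp

x, r, a0 = sp.symbols('x r a0')

# SS-OP: the n = 0 term of a plain power series is the constant a0*x^0,
# so its derivative vanishes and the differentiated sum can start at n = 1
plain_n0 = a0 * x**0
print(sp.diff(plain_n0, x))  # 0

# SS-RP (Frobenius): the n = 0 term is a0*x^r, whose derivative
# r*a0*x^(r-1) does NOT vanish, so the index must stay at n = 0
frobenius_n0 = a0 * x**r
print(sp.diff(frobenius_n0, x))
```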
Family show!
Taylor Series are like that annoying friend everyone hates to spend time with but is invited to all the parties anyway because they give good presents
Have you seen Laurent series?
@@bebarshossny5148 I wish I could say I haven't
Hi, great video! But quick question about the proposed series solutions...shouldn't the differentiated series solutions start at n=1 and n=2 instead of n=0? Thanks!
Thank you for the feedback!
I think you're confusing the Frobenius method solutions with regular series solutions:
mathworld.wolfram.com/FrobeniusMethod.html
In the Frobenius method, the derivatives both start at n=0 because of the extra 'r' in the power. The n=0 term here doesn't simply go away due to differentiation, as it would in a regular series solution (without the r).
Still, even in a regular series solution, the 1st derivative would vanish at n=0 while the 2nd derivative would vanish at n=0 and n=1, so it doesn't matter if we start the series at n=0 in either case, since those early terms are zero anyway.
Hope you found this clarification helpful and feel free to ask more questions!
Hello, I have a question: at 06:32, shouldn't we substitute the y(x) written in power series form into the ODE we started with, and not the one divided by (x-1)^2? Or does it not matter? If it doesn't matter, why not? We would have (x-1)^(n+r) instead of (x-1)^(n+r-2).
I understand the method, but why is it a problem that the radius of convergence has a lower bound? Wouldn't the problem be that it has an upper bound?
Very well explained. Thank you.
Thank you! Glad you liked it!
You are the best instructor!
when you say "having a radius of convergence of *at least* 1", what you mean is that the radius of convergence can *at most* be 1, right?
I'm confused about this too
I have a question: if x=0 is an ordinary point, can the Frobenius method still be applied (i.e. a solution of the form y = sum of A_r x^(m-r))? I saw in a textbook that the Legendre ODE is solved this way; not sure why. Thanks for the great tuts!
No problem, thank you for the feedback!
For your question, yes you can still use the Frobenius Method in your case; it's just that r will be zero from the indicial equation. The Legendre ODE can be solved without using the Frobenius Method, and just with regular series solutions. I have a video on it if you'd like to check it out: ruclips.net/video/3e5BUrtUKZc/видео.html
Not sure I understand your explanation from 4:00 to 4:20...
Ok, so here's the ODE we're trying to solve by the Frobenius Method:
y" - (2/x)y' + (1/x^2)y = 0, if I use y = sum a_n(x-1)^n, my solution y would only be valid from (0,2), since I'm expanding about x = 1 and since the nearest singular point to x = 1 is x = 0, the x = 0 will limit my interval of convergence to just (0,2).
However, if I use the Frobenius Method: y = sum a_n x^(n+r), the singularity at x = 0 gets masked and since it's the only singularity, my y is now valid from (-inf,inf), which makes the Frobenius Method a much better alternative to the regular series solution method. In other words, using the Frobenius Method to expand about x = 0 causes the effect of the singularity to vanish.
Hope that helps!
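As a sanity check on that reply, here's a sketch (assuming Python with sympy) that substitutes the Euler ansatz y = x^r into the ODE as quoted above and reads off the indicial equation and its roots; the exponents come out as (3 ± √5)/2 for this particular equation, so treat the numbers as illustrative of the procedure rather than of the video's timestamps:

```python
import sympy as sp

x, r = sp.symbols('x r', positive=True)
y = x**r

# ODE quoted above: y'' - (2/x) y' + (1/x^2) y = 0
lhs = sp.diff(y, x, 2) - (2 / x) * sp.diff(y, x) + y / x**2

# Factoring out x^(r-2) leaves the indicial polynomial
indicial = sp.expand(sp.simplify(lhs / x**(r - 2)))
print(indicial)               # r**2 - 3*r + 1
print(sp.solve(indicial, r))  # the two exponents, r = (3 ± sqrt(5))/2
```

Each root r gives a basis solution x^r directly, with no series terms left over, which is the "masking" of the x = 0 singularity described above.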
Great video, thanks! I have one question: for your general solution you took constants c1 and c2, yet in the separate solutions you used a0. If simply adding up solutions, can you assume that c1=c2=a0? Or does the a0 constant have nothing to do with the general solution? Thanks.
Thank you for the kind feedback Robin!
In this example, I'm not simply adding up the solutions. I'm taking their linear combinations. Basically, if you're solving a second-order linear ordinary differential equation, you need two linearly independent solutions (i.e. they aren't constant multiples of each other) as your 'basis' solutions. Taking their linear combination means that I multiply the first basis solution (y1) by c1, and the second basis solution (y2) by c2. So in my general solution, I have y_general = c1*y1+c2*y2. My a0 here is just a separate constant and I can technically just absorb it into c1 and c2 (c1*a0 is another constant, and so is c2*a0).
Now that I've gone through the setup of the general solution, I can answer your question. In this case, I cannot assume that c1=c2=a0, because c1 and c2 are arbitrary constants. We have two constants because the differential equation is second-order (i.e. we need to integrate twice). They *might* be equal, but that's not necessarily true. We need two initial/boundary conditions to determine them, and I didn't give any initial/boundary conditions in this problem, so we can't make any conclusions about them right now.
So basically, a0 has little to do with the general solution for this particular example, since it's just a constant that would get absorbed into two different constants c1 and c2. Hope that helped! If you have any more questions, just ask!
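The superposition point in that reply can be illustrated with a toy check (a sketch assuming Python with sympy; the equation x^2 y'' - 4x y' + 6y = 0 and its basis solutions x^2 and x^3 are hypothetical stand-ins, not the video's example):

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

def lhs(y):
    # Toy second-order linear homogeneous ODE: x^2 y'' - 4 x y' + 6 y = 0
    return x**2 * sp.diff(y, x, 2) - 4 * x * sp.diff(y, x) + 6 * y

# Two linearly independent basis solutions
y1, y2 = x**2, x**3
assert sp.simplify(lhs(y1)) == 0
assert sp.simplify(lhs(y2)) == 0

# Any linear combination c1*y1 + c2*y2 is again a solution, so a constant
# like a0 just gets absorbed into the arbitrary constants c1 and c2
print(sp.simplify(lhs(c1 * y1 + c2 * y2)))  # 0
```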
Thanks for the quick response! Yes, I see now that you've explained it. I know about the linear combination thing but didn't think about the constants c1 and c2 representing that it actually is a linear combination (in general). I got confused because c1 and c2 just took the spot of a0 and I immediately assumed it had to be equal to a0. But absorbing a0 into the constants makes quite a lot of sense, I just didn't think about it (even though I've seen it before). Thanks a lot, subbed :)
Glad I could help, and thank you!
Amazing....out of words
Great video! It helps me a lot ❤
Could you possibly work out the other two situations, for when the indicial equation gives the other two possibilities for the r's?
This is awesome, thank you!
Shouldn't the indices for y' and y" start from n=1 and n=2 respectively? Other sources seem to have that but yours doesn't. Why is that?
The 1st derivative would vanish at n=0 while the 2nd derivative would vanish at n=0 and n=1, so it doesn't matter if we start the series at n=0 in either case, since those early terms are zero anyway. So it doesn't make a difference if you start y' at n = 0 or n = 1 since the n = 0 term is zero anyway, and it doesn't make a difference if you start y'' at n = 2 or n = 0, since the n=0 and n=1 terms are zero anyway.
Then why did we expand the solution y(x) about 0, and not 1 or -1, in Legendre's equation to mask the singularity, so that the radius of convergence of y(x) would be at least equal to 2?
Watching this video is a frobeni-must
Do we always solve ODE using the Frobenius method around the regular singular point?
Beautifully explained. Keep it up!
Thank you! Glad you liked it!
Can you please share the names of books which you might have followed/liked for studying ODEs and PDEs? I'm a grad student, and I'll be taking a course on 'Advanced Engineering Mathematics' at my university. It covers a lot of ground, from Complex Analysis to Linear Algebra to ODEs, PDEs, and Solution by Green's Functions.
For the complex analysis part, I'm using (a) 'Complex Analysis' by Ahlfors and (b) Priestley's 'Introduction to Complex Analysis', and Sheldon Axler's 'Linear Algebra Done Right' for linear algebra. Can you please suggest a text for the ODE and PDE part which is somewhat like (at least in spirit) Axler's text for LA?
Right now I'm using Kreyszig's 'Advanced Engineering Mathematics'. It's an excellent text but it's not rigorous enough for a graduate-level course, and Arfken's 'Mathematical Methods for Physicists'. It's not a very good text for first-time learners, but unfortunately, that's the level at which the course is pitched. So I first watch your lecture and then supplement it with this text. Not a very efficient way to go about it, since I'm also taking three more courses from two other departments. So a good text for ODEs and PDEs would be a huge help. Kindly suggest.
Thanks for the video, but I am stuck on problems that have extra terms outside the summation, produced while uniting the exponents of x and the summation limits. In your problem those terms went to 0 so it was clear, but I don't know what to do when those terms don't go to 0.
Thanks, you cleared it!
Here, factoring out (x-1) is easy because in each term the power of the denominator and the derivative order sum to the constant 2. What if they don't?
Hi, I would like to know, please:
1) Why don't you shift the index once you differentiate the series?
2) Is it possible to have a series starting with a negative n?
Example: sum from n = -1 to n = infinity.
Thank you!
Thanks for the question!
1) I don't because I don't have to. I said it down below as well. I'll paste what I said here:
"I think you're confusing the Frobenius method solutions with regular series solutions:
mathworld.wolfram.com/FrobeniusMethod.html
In the Frobenius method, the derivatives both start at n=0 because of the extra 'r' in the power. The n=0 term here doesn't simply go away due to differentiation, as it would in a regular series solution (without the r).
Still, even in a regular series solution, the 1st derivative would vanish at n=0 while the 2nd derivative would vanish at n=0 and n=1, so it doesn't matter if we start the series at n=0 in either case, since those early terms are zero anyway."
2) I could start at a negative index as well, but I don't because it might seem confusing to people, so I generally just avoid it.
How do you make such videos? What software did you use?
Thank you, sir, for this nice and useful video!
Great video!
Glad you enjoyed it!
Thanks a lot, I appreciate your effort.
I love Fuck's Theorem
excellent, thank you so much
No problem! Glad you liked it!
My name is Hayden Frobenius and one of my ancestors created the Frobenius method.
I really hope these kinds of tutorials have good audio (the speaker who teaches should opt to use a GOOD MIC).
I've started using a better mic now, so no need to worry there!
Thank you so much sir, keep it up!
Thank you very much!
This is beautiful
Thank you so much for your explanation. One question: which device and app did you use for the digital handwriting?
1:49 Good one 😂😉
This is a family show 😂😂
Really helpful, thanks!
Amazing
markakis gang are u there?!
His name is Fuchs, not Fuch. Fuchs just means fox in German xD and it's pronounced almost the same way as fox too, like "foox".
The video is a little bit fast; please keep pauses.
Sure, I'll take that into consideration when making my future videos. Thanks for the feedback!
I actually think the pace is perfect!
You know you have a pause button in the video player, right? :q
You can do pauses yourself, at whatever moments you need them.
Y'all are acting as if you didn't hear the word "stubborn" 😂
It's difficult to read the stuff he is writing; it's kinda too small, or maybe it's just me.
Hi Togara,
Thank you for the comment! I'm sorry that you find my writing small, because I assumed that everything was fine given that no one else has pointed this out so far. I record my videos on my cruddy laptop so there typically isn't much room on the screen to work with.
Perhaps you could try switching to HD (my videos go up to 720p) and changing to full screen if you haven't already done so. If that doesn't work, could you tell me in what parts of the video you find the writing small? Perhaps I can explain them for you?
thank you for responding to my comment, i realized the problem was laptop not your handwriting, so right now im going through the video, and if i encounter any more challenges i will kindly ask, and thank you again
No problem!
And sure thing, let me know about any challenges!
Also, do you mean my laptop or your laptop?
The writing's ok
a family show??
I thought these solutions to differential equations were illegal for anyone under 18 😂😂
I hadn't even caught the homonym.
What's with the voice?
what is this,
The way he pronounces Fuchs, it would actually be written "Fjuks" or something like that in German, lul.
at 1:50 lol
lmao
Professor Tate
Fuch's theorem..... hahahah,, lol.
say on skibidi
Hey
LOL do not mispronounce Fuch's Theorem
Thank you!