This guy has helped me more than my university calculus professor, and I cannot seem to find a way to repay him.
This man should be awarded a Nobel Prize for saving MY LIFE !! 🎖
AGREED !
it's intuitively nice and easy to understand
For the first time, I understand why I need the Hessian matrix when I use Newton's method. Great video!!!
Amazing, crisp and clear explanation.
Very well explained, thank you sir
Excellent way of presenting multivariate newton !
This lecture is like an all-in-one. Ahmad also provides the code. What a person!
You are watching a master at work ! 👍🏻
Very good tutorial !!!
Ahmad is awesome ! Please continue your lectures and don't stop.
An Awesome Video, Thanks a lot
Excellent video. Thank you very much for sharing. This video gave me tons of help in understanding and doing my homework. God bless.
Thank you! very helpful.
This turned out to be very interesting, and I included it in my master's thesis for solving a hybrid electrical system running rotors, which turns out to involve nonlinear systems. I followed the same steps as Ahmad did, and it works.
This guy is amazing, and he gives good knowledge.
Informative
It's really good work. Please continue your lectures and don't stop.
I've been looking for this derivation. Thanks yet again for a wonderful video. :)
Fantastic!
That was very concise and clear. Thanks a lot!
Can someone tell me why he's so underrated ?
Same reason why Bill Gates gets few views
Wow very nice video
The premultiplication term of the correction from 9:10 to 9:40 should be the inverse of D.
I think the estimation of initial conditions based on physical knowledge of the problem, or on a rough approximation of the function f, is critical not only for convergence but also for finding a meaningful solution that avoids local (but not global) minima. Otherwise it would be better, even if more computationally expensive, to use a global method like simulated annealing.
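Newton's sensitivity to the starting point is easy to see in a small sketch. The function and starting values below are my own illustration (not from the video): the same iteration lands in a local minimum or the global minimum depending only on where it starts.

```python
def newton_1d(df, d2f, x, iters=50):
    """Newton's method for optimization: x <- x - f'(x)/f''(x)."""
    for _ in range(iters):
        x = x - df(x) / d2f(x)
    return x

f   = lambda x: x**4 - 3*x**2 + x      # two minima, one global
df  = lambda x: 4*x**3 - 6*x + 1       # first derivative
d2f = lambda x: 12*x**2 - 6            # second derivative

# Two starting points converge to two different stationary points:
x_a = newton_1d(df, d2f,  1.5)   # lands in the local (not global) minimum
x_b = newton_1d(df, d2f, -1.5)   # lands in the global minimum
print(x_a, f(x_a))
print(x_b, f(x_b))
```

Both runs satisfy f'(x) = 0, but only the second finds the lower minimum — which is the point about initial guesses made above.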
Yes, you're right. While editing the video, I had put a note indicating this should be an inverse — no idea why it did not render.
Also notice that at 12:52, line 13 has the inverse you refer to.
Lovely video
Recommended
great video
nice video
Regarding notation: please note that the gradient part is also written as (x-a)^T times the gradient; in this case you took the transpose of the gradient instead, and both are the same.
Nice project
Very good
Thank you !
Do you need to calculate the inverse of the Hessian? Would the quadratic curve you'd get from going in the direction of the gradient lead to the minimum of the approximating surface?
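On the first part of that question: in practice you usually don't form the Hessian inverse explicitly — solving the linear system H·d = −∇f for the step d is cheaper and more numerically stable. A minimal sketch (the test problem and names are my own, not from the video):

```python
import numpy as np

def newton_step(grad, hess):
    """One Newton step: solve H d = -grad instead of computing inv(H) @ grad."""
    return np.linalg.solve(hess, -grad)

# Quadratic test problem f(x) = 0.5 x^T H x - b^T x, whose minimum solves H x = b.
H = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.zeros(2)
g = H @ x - b               # gradient of f at the current x
x = x + newton_step(g, H)
print(x)                    # for a quadratic, one Newton step lands on the minimum
```

The result is identical to multiplying by the inverse, but `solve` avoids the extra cost and round-off of building inv(H).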
Thanks.
Nice❤️
Nice
Nice video. If you ever get a chance to add Gauss-Newton, that'd be awesome! I can do the math for it, but I don't quite get the intuition.
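Until such a video exists, here is a hedged Gauss-Newton sketch (my own toy example, not from the video). The intuition: for least-squares problems you linearize the residuals, so J^T J stands in for the Hessian and no second derivatives are ever computed. The exponential model and data below are made up for illustration.

```python
import numpy as np

def gauss_newton(t, y, a, b, iters=20):
    """Fit y ~ a*exp(b*t) by Gauss-Newton: linearized residuals, J^T J as Hessian."""
    for _ in range(iters):
        e = np.exp(b * t)
        r = a * e - y                        # residual vector
        J = np.column_stack([e, a * t * e])  # columns: dr/da, dr/db
        step = np.linalg.solve(J.T @ J, -J.T @ r)
        a, b = a + step[0], b + step[1]
    return a, b

t = np.linspace(0.0, 1.0, 10)
y = 2.0 * np.exp(0.5 * t)          # noiseless synthetic data, true a=2, b=0.5
a, b = gauss_newton(t, y, 1.5, 0.3)
print(a, b)
```

Because the residuals are exactly zero at the solution here, J^T J matches the true Hessian at the optimum and the iteration recovers the true parameters.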
Could someone explain how he calculated the derivative of q(x), please? How does he go from b^T*x to b after differentiation?
It's derived from: (x^T)*H*(-a) + (-a^T)*H*(x), where * is multiplication.
Now these two terms are the same: x^T is 1 x n, H is n x n, and a is n x 1, so each product is a 1 x 1 matrix (i.e. a scalar).
Can Newton's method be extended to higher dimensions?
So we can write the first term as (-a^T)*H*(x) and do the addition.
This can be done because H is symmetric.
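The algebra in this thread is easy to check numerically. A small sketch (the random H, b, and x are my own illustration): the gradient of b^T x is b, and the gradient of x^T H x is 2 H x for symmetric H, which is exactly the term-merging argument above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
b = rng.standard_normal(n)
M = rng.standard_normal((n, n))
H = M + M.T                       # symmetric, as the thread assumes
x = rng.standard_normal(n)

q = lambda v: b @ v + v @ H @ v   # q(v) = b^T v + v^T H v

# Central finite differences approximate the gradient component by component:
eps = 1e-6
fd = np.array([(q(x + eps*np.eye(n)[i]) - q(x - eps*np.eye(n)[i])) / (2*eps)
               for i in range(n)])

analytic = b + 2 * H @ x          # d/dx (b^T x) = b,  d/dx (x^T H x) = 2 H x
print(np.max(np.abs(fd - analytic)))
```

The finite-difference and analytic gradients agree to round-off, confirming that the two scalar terms really do merge into the factor of 2.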
Wow
I'm having a hard time understanding what c will be in actual applications of this method, such as with some given f of two variables x1 and x2. Oh well.
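One way to see it, with a concrete two-variable f of my own choosing (not from the video): in the second-order Taylor model built around an expansion point a, the constant term c is just f(a) — the function value at the point you expand around.

```python
import numpy as np

# f(x1, x2) = x1^2 + 3*x2^2 + x1*x2, expanded around a = (1, 2).
f = lambda x: x[0]**2 + 3*x[1]**2 + x[0]*x[1]

a = np.array([1.0, 2.0])
grad = np.array([2*a[0] + a[1], 6*a[1] + a[0]])   # gradient of f at a
H = np.array([[2.0, 1.0], [1.0, 6.0]])            # Hessian (constant for this f)

# Quadratic model q(x) = c + grad^T (x-a) + 0.5 (x-a)^T H (x-a), with c = f(a):
c = f(a)
q = lambda x: c + grad @ (x - a) + 0.5 * (x - a) @ H @ (x - a)

x = np.array([0.3, -1.1])
print(f(x), q(x))   # identical here: f is itself quadratic, so the model is exact
```

For a non-quadratic f the same recipe applies; q then only matches f near a, which is why Newton's method re-expands at every iterate.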
Wow, it's eigenchris!
Okay, sorry... thanks btw for the vids :)
Kemuel
Hello
This is all nonsense... why don't you show an example, brother?
This is probably the worst video to date about this. You're transposing a single variable vector???
nice video
nice
nice