This man should be awarded a Nobel Prize for saving MY LIFE !! 🎖
AGREED !
This guy has helped me more than my university calculus professor and I cannot seem to find a way to repay him.
Can someone tell me why he's so underrated ?
Same reason why Bill Gates gets few views
it's intuitively nice and easy to understand
For the first time, I understand why I need the Hessian matrix when I use Newton's method. Great video!!!
This lecture is like an all-in-one. Ahmad also provides the code. What a person!
Excellent way of presenting multivariate Newton !
Very well explained, thank you sir
You are watching a master at work ! 👍🏻
Ahmad is awesome ! Please continue your lectures and don't stop.
This guy is amazing and he's giving good knowledge
Informative
nice video
Thank you! Very helpful.
Now these 2 terms are the same: x^T has dimension 1 x n, H is n x n, a is n x 1, thus their product is a 1 x 1 matrix (i.e. a scalar)
An Awesome Video, Thanks a lot
Fantastic!
Recommended
Wow very nice video
It's a really good job. Please continue your lectures and don't stop
The premultiplication term of the correction from 9:10 to 9:40 should be the inverse of D.
I think the estimation of initial conditions based on physical knowledge of the problem, or a rough approximation of the function f, is critical not only for convergence but also for finding a meaningful solution that avoids local (but not global) minima. Otherwise it would be better, even if more computationally expensive, to use a global method like simulated annealing.
Yes, you're right. While editing the video, I had put a note indicating this should be an inverse; no idea why it did not render.
Also notice line 13 at 12:52, which has the inverse you refer to.
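For anyone who wants to see that corrected update in code, here is a minimal sketch of the multivariate Newton iteration x <- x - H(x)^{-1} * grad f(x). The test function f(x, y) = (x - 1)^2 + 10*(y - 2)^2 and all names are illustrative assumptions, not taken from the video's own code:

import numpy as np

# Illustrative objective: f(x, y) = (x - 1)^2 + 10*(y - 2)^2
def grad(x):
    return np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] - 2.0)])

def hess(x):
    return np.array([[2.0, 0.0], [0.0, 20.0]])

x = np.array([5.0, -3.0])                    # rough initial guess
for _ in range(50):
    step = np.linalg.inv(hess(x)) @ grad(x)  # the H^{-1} premultiplication under discussion
    x = x - step
    if np.linalg.norm(step) < 1e-10:         # stop once the correction is negligible
        break
print(x)                                     # converges to the minimizer (1, 2)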
Lovely video
It's derived from: x^T * H * (-a) + (-a)^T * H * x, where * is multiplication
Nice project
Very good
That was very concise and clear. Thanks a lot!
Thanks.
Nice❤️
Nice video. If you ever get a chance to add Gauss-Newton, that'd be awesome! I can do the math for it, but I don't quite get the intuition.
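For what it's worth, the Gauss-Newton intuition in one line (a standard result, not from this video): for a least-squares objective, the exact Newton Hessian splits into a term built from first derivatives alone plus a term weighted by the residuals, and Gauss-Newton simply drops the second term:
\[
f(\mathbf{x}) = \tfrac{1}{2}\,\lVert \mathbf{r}(\mathbf{x})\rVert^2,
\qquad
\nabla^2 f = J^{\mathsf T} J + \sum_i r_i\, \nabla^2 r_i \;\approx\; J^{\mathsf T} J,
\]
so near a good fit (small residuals \(r_i\)) the Gauss-Newton step \(J^{\mathsf T} J\,\mathbf{d} = -J^{\mathsf T}\mathbf{r}\) is almost the full Newton step, with no second derivatives needed.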
nice
Wow
Thank you !
It could be done as H is symmetric.
Could someone explain how he calculated the derivative of q(x), please? How does he go from b^T * x to b after differentiation?
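In case it helps, that step follows by writing the inner product in components:
\[
\mathbf{b}^{\mathsf T}\mathbf{x} = \sum_{i=1}^{n} b_i x_i
\quad\Longrightarrow\quad
\frac{\partial}{\partial x_j}\,\mathbf{b}^{\mathsf T}\mathbf{x} = b_j \;\;\text{for each } j
\quad\Longrightarrow\quad
\nabla_{\mathbf{x}}\,\mathbf{b}^{\mathsf T}\mathbf{x} = \mathbf{b}.
\]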
I'm having a hard time understanding what c will be in actual applications of this method. Such as using some given f of two variables x1 and x2. Oh well.
Can Newton's method be extended to higher dimensions?
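Yes, that's exactly what this video covers: replace the scalar derivative with the gradient and the second derivative with the Hessian, so the update in n dimensions reads
\[
\mathbf{x}_{k+1} = \mathbf{x}_k - H(\mathbf{x}_k)^{-1}\,\nabla f(\mathbf{x}_k).
\]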
wow it's eigenchris
Kemuel
subscribe
this is all nonsense... y u no show example, brother?
nice video
nice
Do you need to calculate the inverse of the Hessian? Would the quadratic curve that you'd get from going in the direction of the gradient lead to the minimum of the approximating surface?
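On the first part: you don't need the explicit inverse; solving the linear system H d = -grad f gives the same Newton step more cheaply and more stably. On the second part: minimizing the quadratic model only along the gradient direction gives the steepest-descent (Cauchy) step, which in general is not the model's minimum unless H is a multiple of the identity. A small NumPy check, with made-up example matrices:

import numpy as np

H = np.array([[2.0, 0.0], [0.0, 20.0]])  # example Hessian (assumed positive definite)
g = np.array([8.0, -100.0])              # example gradient
d_inv   = -np.linalg.inv(H) @ g          # Newton step via the explicit inverse
d_solve = np.linalg.solve(H, -g)         # same step via a linear solve (preferred)
print(np.allclose(d_inv, d_solve))       # True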
This is probably the worst video to date about this. You're transposing a single variable vector???
So we can write the first term as (-a)^T * H * x and do the addition:
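Spelled out, that step uses the fact that a 1 x 1 matrix equals its own transpose, plus the symmetry of H noted above:
\[
\mathbf{x}^{\mathsf T} H\,(-\mathbf{a})
= \bigl(\mathbf{x}^{\mathsf T} H\,(-\mathbf{a})\bigr)^{\mathsf T}
= (-\mathbf{a})^{\mathsf T} H^{\mathsf T}\,\mathbf{x}
= (-\mathbf{a})^{\mathsf T} H\,\mathbf{x},
\]
so the two cross terms add up to \(-2\,\mathbf{a}^{\mathsf T} H\,\mathbf{x}\).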
okay, sorry... thanks btw for the vids :)
Very good tutorial !!!
Excellent video. Thank you very much for sharing. This video gave me tons of help in understanding and doing my homework. God bless.
great video
Hello
Nice
In some notations, please note that the gradient part is also written as (x-a)^T times the gradient; in this case you took the transpose of the gradient instead, and both are the same.
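Right, both orderings give the same scalar, since a 1 x 1 matrix equals its own transpose:
\[
(\mathbf{x}-\mathbf{a})^{\mathsf T}\,\nabla f(\mathbf{a}) = \nabla f(\mathbf{a})^{\mathsf T}\,(\mathbf{x}-\mathbf{a}).
\]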
I've been looking for this derivation. Thanks yet again for a wonderful video. :)
This turned out to be very interesting, and I included it in my master's thesis for solving a hybrid electrical system running rotors, which turns out to involve nonlinear systems. I followed the same steps as Ahmad did, and it works.
Amazing, crisp and clear explanation.