I felt as if I am watching 3Blue1Brown :)
Thanks for the compliment :D :D
3Blue1Brown -> the ultimate Indian theme XD
yeah that's what I was also thinking
same here guys
Brilliant, you explained something my lecturer and about 5 other websites couldn't in plain clear terms. Thank you.
Happy to help :D
I just came across and was curious. How’d the rest of your class go?
@@PunmasterSTP Thanks for asking, not great unfortunately so I pulled out. I think the course plan was very well thought out but the teaching was really lacking when trying to explain some new and difficult concepts. I'll look for other paths for learning this topic.
@@krmt I'm sorry to hear that, and I hope your current academic endeavors are going well.
which course is this?
Wait, isn't it RMSE, and not MSE, that uses l2 norm? Because in MSE we are not taking a square root after we sum the squared elements of Y-Y_pred vector, which is done when calculating l2 norm
You're Goddamn Right
Great and intuitive explanation with a very clear visualization. Thanks for your effort. but I would like to pay your attention to the little mistake in the part of MSE where the L2 norm should represent the RMSE not MSE with no root. keep going.
I was rather confused when he mentioned mean squared errror until I saw your comments. Thanks for the clarification!
It's RMSE (Root Mean Squared Error) that corresponds to the L2 norm, not MSE (Mean Squared Error). At 4:09 the square on the equation should not be there...
You're Goddamn Right
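The commenters above are right that the L2 norm lines up with RMSE (up to a constant factor), not MSE. A quick NumPy sanity check — the y_true/y_pred values below are made up purely for illustration:

```python
import numpy as np

# Hypothetical targets and predictions, just for illustration
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

err = y_true - y_pred
l2 = np.linalg.norm(err)      # sqrt of the sum of squared errors
mse = np.mean(err ** 2)       # mean squared error (no square root)
rmse = np.sqrt(mse)           # root mean squared error

# The L2 norm equals sqrt(n) * RMSE, not MSE
assert np.isclose(l2, np.sqrt(len(err)) * rmse)
```

So the L2 norm of the error vector is sqrt(n) times the RMSE; plain MSE, without the root, is not an L2 norm.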
Thank you! I'm just starting linear algebra, and this helped me understand/visualize what's going on with norms much better. Loved it!
Great video, this guy is probably 3brown1blue. You know, cause Indian people have mostly brown eyes :v
Haha...nice one!
I'm taking my first fundamental analysis class and it's kicking my ass.
This video cleared up 90% of my confusion about wtf a norm was. You're doing a good thing, keep doing it, I hope you sleep amazing tonight!
sleep amazing, thats a rare but important one
short, crisp, and rich in information, that's rare nowadays on youtube
Thanks man!
It's not. You have to search a little bit.
It was such a rewarding 5 mins watching this video. Please keep them coming.🔥
Norm in machine? More like “Natural explanation that’s just the thing!” Thanks for sharing. 👍
Love you. You deserve a place in heaven
Thanks for your kind words ❤️
Please make video on manim. How to use it? No good tutorials out there.
Many people are already doing that. Check these out:
Theorem of Beethoven (YT channel)
Talking Physics (Blog)
eulertour.com
r/manim
"the victory"?? IT'S "THE VECTOR A"
great video. amazing explanation
may i please ask that you remove the social tags you included in the top left? it's extremely distracting
Thanks man. Just had a class where I got lost on what norms were. The college education system is broken. You guys are the saviours.
You're putting the "brown" in 3Blue1Brown 🙂
This is an insanely concise and informative video I found not just the information I was looking for but also other information I didn't know I needed as well. I disagree with VEERARAGHAVAN J, this is better than 3Blue1Brown
Thank you so much buddy :D :D
Great Video, simple and informative explanation.
Thank you!
hello people from the future, this video really really helped
Great job. Love manim
Manim ❤️
wow, it's that simple! thank you!!!!
WHAT SOFTWARE DID YOU USE TO ANIMATE THE VIDEO?
Great explanation, thank you so much for the video.
so high quality stuff, feels illegal to watch free
Thanks, Amazing video and mainly Animations (Are you using Manim ?)
Yeah...😌
3:58 what is the maximum of the vector 😊
sorry but i did not understand anything.
4:03 L2 is RMSE not MSE
Can you do a video on Gaussian naive bayes.
Thanks for the suggestion...I'll try to make one.
Great, as always. Keep'em coming!
Thanks, will do!
Appreciate you 👍
👍
Thanks very clear
|x|^n + |y|^n = 1
if n --> inf
then every term with absolute value below 1 vanishes, so either x = +1 or -1, or y = +1 or -1, for the equality to hold
which gives us a square
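A tiny numeric check of that argument (plain Python; the sample points are arbitrary): solving |x|^n + |y|^n = 1 for y, the curve flattens toward y = 1 for every |x| < 1 as n grows, which is exactly the square.

```python
def y_on_unit_ball(x, n):
    """Solve |x|**n + |y|**n = 1 for the positive y."""
    return (1.0 - abs(x) ** n) ** (1.0 / n)

# n = 2 gives the ordinary circle: y = sqrt(1 - x^2)
print(y_on_unit_ball(0.9, 2))    # ~0.436

# For large n, y is almost 1 even when |x| is close to 1
print(y_on_unit_ball(0.9, 100))  # very close to 1
```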
I'm from the year 2022
👍
what do you mean by "every point on the circumference of the square is a vector with L1 = 1"? Do you mean the perimeter of the square? at ruclips.net/video/FiSy6zWDfiA/видео.html
great sir
Genius
Great!
Nice video, I am a little confused how L2 norm and mean are related. I see that L2 norm and mean (not mean error) are exactly the same. Does that imply L2 norm's other name is mean ?
thanks. could you please tell me what is the name of the program you made this video.
What is the name of the piano piece at 3:03 min mark? Thanks.
'No. 10 - A New Beginning' by Esther Abrami, part of the YouTube Audio Library
Thanks for the video, these 5 minutes cleared up doubts that I had accumulated over 5 lectures haha
Lost for words, just amazing 😃
Thank you so much :)
Great. Nice explanation
Thanks man!
Good explanation. Thank you
now it makes sense. Thank you very much!
Good job man
Damn top right corner. So annoying
You're a saint, thank you
THANK YOU SO MUCH!!!
Welcome :)
nice brother
Thanks a lot
Love the energy in your voice!
Thank you for the quick and effective explanation of norm.
Thank you so much for this!!
a quality education thank u
500th like! first time the ridge and lasso ambiguity got cleared, once for all
Glad it helped!!
the dynamic change really helps a lot!! thanks
This is great! Thank you!
wonderfully explained. I appreciated it a lot.
Really good. Thanks !! 🔥
I like this video
Thanks sir 🙏🏼
This was really good. ❤❤
Thanks :D
Great thanks! Do we have to go through the center to measure the distance between the two points? In Euclidean distance, for example, aren’t we measuring the distance through the hypotenuse? After we got the lengths of the two lines from the center?
most underrated video in math youtube
Thanks!
perfect explanation. thanks so much!
Glad it was helpful!
Really good man, thanks
Thanks man!
Wow you have opened my eyes to the truth
Glad to help :)
Can you share resources where I can learn about L(inf) norms?
Well, it's actually pretty easy to see...
L_n = (X_1^n + X_2^n + ... + X_k^n)^(1/n)
When n -> inf, X_max^n >> all the other terms.
And that is the reason why we only get X_max when we take the nth root.
@@NormalizedNerd yes exactly
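For anyone who wants to see this numerically: NumPy's linalg.norm takes an ord parameter, and cranking it up shows the p-norm collapsing onto the largest absolute entry. The vector below is made up just for illustration.

```python
import numpy as np

v = np.array([1.0, -3.0, 2.0])  # arbitrary example vector

# ||v||_p approaches max(|v_i|) as p grows
for p in [1, 2, 10, 100]:
    print(p, np.linalg.norm(v, ord=p))

# ord=np.inf gives max(|v_i|) directly: 3.0 here
print(np.linalg.norm(v, ord=np.inf))
```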
Thank you so much, your explanation makes it so much easier to grasp
Glad it was helpful :D
Amazing!
Thank you!!
Great !!! Keep going !)
Keep supporting!
Nice, may I ask you what you use to create animations? :)
I use manim (open source python library)
@@NormalizedNerd ok, thank you very much :)
@@marcocanil4147 I thought Manim was standalone software. But I was amazed to see he's using a Python library