Loss Functions in Deep Learning | Deep Learning | CampusX
- Published: 28 Jun 2024
- In this video, we'll understand the concept of Loss Functions and their role in training neural networks. Join me for a straightforward explanation to grasp how these functions impact model performance.
============================
Do you want to learn from me?
Check my affordable mentorship program at : learnwith.campusx.in
============================
📱 Grow with us:
CampusX's LinkedIn: / campusx-official
CampusX on Instagram for daily tips: / campusx.official
My LinkedIn: / nitish-singh-03412789
Discord: / discord
👍If you find this video helpful, consider giving it a thumbs up and subscribing for more educational videos on data science!
💭Share your thoughts, experiences, or questions in the comments below. I love hearing from you!
✨ Hashtags✨
#DeepLearning #LossFunctions #NeuralNetworks #MachineLearning #AI #LearningBasics #SimplifiedLearning #modeltraining
⌚Time Stamps⌚
00:00 - Intro
01:09 - What is loss function?
11:08 - Loss functions in deep learning
14:20 - Loss function vs cost function
24:35 - Advantages/Disadvantages
59:13 - Outro
When will you cover RNN, encoder-decoder & transformers?
Also, if you could make mini projects on these topics, it would be great.
Keep doing this great work of knowledge sharing, hope your tribe grows more. 👍
Your every word and every minute of sayings are worth a lot!
The only channel I have ever seen on YouTube that is this underrated! Best content seen so far....... Thanks a lot
Fantastic Explanation Sir ! Absolutely brilliant ! Way to go Sir ! Thank you so much for the crystal clear explanation
Good Content, great explanation and an exceptionally gifted teacher. Learning is truly made enjoyable by your videos. Thank you for your hard work and clear teaching Nitish Sir.
One day this channel will become the most popular for Deep Learning ❤️❤️
Your content delivery is truly outstanding, sir. Although the numbers don't do justice to your teaching talent, let me tell you I came here after seeing many paid courses and became fond of your teaching method. So please don't stop making such fabulous videos. I am pretty sure that this channel will soon be among the top channels for ML and data science!!
This is the best explanation of the whole basics of losses; all doubts are cleared. Thank you so much for this video.
These loss functions are the same as those taught in machine learning. The differences are in the Huber, Binary, and Categorical loss functions.
Great content for me....now everything about loss function is clear .......thank you
My Morning begins with campusX...
Gentleman, you are on the right track
Please continue the "100 days of deep learning", sir, it's a humble request to you. This playlist and this channel are the best on all of YouTube for machine learners ❤❤❤❤
Sir, you are really amazing. I have learned a lot of things from your YouTube channel.
I wanted this video and got it. Thank you.
Was able to understand each and every word and concept just because of you, sir. Your teaching has brought me to a place where I can understand such concepts easily. Thank you very much, sir. Really appreciate your hard work and passion. ❣🌼🌟
Such wonderful learning experience
With all respect....thank you very much ❤
It was a great explanation. Thank you so much for such amazing videos.
Great lecture as usual. Just one small clarification: binary cross entropy is convex (though it has no closed-form solution), hence it has only a single global minimum and no local minima. This can be proved with simple calculus by checking that the second derivative is always greater than 0. So your statement that there are multiple local minima is not right. But thanks for your comprehensive material, which is helping us learn such complex topics with ease!
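A quick numeric sketch supporting this (my own illustration, not from the video): for a single sample with y = 1, the binary cross-entropy as a function of the pre-activation z has a non-negative discrete second derivative everywhere we sample it, which is consistent with convexity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(z, y):
    # binary cross-entropy as a function of the pre-activation z
    p = sigmoid(z)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

z = np.linspace(-6, 6, 1001)
loss = bce(z, y=1)
second_diff = np.diff(loss, n=2)       # discrete second derivative
print(second_diff.min() >= -1e-9)      # non-negative everywhere -> convex in z
```

Analytically, the second derivative of BCE w.r.t. z is sigmoid(z)·(1 − sigmoid(z)), which is always positive.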
Great work sir. Amazing 😍
44:52 Binary cross entropy loss is a convex function, so it will have only one minimum, the global minimum.
Amazing as always, Sir ji
thank you for your hard work
Great video sir as expected
Very very excellent teaching skills you have, sir! It's like a college senior explaining a concept to me while sitting in the hostel room.
Amazing sir 🙏🏻
Beautiful explanation
this playlist is a 💎💎💎💎💎
Welcome Back Sir 🤟
How beautiful this is 🥰
Awesome sir!
Learning DL and Hindi together, respect from Afghanistan Sir!
Thanks for the timestamps It's really helpful
Nowadays my mornings and nights end with your lectures, sir 😅.. thanks for putting in so much effort.
great content
Very well explained, Thanks
Mindboggling !!!!!!!!!!!!!!!!!!
Hi. I think the Huber loss example plot @ 36:59 is for a classification example rather than a regression example. The regression line should pass through the data points instead of separating them.
awesome man just amazing ... ! ! !
Great content!
Thank you!!!
Very well explained
Thank you so much, sir, clear explanation
nice explanation sir
thank you so much
If the difference (yi - y^i) is a decimal below 1, then squaring diminishes the loss value instead of magnifying it, so perhaps a novelty would be to take this into account.
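This is a real property of squared error: residuals with magnitude below 1 shrink after squaring, while those above 1 grow. A tiny sketch (my own illustration, not from the video) showing the effect, with absolute error (MAE) shown as the alternative that keeps the penalty proportional:

```python
import numpy as np

residuals = np.array([0.5, 0.1, 2.0, 5.0])
squared = residuals ** 2           # MSE-style penalty per sample
absolute = np.abs(residuals)       # MAE-style penalty per sample
print(squared)    # errors below 1 shrink, errors above 1 grow
print(absolute)   # penalty stays proportional to the error
```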
Amazing
This is so very important
Thank you
At timestamp 44:40 --> sir, you said that binary cross entropy may have multiple minima, but binary cross entropy is a convex function, so I think it won't have multiple minima.
Thank You Sir.
At 21:06 [MEAN SQUARE ERROR]: to calculate the total error by doing (y - y^), some values may be negative and can reduce the error (which we don't want); that is why we square after subtracting, as you said. My doubt is: can we instead just make those negative values positive? Then there would be no need to square. Please explain this. Thank you. :)
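To the question above: yes, you can take the absolute value instead of squaring, and that is exactly Mean Absolute Error (MAE). The practical difference shows up in the gradient: MSE is smooth everywhere, while MAE's gradient has constant magnitude and is undefined at zero. A minimal sketch (my own, assuming simple NumPy arrays for y and y_hat):

```python
import numpy as np

y = np.array([3.0, -0.5, 2.0, 7.0])
y_hat = np.array([2.5, 0.0, 2.0, 8.0])

mse = np.mean((y - y_hat) ** 2)    # squares the residuals
mae = np.mean(np.abs(y - y_hat))   # just makes them positive
print(mse, mae)
```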
22:25 unit^2
I am enjoying your video like a web series sir
Great work
sir carryon this series
One disadvantage of MSE that I can figure out: if there are multiple local minima, there might be a case where the MSE loss function leads to a local minimum instead of the global minimum.
Thank you sir 😁😊
🦸♂Thank you Bhaiya ...
amazing lectureeeeeeee
Great explanation. Can you tell me why we need bias in a NN, and how it is useful?
Can you please create videos for the remaining loss functions, for AutoEncoders, GANs, and Transformers too. Thanks
Excellent teaching skills. Sir, please provide the notes PDF.
Can we use the step function as the activation function for the last layer / prediction node while doing a classification problem with binary cross entropy, for 0 and 1 outputs?
Thanks sir
Thanks Sir
thanks sir
Superb video, sir! Can you tell me which stylus you're using? And what is the name of the drawing/writing pad that you use? I want to buy one too.
Galaxy tab s7+
Great concise video. Loved it.
A small question 💡:
Sometimes we do drop='first' to remove the redundant first column during one-hot encoding. So does that make a difference while using either of these categorical losses!?
I think this might be happening automatically, or it is not needed, because that way we could not get the loss for that category.
Yes, it affects the model, because you should keep the number of parameters as small as possible for an optimized model. But we don't always; it depends on the variables/input. For example, 2 categories can be represented by just one variable (2^1 = 2), and 3 categories require at least 2 variables; since 2^2 = 4, we can drop one column.
Awesome
finished watching
best
Wouldn't Categorical and Sparse Cross-Entropy become the same?
As after OHE, all log terms become zero except the one for the true class, which gives the same result as Sparse.
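Yes — with a correct one-hot target the two losses compute the same number; sparse cross-entropy just skips building the one-hot vector and indexes the true class directly. A NumPy sketch (my own illustration, not from the video):

```python
import numpy as np

probs = np.array([0.3, 0.6, 0.1])    # softmax output for one sample
true_class = 1
one_hot = np.array([0.0, 1.0, 0.0])  # OHE target for class 1

categorical_ce = -np.sum(one_hot * np.log(probs))  # zeros kill all but one term
sparse_ce = -np.log(probs[true_class])             # index the true class directly
print(np.isclose(categorical_ce, sparse_ce))       # True
```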
easyy thankssss
Really enjoyed it!
❤
At 36:27, shouldn't the line be nearly perpendicular to what you drew? Seems like a case of Simpson's paradox.
Respect
The ML MICE SKLEARN video is still pending, sir, please make that video. The other playlists are also very helpful, thanks for all the content.
43:32
cost function = (1/n) ∑ (loss function)
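In code, the cost is just the average of the per-sample losses. A minimal sketch (my own, using squared error as the example loss):

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0])
y_hat = np.array([1.5, 2.0, 2.0])

per_sample_loss = (y - y_hat) ** 2   # loss: computed on one row at a time
cost = per_sample_loss.mean()        # cost: (1/n) * sum over all rows
print(cost)
```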
Sir, which tool are you using for explanation in this video
Thank you, sir, for this great content.
13/05/24
As usual crystal clear explanation Sir ji❤❤🙌 @CampusX
Please share the whiteboard @CampusX
What is the difference:
1) if we update the weights and bias on each row, for all epochs,
2) for each batch (all rows together), for all epochs?
Can you tell scenarios where one is better than the other?
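On the question above: option 1 is stochastic gradient descent (noisy but frequent updates, which can help escape shallow local minima), option 2 is full-batch gradient descent (stable but only one update per epoch); mini-batches are the usual compromise. A minimal sketch on a linear model with an MSE objective (my own illustration; both settings recover weights near the true w = 2, b = 1):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=100)  # true w = 2, b = 1

def train(batch_size, lr=0.05, epochs=50):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for i in range(0, len(x), batch_size):
            xb, yb = x[i:i + batch_size], y[i:i + batch_size]
            err = (w * xb + b) - yb
            w -= lr * 2 * np.mean(err * xb)   # MSE gradient w.r.t. w
            b -= lr * 2 * np.mean(err)        # MSE gradient w.r.t. b
    return w, b

print(train(batch_size=1))       # per-row (stochastic) updates
print(train(batch_size=len(x)))  # full-batch updates
```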
+1
Thank you sir for resuming
Revising my concepts.
August 04, 2023 😅
Can someone explain to me how 0.3, 0.6, 0.1 is coming @ 52:37? I want to know how I can get these values and which formula is used.
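Those numbers come from the softmax function, which turns the raw outputs (logits) of the last layer into probabilities that sum to 1. A sketch (the logits below are made up by me to roughly reproduce those values; they are not from the video):

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)   # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([1.1, 1.8, 0.0])  # hypothetical raw scores from the network
print(softmax(logits).round(2))     # roughly [0.3, 0.6, 0.1]
```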
Great
please put timestamp for each topic in this video.
Please take care of background noises
If you explain like this, then one has to hit like, right...
First viewer
Isn't logloss convex?
The blackboard was good
Hi sir
I want a complete end-to-end project video. Please share.
Why you stopped posting videos in this Playlist?
Creating the next one right now... Backpropagation
@@campusx-official please upload at least one video every 3-4 days to maintain continuity. By the way, this playlist is going to be a game changer for most learners, because comprehensive video content for Deep Learning is not available on YouTube!
Your method of teaching is very simple and understandable. Thank You for providing credible content!
Time series in details 😓
Let him finish this series first. Why force like this???
@@geekyprogrammer4831 true brother
Bird sounds are coming through in the background
Please avoid speaking Hindi in the video
The Thalapathy of data science