Thank you, ma'am, for this. I watched almost 4-5 videos on YouTube but didn't get ResNet. You make it very simple. Thanks!
Glad it was helpful!
Easily one of the best videos on ResNet. Crisp and clear explanation. Good job.
Thanks
Ma'am, you are too good. Really, please keep going on your way. Your content is really very helpful. I haven't seen such content on YouTube.
Glad to hear that.
This is an excellent description of the ResNet50 architecture. You've earned a subscriber. 👍
Awesome, thank you!
Thanks, Ma'am, for your easy explanation. I spent almost the entire day trying to grasp some of these things. It's all clear to me now. Keep posting such materials related to CNNs. I am also interested in learning the mathematics behind Data Science from you.
Glad this video is helpful. And I have a playlist named “Statistics for Data Science” where you can learn the maths. I have a few videos in it; the rest will be updated soon.
@@CodeWithAarohi Thanks so much for the kind information. I will definitely check it out, because I need to learn the maths.
You won't believe it: I minimised the video to give it a 👍 like, but found I had already liked it @@CodeWithAarohi
I've watched 10 videos explaining this and yours was the best.
Glad to hear that 😊
What an explanation! This is a masterpiece tutorial; thank you, Ma'am, for making such a mesmerizing video.
Thanks a lot 😊
Many, many thanks, ma'am. This is far, far better than my college professors.
Glad my video is helpful! Keep watching and learning 😊
This is the greatest explanation I have ever seen on this topic. Thank you.
Glad it was helpful!
Ma'am, I want a certificate course on AI, ML and DL. If you make any course on these topics, I would like to enroll with you.
This is the best video on this topic!! Thank you so much, Aarohiji. Your help is greatly appreciated for my research.
I am glad my video is helpful.
I appreciate the patience and the very useful repetition in the presentation!
Glad it was helpful!
Thank you so much for this lecture! Clear and to the point!
Glad it was helpful!
The best ResNet video, explained in so much detail. Thank you, Aarohi.
You're so welcome!
Came back to this old video to review it, and just realized how beautiful ResNet is. How do those people come up with such cool ideas?
Thank you so much for your excellent explanation of ResNet!!!
Welcome
Great explanation! Thank you so much, I know what ResNet is now. ^^
Glad this video is helpful
Thank you very much, ma'am, for your good explanation of ResNet.
You are most welcome
Very friendly explanation. I cleared up my problem with the help of your presentation. Love you, ma'am ❤❤❤
Happy to help
Thanks alot ♥ Greetings from Istanbul Technical University
Welcome 🙂
Great explanation Ma'am Aarohi. You made all the concepts very easy and clear. Lots of love from Pakistan.
Glad my videos helped you 😊
The explanation is good. Madam, I want an explanation of the image size conversion process for the ResNet50 and VGG16 models, for example how a 224x224x3 image is converted to 112x112x64... Please make a video showing the conversion up to the final stage based on the formula. It would help everyone a lot.
Nice explanation mam
Thanks a lot
Thank you for this video. Excellent work
You are welcome!
You make deep learning easy!
Glad to hear that!
Really worth my time to watch the video. Great explanation, Madam.
Glad to hear that
This video is very helpful! One of the best videos on ResNet. Thank you, ma'am. It would be helpful if you could share the slides.
I don't have those PPTs now.
Thanks, ma'am, for such a nice explanation. 🙏🙏
Most welcome 😊
Amazing step-by-step explanation!
Glad it was helpful!
Thank you so much. This explanation is really helpful.
Glad it was helpful!
Great, great explanation. Thank you so much, ma'am.
Most welcome 😊
Nice explanation, superb video, ma'am.
Thank you so much!
Thank you Ma'am for your great and amazing explanation
It's my pleasure
Explained very well. Good work
Thanks
Ma'am, I am following your playlist. It is really very helpful content. But ma'am, your playlist is short; please make more videos, because now I don't want to follow a playlist from any YouTuber except you, ma'am. 🥰🥰🥰
Sure, I will update the playlist and also try to add more videos soon 😊
Very detailed explanation. 👌👌👌👌
thankyou
Thank you so much, such a helpful video.
You are welcome!
Love you, ma'am. I really love your explanation... Thank you very much for making such a video.
This video is very helpful! Thank you so much for explaining this :)
Glad it was helpful!
Well explained, ma'am. Thank you so much for explaining this 🙏
Welcome
Amazing and detailed explanation.
Glad my video is helpful!
Thank you, very good job and explanation.
Glad it was helpful!
Very well explained. Thanks
You are welcome!
Thank you so much, this was really helpful.
Welcome
Thank you, ma'am. Please explain the implementation like this as well.
Noted
thanks a lot for the neat explanation
You are welcome!
Thanks... very helpful...
Glad it was helpful!
Ma'am, I understood the ResNet50 architecture in detail, but I have one question. I am currently making a project on an LDW (lane departure warning) system that has to detect lanes, so how do I use this model? What should be my approach?
Best explanation on the internet. Thanks, Aarohi 🧡
Glad it was helpful 😊
Thank you so much.
You're welcome!
At 4:50 you say we avoid the vanishing gradient by skipping layers; backpropagation, however, does not travel through the skip connections. Can you explain how the vanishing gradient is actually solved?
From my understanding, backpropagation does travel through the skipped layers, but the shortcut also lets them be treated as a single unit. Simply put, you have a direct connection between, say, layer 1 and layer 10. In the end, the gradient passes through fewer connections along that path and thus remains stable.
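For anyone with the same doubt, the standard way to see it (general result, not specific to this video) is that the gradient of a residual block y = F(x) + x always contains an identity term:

\frac{\partial \mathcal{L}}{\partial x} = \frac{\partial \mathcal{L}}{\partial y}\left(\frac{\partial F(x)}{\partial x} + I\right)

So backpropagation flows through both the weighted layers and the shortcut, and the "+ I" keeps the gradient from vanishing even when the layer term becomes very small.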
Great explanation, thanks!!
Glad it was helpful!
Excellent mam, keep it up
Thank you!
Well done, very nicely explained. Keep it up
Thanks a lot
We train F(x) so that it becomes as small as possible, and hence Y becomes as close as possible to the input. F(x) is also acting like a regularisation term.
correct
How do you manage such a big topic in a short video... 👍🏻👍🏻
Gagan Malhi, thank you for the appreciation.
Great tutorial
Glad you think so!
thanks for explaining
You're welcome
Super explanation
Thank you 🙂
Superb explanation. If you explain an NLP series (transformers), it will also be superb.
I will try
Best explanation ever... Can I get this PPT?
useful and simple:)
Thank you 🙂
Very nice!
Glad you like it
Can you PLEASE make a video on attention models in deep learning, just like this one, with a step-by-step and detailed explanation? It will be a great help.
I will try to make it soon
Very Good!!
thankyou
Great explanation, thanks a lot. But I have a question. After the first two convolution layers our input dimension changes to 75*75*64, and this becomes the input to the next three convolution layers. To add it to the output of those three convolution layers, their dimensions must be the same, but the third convolution has 256 filters, which makes the output x-dimension * y-dimension * 256, and that cannot be added to the 75*75*64 input. Yet we use identity blocks even though the dimensions are different. Can you please explain this to me? Thanks once again 😊
Great explanation of the architecture, but I am not able to understand how the channels change from layer to layer.
The channels change from layer to layer because all these channel counts are already fixed as per the ResNet paper. See, ResNet is an algorithm, and all the parameters, such as how many neurons are in a layer, the padding, the pooling size and the channels, are predefined. So we are using those parameters. If you want, you can change the number of channels and play around.
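For reference, these are the channel counts fixed per stage in the original ResNet50 paper (He et al., 2015); the Python dictionary below is just an illustrative way of writing them down:

resnet50_stages = {
    # stage: ((1x1 filters, 3x3 filters, 1x1 filters), number of bottleneck blocks)
    "conv2_x": ((64, 64, 256), 3),
    "conv3_x": ((128, 128, 512), 4),
    "conv4_x": ((256, 256, 1024), 6),
    "conv5_x": ((512, 512, 2048), 3),
}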
You are the best
Thanks
Dear, I couldn't understand the logic of using the identity block in the first convolutional block. The input is 75x75x64, but since the last convolutional layer is 1x1 conv, 256, the output has a depth of 256. Therefore we cannot add the input and the output. Could you please clarify this with an example? Thank you very much for the video.
Thank you
Welcome!
good explanation
Thanks for liking
Thank u so much mam
Most welcome 😊
Thank u so much dii
You are welcome!
Thank you, Aarohi 👍🏻. I have a doubt: so do the residual connections play their part only while updating the weights?
super akka 👏
Thank you 😊
OK, so what I understood about the identity and convolution blocks is that their result is added to the output of a normal block of convolution and pooling in the network to form a residual block, and then the result of that residual block is fed to the next residual block in line... i.e., we are changing the value of the activations of a block explicitly.
Please correct me if I am wrong.
Thanks, ma'am... great explanation.
One doubt:
In lung disease detection our X-ray images will be in grayscale format, so when giving the size [224, 224] + [3], what goes in place of the 3, since 3 is used for colour images?
Just replace the 3 with 1.
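To make that concrete, a minimal sketch assuming NumPy and the usual channels-last layout (the array names and the batch size of 8 are just for illustration):

import numpy as np

# Option 1: train from scratch with a single channel, i.e. size [224, 224] + [1]
gray_batch = np.zeros((8, 224, 224, 1), dtype="float32")

# Option 2: keep a 3-channel pretrained ResNet50 and repeat the grayscale
# channel three times, giving shape (8, 224, 224, 3)
rgb_like = np.repeat(gray_batch, 3, axis=-1)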
Ma'am, I have used ResNet50 for a project, but my guide asked me to make changes to the algorithm. So, ma'am, what changes can I make to the algorithm without disturbing the accuracy too much?
You can remove 1 or 2 layers from the algorithm, or you can change the filter size in any of the layers, or you can change the pool size. You can also change the stride in 1 or 2 layers of the algorithm. When you make changes in only 1 or 2 layers, it will not impact the accuracy much.
Good explanation. But 301/2 can never be exactly 150, so how do we correct it?
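The fractional part is simply dropped: convolution and pooling layers floor the spatial size, so a value like 150.5 is treated as 150. The standard output-size formula (general convolution arithmetic, not specific to this video) is:

\text{output size} = \left\lfloor \frac{n + 2p - f}{s} \right\rfloor + 1

where n is the input size, f the filter or pool size, p the padding and s the stride.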
Can anybody write an answer to my question?
After the max-pooling layer we have 64 channels, and that is connected to the skip connection. Will this input be added to the other input that has 256 channels???
What is happening there?
Hello, ma'am... very good explanation. Can you explain the attention gate mechanism in a video?
Will upload soon
Can we use this ResNet50 model for MRI brain tumour image classification with 4 classes in the target feature?
Yes, you can
Thank you @@CodeWithAarohi
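For anyone attempting the same task, a minimal transfer-learning sketch assuming tf.keras; the 4-class output comes from the question above, while the 224x224 input, the pooling layer and the optimizer are just common defaults, not something prescribed in the video:

import tensorflow as tf
from tensorflow.keras import layers, Model

# Load ResNet50 without its 1000-class ImageNet head
base = tf.keras.applications.ResNet50(include_top=False,
                                      weights="imagenet",
                                      input_shape=(224, 224, 3))
x = layers.GlobalAveragePooling2D()(base.output)     # pool the final 7x7x2048 feature map
outputs = layers.Dense(4, activation="softmax")(x)   # 4 tumour classes
model = Model(base.input, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])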
Nice explanation @Aarohi. Can you please explain the convolutional block once again, i.e. how the output size (28*28*128) is matched with the input size (56*56*64)? There is a little confusion for me.
Hello dear, if we want to change the input of the pretrained ResNet50 network from 224*224 to some higher value, what should we do?
You can write your own ResNet model and change the input size. Check this link: github.com/AarohiSingla/ResNet50/blob/master/3-resnet50_rooms_dataset.ipynb. Here I am using an input size of 64, and you can replace that 64 with your customised image size.
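If you would rather not rewrite the model, a hedged alternative sketch (assuming tf.keras): with include_top=False the stock ResNet50 accepts other spatial sizes, as long as both dimensions are at least 32. The 320x320 value below is only an example, not a recommendation from the video:

import tensorflow as tf

model = tf.keras.applications.ResNet50(
    include_top=False,           # drop the classifier head that is tied to 224x224
    weights=None,                # or "imagenet" to keep the pretrained filters
    input_shape=(320, 320, 3))   # example of a larger input size
model.summary()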
Hi.. great video... I have a small doubt regarding the identity block. Let's consider the 1st skip connection where the identity block works. X = 75*75*64, and we add this to the output of 1x1 conv, 256. But the size of X (the input) does not match the output of 1x1 conv, 256. Then how does the identity work???? Please comment.
Hi.. thanks for the appreciation... Please provide me the output size as well, because we apply the identity block when the input size and output size are the same. As per your comment, x = 75*75*64 and you are applying 1*1 conv, 256 in the shortcut path. But what is the size of the output image? Only then can we check whether 1*1 conv, 256 gives us the correct size or not.
@@CodeWithAarohi The 75*75*64 input has depth = 64, and after the 1*1 conv, 256 layer the depth = 256. How can you add them?
@@poojakabra1479 Correct.. you can't add 75*75*64 to the output of the first convolution block (1*1 conv, 256). If you look at the ResNet architecture code, we use a 1*1 conv with stride = 1 in the first skip connection, and the filter count 64 is multiplied by 4 = 256, so the shortcut output is 75*75*256, which can then be added to the output from the 1*1 conv, 256 layer, which is also 75*75*256.
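A minimal Keras sketch of that first block, assuming tf.keras; the function name and the default filter count of 64 are illustrative. The 1x1 convolution on the shortcut expands 64 channels to 64*4 = 256, so the Add at the end sees two 75*75*256 tensors:

from tensorflow.keras import layers

def conv_block(x, filters=64, stride=1):
    # Shortcut path: 1x1 conv with stride 1 maps 75x75x64 -> 75x75x256
    shortcut = layers.Conv2D(filters * 4, 1, strides=stride)(x)
    shortcut = layers.BatchNormalization()(shortcut)

    # Main path: 1x1 -> 3x3 -> 1x1 bottleneck, ending with filters*4 = 256 channels
    y = layers.Conv2D(filters, 1, strides=stride)(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters * 4, 1)(y)
    y = layers.BatchNormalization()(y)

    # Both tensors are now 75x75x256, so the element-wise addition works
    y = layers.Add()([y, shortcut])
    return layers.ReLU()(y)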
Can you please explain how this ResNet50 is applied to speech?
Thank you for the good explanation. I have one doubt: what will the filter matrix contain, I mean which values??
Those are the filter weights; they start out as small random values and are learned during training from the image data.
If we want to make f(x) equal to zero, then why don't we just remove these layers? If their output is zero, why don't we just remove them? I didn't get this point. Can anyone please explain what I missed?
I am pretty new to the whole deep learning thing, but what I understood from the explanation is that it is not that we are trying to achieve f(x) = 0; more accurately, we are trying to achieve y = f(x) + x for each residual block. Traditionally we try to bring SGD to a local minimum, and we can only do that if f(x) is close to the desired output, but here we are changing the activations explicitly by adding x to f(x) rather than relying on the network alone to reach a local minimum. To answer what you asked: according to the paper, the identity x does not add any extra parameters, so it is essentially free, and during backpropagation the network still adjusts the weights of the original layers, so we can't remove them.
Hope this was helpful.
Do the skip layers get trained during forward propagation?
The identity skip connection itself has no weights, so there is nothing to train in the shortcut; the layers on the main path are still trained as usual. That is the logic of ResNet.
Is the number of layers in ResNet50 the same as in ResNet50V2?
Could you please add English subtitles? I would appreciate it.
Sure, I will do
Hi ma'am, please make videos on SKNet and DeiT.
Noted!
@@CodeWithAarohi Ma'am, I have a PhD interview and I have some doubts. Can I contact you, ma'am, please, if possible?
How would we get f(x) if we don't pass through the layers?
You have so much knowledge about AI, so why are you not working at a big brand or an MNC?
I like to work the way I am working now 😊
Where did the formula come from? Please explain.
I have a serious problem building the ResNet model, please let me know.
what is your query?
Thank you so much, ma'am. Can you please change the auto-generated subtitle language from Hindi to English for this video?
Do you have any code for modulation classification?
no
Fantastic
Thanks
Well done aarohi
Thankyou
Very nice explanation. Ma'am, can you share the PPT?
Thanks for liking my work, and I am sorry, I don't have this PPT with me. But yes, if you want the ResNet code, that I can share.
OK ma'am, please share.
@@shubhamchoudhary3580 github.com/AarohiSingla/ResNet50/blob/master/3-resnet50_rooms_dataset.ipynb