Your voice is similar to Ravi Ashwin's. Great explanation!
Oh ho...thanks buddy..
Hi, thanks for your really nice explanation. Could you please explain a bit more why we have 96 filters?
thanks
You're welcome!
Hello Shriram, you have explained it very well. I have a question: conv layer 2 has a padding of 2 as per the architecture, but I see you have mentioned the padding as 'valid' for this layer. Could you kindly clarify? Am I missing something here?
Very simple way of explaining deep learning. Awesome playlist
Thanks, and glad you liked it. Your comments keep me moving.
Do the table values change? Do the stride and image size from the AlexNet architecture table in the video remain the same irrespective of the input image?
Hello sir, it is very informative. I want to ask: is it possible for you to make a video on training the AlexNet architecture on the CIFAR-10 dataset, explaining how to rescale the images from 32x32 to 227x227? This would be very helpful.
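In the meantime, here is a minimal sketch of the rescaling step, assuming TensorFlow (the random array stands in for a real CIFAR-10 batch, which would come from `tf.keras.datasets.cifar10`):

```python
import numpy as np
import tensorflow as tf

# Stand-in batch shaped like CIFAR-10: 8 images of 32x32x3
images = np.random.rand(8, 32, 32, 3).astype("float32")

# Bilinear upsampling to AlexNet's expected 227x227 input size
resized = tf.image.resize(images, (227, 227))
print(resized.shape)  # (8, 227, 227, 3)
```

The same call works inside a `tf.data` pipeline, so the whole dataset doesn't have to be resized in memory at once.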
Great explanation sir!!
Keep watching
Sir, it is very informative. Thank you very much.
Sir, at the second convolution layer it is said that the size is 27x27x256, but shouldn't it be 23x23x256? In the program output it is calculated that way, but in all the slides available it is 27x27x256. There are corresponding changes in the following layers (i.e., in the next layer it is 11x11, then 9x9, then 7x7, etc.).
You can train this AlexNet in TensorFlow, but TensorFlow won't provide pretrained weights for AlexNet; only PyTorch provides that.
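On the 27x27 vs 23x23 point above, a quick sanity check with the standard output-size formula settles it (a sketch, not code from the video):

```python
# out = (in + 2*pad - kernel) // stride + 1
def conv_out(size, kernel, stride, pad):
    return (size + 2 * pad - kernel) // stride + 1

# Conv2 in AlexNet: 27x27 input, 5x5 kernel, stride 1, padding 2
print(conv_out(27, 5, 1, 2))  # 27 -> matches the slides, given padding 2
# With padding 0 ('valid'), you would indeed get 23
print(conv_out(27, 5, 1, 0))  # 23
```

So both numbers are "right" for different padding settings: the slides assume padding 2, while a 'valid' implementation produces the 23x23 seen in the program output.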
Nicely explained and very easy to understand.
Thanks
Hey bro, you mentioned transfer learning; where are the weights for this? I think we are using only the architecture of AlexNet. How do we apply the weights?
Nice effort, appreciated!
It is a good explanation in a simple way. Thank you.
Thanks and glad you liked it
Very informative and useful, sir; very useful for my research. Can you explain the drawbacks of these pretrained models, like GoogLeNet, Inception, and ResNet?
You have made it very easy to understand
Thank you
You didn't use batch normalization or, as per the paper, local response normalization.
Nice tutorial and demo, sir. I want to ask about the output layer. In this video the architecture's output layer uses 1000 classes; if we have 4 classes, do we change the output layer to 4 classes?
Did you find the answer? I have the same question.
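A minimal sketch of the change, assuming Keras: only the final Dense layer needs to differ, everything before it in the AlexNet stack stays as in the video. (The snippet below builds just the head, with the 4096-unit FC output as its input.)

```python
from tensorflow import keras

head = keras.Sequential([
    keras.Input(shape=(4096,)),                   # output of the last FC-4096 layer
    keras.layers.Dense(4, activation="softmax"),  # 4 classes instead of 1000
])
print(head.output_shape)  # (None, 4)
```

The loss stays categorical cross-entropy; only the number of units in the last layer follows your number of classes.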
Can we change parameters like input size, layers, feature maps, kernel size, stride, or activation? Will changing any of them change the AlexNet architecture? Is it necessary to use the same parameters described in the video for the architecture to be called AlexNet? If I change the parameters, will it still be called AlexNet?
lemme know if you have found the answer
Thanks for the explanation in a simple way
Thanks, and glad you liked it.
Sir, thank you very much!
Thank you
Hey, thanks for the explanation.
Regarding the input dimension of the AlexNet network: I tried implementing it in TensorFlow and it only works if the size is 227x227. But many of the papers following it have used different dimensions; the SPPNet paper says 224x224x3, and the R-CNN paper uses 227x227. Can you please explain which should be believed?
To my knowledge... it should work with any dimension. There is no hard rule about the dimensions bro.
The larger the image dimension, the more time it's gonna take for training and inference!
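One common explanation for the 224 vs 227 discrepancy, as a quick arithmetic sketch (my own, not from the video): conv1 is 11x11 with stride 4 and no padding, and the first feature map is supposed to be 55x55.

```python
# Output size of conv1 without padding: (in - kernel) / stride + 1
def conv1_out(size, kernel, stride):
    return (size - kernel) / stride + 1

print(conv1_out(227, 11, 4))  # 55.0  -> clean 55x55 feature map
print(conv1_out(224, 11, 4))  # 54.25 -> not an integer without extra padding
```

So 227 makes the arithmetic work out exactly; with the paper's 224 you need some implicit padding to reach 55x55, which is why many implementations quietly use 227.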
You talked about augmentation? I can't find it in the code...
Augmentation is dealt with separately.
Is it possible to build the entire model without using the library?
Yes, but it's a bit challenging.
Can you please share the code for AlexNet? It would help a lot.
I will upload it to my GitHub tomorrow. Username: shriramkv.
@ShriramVasudevan Please add the link to it in the description.
This is one of the best.
Can you please provide example code?
Very nicely done
This is the best I have found on the internet, but it would have been nice if you had implemented it with some example.
I have done that. Video follows shortly brother.
@ShriramVasudevan Please upload it, bro, we are waiting...
Clearly explained!
Thanks, Hasina.
Very well done..
Sir, very nice explanation, and the slides as well. Thanks, sir. Could you kindly share the slides?
Thanks
Very nice sir...
Thank you very much
Please provide the code, sir.
Hello, I need the code.
Thanks so much.
Thanks Brother.
Can you please share the code?
Please pull it from my GitHub: @shriramkv.
👍
Thanks and glad you liked it
nice
Thanks, and glad you liked it.
Why 227x227? The paper itself says 224x224. Could you please explain?
Please, sir, can you send me the code?
It's on my GitHub: @shriramkv.