You explained the concept in the best way.
Very simple way of explaining deep learning. Awesome playlist
Thanks and glad u liked. Your comments keep me moving
Your voice is similar to Ravi Ashwin's. Great explanation
Oh ho...thanks buddy..
It is a good explanation in a simple way. Thank you.
Thanks and glad you liked it
Nice effort, Appreciated!
Hi, thanks for your really nice explanation. Would you please explain a bit more about why we have 96 filters?
You have made it very easy to understand
Thank you
Thanks for the explanation in a simple way
Thanks and glad u liked it
Nicely explained and very easy to understand
Thanks
Do the table values change? Do the stride and image sizes from the AlexNet architecture table in the video remain the same irrespective of the input image?
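Not the author, but a rough way to check this yourself: the kernel sizes and strides in the table are fixed by the architecture, while every output size gets recomputed from whatever input you feed in. A minimal sketch (the 299 input is just a hypothetical alternative, not from the video):

```python
# Sketch: kernel sizes and strides stay fixed, but the output sizes in the
# table are recomputed from the input you actually feed in.
def out_size(in_size, kernel, stride):
    # standard convolution/pooling output-size formula (no padding)
    return (in_size - kernel) // stride + 1

for inp in (227, 299):                 # 299 is a hypothetical alternative input
    c1 = out_size(inp, 11, 4)          # conv1: 11x11 kernel, stride 4
    p1 = out_size(c1, 3, 2)            # pool1: 3x3 window, stride 2
    print(inp, "->", c1, "->", p1)     # 227 -> 55 -> 27 (the table values)
```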
Great explanation sir!!
Keep watching
Hello Shriram, you have explained it very well. I have a question: conv layer 2 has a padding of 2 as per the architecture, but I see you have mentioned the padding as 'valid' for this layer. Could you kindly clarify this? Am I missing something here?
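Not the author, but for what it's worth: the 27x27 output in the table only works out with the paper's padding of 2. A minimal Keras sketch of two equivalent ways to express that padding (my own sketch, not the video's code):

```python
import tensorflow as tf

# Keras Conv2D only accepts 'valid' or 'same', so the paper's explicit padding of 2
# is usually written as an explicit zero-padding layer followed by a 'valid' conv ...
x = tf.keras.Input(shape=(27, 27, 96))
y = tf.keras.layers.ZeroPadding2D(padding=2)(x)                      # 27x27 -> 31x31
y = tf.keras.layers.Conv2D(256, 5, strides=1, padding='valid')(y)    # back to 27x27

# ... or, since the stride is 1, simply as padding='same', which gives the same output:
z = tf.keras.layers.Conv2D(256, 5, strides=1, padding='same')(x)
print(y.shape, z.shape)  # both (None, 27, 27, 256)
```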
Sir, thank you very much!
Thank you
You can train this AlexNet in TensorFlow, but TensorFlow doesn't provide pretrained weights for AlexNet; only PyTorch provides them.
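For anyone looking for those weights, a minimal PyTorch sketch of loading the pretrained AlexNet, assuming a recent torchvision (the `weights=` API; older versions take `pretrained=True` instead):

```python
import torch
import torchvision.models as models

# Sketch: load ImageNet-pretrained AlexNet (torchvision >= 0.13 API).
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()

# quick shape check with a dummy ImageNet-sized input
with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 1000])
```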
Hello sir, it is very informative. I want to ask: is it possible for you to make a video on training the AlexNet architecture on the CIFAR-10 dataset, explaining how to rescale the images from 32x32 to 227x227? That would be very helpful.
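Not the uploader, but while waiting for such a video, here is a rough sketch of the rescaling part, assuming TensorFlow's built-in CIFAR-10 loader; resizing on the fly inside a tf.data pipeline keeps memory usage manageable:

```python
import tensorflow as tf

# Sketch: upscale CIFAR-10 images from 32x32 to 227x227 per batch, not all at once.
(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()

def preprocess(image, label):
    image = tf.image.resize(tf.cast(image, tf.float32) / 255.0, (227, 227))
    return image, label

train_ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
            .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
            .batch(32)
            .prefetch(tf.data.AUTOTUNE))
```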
This is one of the best.
thanks
You're welcome!
Very nicely done
You didn't use batch normalization, or Local Response Normalization as per the paper.
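For anyone who wants to add it, a small Keras sketch of both options (my addition, not the video's code):

```python
import tensorflow as tf

# The paper's Local Response Normalization, wrapped for use in a Keras model,
# with the hyperparameters reported in the AlexNet paper ...
lrn = tf.keras.layers.Lambda(
    lambda t: tf.nn.local_response_normalization(
        t, depth_radius=5, bias=2.0, alpha=1e-4, beta=0.75))

# ... or the more common modern substitute, batch normalization:
bn = tf.keras.layers.BatchNormalization()
```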
Nice tutorial and demo, sir. I want to ask about the output layer. In this video the architecture's output layer uses 1000 classes; if we have 4 classes, do we only need to change the output layer to 4 classes?
Do you know the answer? I have the same question.
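Not the author, but yes: only the last fully connected (softmax) layer changes to 4 units; everything before it stays the same. A rough Keras sketch of the classifier head (layer sizes follow the standard AlexNet description, not the video's exact code):

```python
import tensorflow as tf

num_classes = 4  # instead of 1000

# Only the final layer depends on the number of classes.
head = tf.keras.Sequential([
    tf.keras.layers.Dense(4096, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(4096, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(num_classes, activation='softmax'),  # 4-way output
])
```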
Hey bro, you mentioned transfer learning, but where are the weights for this? I think we are using only the architecture of AlexNet. How do we apply the weights?
Very well done..
clearly explained !
Thanks hasina
Can we change parameters like the input size, layers, feature maps, kernel size, stride, or activation? Will changing any of them change the AlexNet architecture? Is it necessary to use the same parameters described in the video for the architecture to be called AlexNet? If I change the parameters, will it still be called AlexNet?
Let me know if you have found the answer.
Sir, it is very informative. Thank you very much.
Sir, at the second convolution layer it is said that the size is 27x27x256, but it should be 23x23x256, right? In the program output it is correctly calculated, but in all the available slides it is 27x27x256. There are corresponding changes in the following layers (i.e., in the next layer it is 11x11, then 9x9, then 7x7, etc.).
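Not the author, but the discrepancy comes from the padding: the slides follow the paper, which pads conv2 by 2 pixels, while a 'valid' convolution in code gives 23x23. A quick check with the output-size formula:

```python
# Conv output size = (in - kernel + 2*pad) / stride + 1, for conv2 (5x5, stride 1):
print((27 - 5 + 2 * 2) // 1 + 1)   # 27 -- slides/paper, which pad conv2 by 2
print((27 - 5 + 2 * 0) // 1 + 1)   # 23 -- 'valid' (no padding), as in the program output
```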
Very informative and useful, sir; very useful for my research. Can you explain the drawbacks of these pretrained models, like GoogLeNet, Inception, and ResNet?
This is the best I have found on the internet, but it would have been better if you had implemented it with some example.
I have done that. Video follows shortly brother.
@ShriramVasudevan Please upload it, bro, we are waiting...
Very nice sir...
Thank you very much
You talked about augmentation? I can't find it in the code...
Augmentation is dealt with separately.
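For anyone else looking: a minimal Keras sketch of what a separate augmentation block could look like (assuming TF 2.6+ where these layers live in tf.keras.layers; the layer choices are mine, not the author's):

```python
import tensorflow as tf

# Small augmentation block applied in front of the AlexNet input.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),   # mirror images left-right
    tf.keras.layers.RandomRotation(0.05),       # small random rotations
    tf.keras.layers.RandomZoom(0.1),            # slight random zoom in/out
])

images = tf.random.uniform((8, 227, 227, 3))    # dummy batch just for the demo
augmented = augment(images, training=True)      # training=True enables the randomness
```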
Is it possible to build the entire model without using the library?
Yes, but it's a bit challenging.
Can you please provide example code?
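Not the author, but to give a flavour of what "without the library" involves, here is a tiny NumPy-only sketch of just the convolution step (single channel, one filter). A full AlexNet would need many such pieces plus backpropagation, which is why it gets challenging:

```python
import numpy as np

# Core convolution operation written with plain loops (no deep learning library).
def conv2d(image, kernel, stride=1):
    h, w = image.shape
    k = kernel.shape[0]
    out = np.zeros(((h - k) // stride + 1, (w - k) // stride + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i * stride:i * stride + k, j * stride:j * stride + k]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.random.rand(227, 227)              # single-channel toy input
kernel = np.random.rand(11, 11)             # one 11x11 filter, stride 4 as in conv1
print(conv2d(img, kernel, stride=4).shape)  # (55, 55), matching the table
```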
Can you please share the code of AlexNet? It would help a lot.
I will upload it to GIT tomorrow... username shriramkv
@ShriramVasudevan Please add the link to it in the description.
Hey, thanks for the explanation.
Regarding the input dimension of the AlexNet network: I tried implementing it in TensorFlow and it only works if the size is 227x227. But many of the papers following it have used different dimensions; for example, the SPPNet paper says it is 224x224x3, and the R-CNN paper uses 227x227. Can you please explain which one should be believed?
To my knowledge... it should work with any dimension. There is no hard rule about the dimensions bro.
The larger the image dimension, the more time it's gonna take for training and inference!
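One more data point on this: the paper's 224x224 only produces the reported 55x55 conv1 output if some implicit padding is assumed, which is why many implementations (including R-CNN) use 227x227. A quick check for conv1 (11x11 kernel, stride 4, no padding):

```python
# Conv output size = (in - kernel) / stride + 1
print((224 - 11) / 4 + 1)   # 54.25 -- doesn't fit cleanly without extra padding
print((227 - 11) / 4 + 1)   # 55.0  -- gives the 55x55 feature map in the table
```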
Sir, very nice explanation, and the slides as well. Thanks, sir. Could you kindly share the slides?
Thanks
Thanks so much
Thanks Brother.
nice
Thanks and glad u liked
Can you please share the code?
Please pull it from my Git: @shriramkv
Please provide the code, sir.
Hello, I need the code.
👍
Thanks and glad you liked it
Why 227x227? The paper itself says 224x224. Could you please explain?
Please, sir, can you send me the code?
It's in my GIT.. @shriramkv