Hi Ian, do you know if there is some kind of rule of thumb that relates the number of examples per class needed for training Inception/Residual networks? For example, I have a case where, with the same dataset, just changing from a 50-layer residual network to an 18-layer one improved accuracy. I know this looks like a typical case of overfitting, but how deep should I start when I don't know a priori which network to use?
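For readers wondering how to run this kind of comparison in practice, here is a minimal sketch of the "start shallow, go deeper only if you are underfitting" approach, assuming PyTorch/torchvision and hypothetical `train_loader`/`val_loader`/`num_classes` placeholders (not from the original discussion):

```python
import torch
import torch.nn as nn
from torchvision import models

def build_model(depth: int, num_classes: int) -> nn.Module:
    # Pick the backbone by depth; both ResNet variants expose a final `fc` layer we can replace.
    backbone = (models.resnet18(weights="IMAGENET1K_V1") if depth == 18
                else models.resnet50(weights="IMAGENET1K_V1"))
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone

def evaluate(model: nn.Module, loader, device: str = "cpu") -> float:
    # Plain top-1 accuracy over a loader of (images, labels) batches.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            preds = model(x.to(device)).argmax(dim=1)
            correct += (preds == y.to(device)).sum().item()
            total += y.numel()
    return correct / total

# Usage sketch: fine-tune both depths on the same data and keep the smaller model
# unless the deeper one clearly wins on held-out accuracy.
# for depth in (18, 50):
#     model = build_model(depth, num_classes=10)
#     ...train for a few epochs...
#     print(depth, evaluate(model, val_loader))
```

This is only an illustration of comparing depths on equal footing, not a statement of any rule of thumb from the video.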
Sorry Ian, another question: when you mention "data defects", are you referring to cases where you have, for instance, two similar inputs that belong to different classes?
Super straightforward and practical guide. It's hard to find this kind of simple but useful advice. Thanks for sharing your wisdom!
There's no such thing as super straightforward, it's just straightforward...
Excellent presentation!!! Thank you.
Great video!
Practical applications are the goal.