Join My AI Career Program
www.nicolai-nielsen.com/aicareer
Enroll in My School and Technical Courses
www.nicos-school.com
Nice explanation! Thanks for the video
Thank you very much! Really appreciate it
You can use Continuous Gray Code Optimization, which is a simple type of evolutionary algorithm. The advantage is that you can use sparse lists of mutations and split the training data over multiple GPUs very easily. Each GPU just returns the cost for its part of the training data, and the costs are then summed together to see if the mutation list was a good idea. There are also Fast Transform fixed-filter-bank neural nets that not so many people know about.
Great comment!
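For anyone curious, here's a minimal NumPy sketch of that loop. A plain Gaussian perturbation stands in for the actual Gray code mutation step, the data shards stand in for per-GPU splits, and the linear least-squares cost is just a placeholder:

import numpy as np

def cost(weights, shard):
    # Placeholder cost on one shard of the data (each shard could
    # live on its own GPU and return just this one scalar).
    x, y = shard
    return np.sum((x @ weights - y) ** 2)

def evolve(weights, shards, steps=1000, n_mut=8, scale=0.1):
    # Sparse-mutation loop: perturb a few weights, sum the per-shard
    # costs, keep the mutation list if the total went down, else revert.
    rng = np.random.default_rng(0)
    best = sum(cost(weights, s) for s in shards)
    for _ in range(steps):
        idx = rng.choice(weights.size, size=n_mut, replace=False)
        old = weights[idx].copy()
        weights[idx] += scale * rng.standard_normal(n_mut)
        total = sum(cost(weights, s) for s in shards)
        if total < best:
            best = total              # the mutation list was a good idea
        else:
            weights[idx] = old        # undo the sparse mutation
    return weights, best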
I would like to ask how to use the loss, optimizer, and activation functions in MLP classification. Thank you.
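In case it's useful, here's a minimal Keras sketch of how the three fit together in an MLP classifier. The 20 input features and 10 classes are placeholder assumptions, not numbers from the video:

import tensorflow as tf
from tensorflow.keras import layers

# Hidden layers use a nonlinear activation (relu); the output layer
# uses softmax so the network emits one probability per class.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

# The loss scores the predicted probabilities against integer labels,
# and the optimizer uses its gradient to update the weights.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(x_train, y_train, epochs=10)  # then train on your own data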
God bless you
Thanks for watching! God bless you
You are great!
Thank you!
Hey Nielsen, a very nice presentation. Could you kindly help me convert some PyTorch code to TensorFlow?
It's not
from keras.optimizers import Adam
anymore, but
from keras.optimizer_v2.adam import Adam
You are totally right man, thanks for the tip
from tensorflow.keras.optimizers import Adam
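For context, the tensorflow.keras path is the public API, while keras.optimizer_v2.adam is an internal module that can move between releases, so this import is the safer choice. A minimal usage sketch (the one-layer model is just a placeholder):

import tensorflow as tf
from tensorflow.keras.optimizers import Adam

# Passing an optimizer instance lets you set the learning rate explicitly.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer=Adam(learning_rate=1e-3), loss="mse")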