Very useful, I wish more people did this sort of thing. I have one recommendation: don't talk about the line at the very top of the screen-capture area, but rather at least a third of the way down, so viewers can see some context. It also means the line hasn't scrolled out of view!
Thank you, and thanks for the tip! I'm sorry about that; I'm still getting comfortable with recording these videos. I didn't realize when I made this video that it was cropping out the top like that.
This is quite intuitive and interesting. I hope your next video will be on U-Net for segmentation.
Thank you so much! Sounds like a great idea!
Very intuitive video, loved it. I just have one question: where would I need to change the code if, for example, I wanted to use my own dataset for DeepFake identification? @Henry
Hey Henry, great work. Quick question: assuming I wanted to work with a 1D signal, do I just change 'Conv2D' to 'Conv1D' and then things should work out? I want to work with time series data, not image data.
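In case it helps while you wait for a reply: broadly yes, the residual-block pattern carries over, with Conv2D and 2D pooling swapped for their 1D counterparts and an input shape of (timesteps, channels). A minimal sketch using the tf.keras functional API (the shapes, filter counts, and class count below are illustrative, not from the video):

```python
# Hedged sketch: a 1D residual block for time series, mirroring the 2D version.
from tensorflow.keras import layers, models

def residual_block_1d(x, filters):
    shortcut = x
    y = layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv1D(filters, 3, padding="same")(y)
    if shortcut.shape[-1] != filters:
        # Project the shortcut with a 1x1 convolution when channel counts differ
        shortcut = layers.Conv1D(filters, 1, padding="same")(shortcut)
    return layers.Activation("relu")(layers.Add()([y, shortcut]))

inputs = layers.Input(shape=(1000, 1))                # e.g. 1000 timesteps, 1 channel
x = residual_block_1d(inputs, 64)
x = layers.GlobalAveragePooling1D()(x)                # 1D counterpart of 2D global pooling
outputs = layers.Dense(10, activation="softmax")(x)   # 10 classes, illustrative
model = models.Model(inputs, outputs)
```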
Hi! Thanks for the explanation! Can you please point me to a place where someone uses this (raghakot's) implementation of ResNet for their own dataset? I'm having trouble getting it to work on my own dataset of images.
Also, this might be a bit silly, but can we input matrices instead of images to the ResNet? My input consists of matrices (all of the same dimensions, 513x125) that represent spectrograms (images) of audio files.
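In case it helps: in practice, yes; a 2D matrix can be fed to a 2D ResNet as a single-channel "image" by adding a channel axis. A hedged sketch (the data is a placeholder, and the builder call is my recollection of raghakot's keras-resnet API, so double-check it against the README):

```python
# Hedged sketch: treat each 513x125 spectrogram matrix as a single-channel image.
import numpy as np

spectrograms = np.random.rand(32, 513, 125).astype("float32")  # placeholder batch of 32
x = spectrograms[..., np.newaxis]  # shape becomes (32, 513, 125, 1)

# With raghakot's keras-resnet, the builder takes an input shape and a class
# count; if I recall the README correctly, the shape is (channels, rows, cols):
# model = ResnetBuilder.build_resnet_18((1, 513, 125), 10)  # 10 classes, illustrative
```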
Very interesting! Such practical videos are also very helpful. Intermediate/advanced video topics in AI are so rare; you are great!
For example, I wouldn't have thought about using learning rate scheduling with an adaptive optimizer like Adam. Do you have experience with these hyperparameter choices (Adam, Nadam, RAdam, SGD... with or without LR scheduling, there are so many!)? Even with something like Hyperband (/BOHB) (thanks for your video on this, btw ;-) it's difficult and expensive to test. And Neural Architecture Search focuses more on the bigger picture and just assumes the right hyperparameters at the cell level?
Thank you so much!! I think LR scheduling seems to work across tasks, although there is definitely some interaction with the optimizers, particularly between things like decreasing learning rates and momentum. However, given the current state of understanding of these optimizers, I think it's best to throw heuristics at them, like using search algorithms on the interaction between the β1, β2, and momentum parameters and a learning rate decay or cosine cycling. I recently read a paper on the LEAF evolutionary algorithm from Liang et al. (2019) which demonstrates that architecture search is more beneficial than hyperparameter optimization!
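For anyone curious what the cosine-cycling idea looks like in code, here is a minimal sketch with Adam in tf.keras (all values below are illustrative assumptions, not recommendations from the video):

```python
# Hedged sketch: cosine learning-rate cycling with Adam via a Keras callback.
import math
import tensorflow as tf

LR_MAX, LR_MIN, PERIOD = 1e-3, 1e-5, 10  # assumed hyperparameters

def cosine_schedule(epoch, lr):
    # `lr` is the current rate passed in by the callback; unused here.
    # Anneal from LR_MAX to LR_MIN along a half cosine, restarting every PERIOD epochs.
    t = (epoch % PERIOD) / PERIOD
    return LR_MIN + 0.5 * (LR_MAX - LR_MIN) * (1 + math.cos(math.pi * t))

# model.compile(optimizer=tf.keras.optimizers.Adam(LR_MAX), loss="categorical_crossentropy")
# model.fit(x_train, y_train, epochs=50,
#           callbacks=[tf.keras.callbacks.LearningRateScheduler(cosine_schedule)])
```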
You've somehow cropped the video, so when you speak, viewers cannot see what you're talking about.
I apologize for that; this was my first video of this kind, and I wasn't aware of the cropping in the iMovie interface! I will return to these kinds of tutorials in the future and make them much better. Thank you for the advice!
@connor-shorten No worries, loved the video, forgot to mention that of course. Keep 'em coming!
@peterstorm_io Lol, thank you so much!!
so complex