I like the 'extensive' library of books he has on that shelf above and behind him
Word Press Forms is all you need!
This guy made all that really easy to follow. I admire his ability to explain such complicated things. He's really good at identifying and skipping over the irrelevant stuff, and focusing on the core problem/solution.
Neat, I just did my bachelor thesis on convolutional neural networks. We built and trained a sign language interpreter that worked pretty well. I can affirm that neural networks are equal parts wisdom and witchcraft.
+Alcesmire Aha! Now that I know neural networks exist, I can start my Mathematics thesis with:
"Let N be a neural network..."
+Alcesmire That's actually pretty awesome! It could seriously improve the lives of deaf people, especially seeing as how we're moving more and more into the whole voice-controlled, NLP virtual assistant world.
Frankly, I don't see the wisdom part. Sure, when you design a NN, you do have to scale it correctly to the problem and so on, but once you've got everything set up, the rest is magic.
Aren't neural networks just math? I've studied the backpropagation algorithm, stacked neural networks, etc., and the thing that struck me is that it's all just math that you learn in an engineering course, especially stats and linear algebra to solve equations. Why did you say it's "witchcraft"?
Sreekar Nimbalkar
Because NNs are large sets of linear or non-linear equations with dynamically generated coefficients that mean absolutely nothing to the designers of NNs, but somehow work. Hence witchcraft.
I mean, I could basically simulate a NN on paper and it would still work, while I would still have no idea why it works.
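For what it's worth, the "simulate it on paper" point is easy to make concrete. Here's a minimal sketch of a two-layer forward pass (the weights are invented for illustration; a trained net's coefficients would come from backpropagation and mean just as little to us). Every step is ordinary arithmetic, yet why a particular set of such coefficients works stays opaque:

```python
import numpy as np

x = np.array([0.5, -1.2, 3.0])             # input vector
W1 = np.array([[0.2, -0.4, 0.1],
               [0.7, 0.3, -0.9]])          # 2 hidden neurons, 3 inputs each
b1 = np.array([0.1, -0.2])
W2 = np.array([[1.5, -0.6]])               # 1 output neuron
b2 = np.array([0.05])

h = np.maximum(0, W1 @ x + b1)             # weighted sums + ReLU
y = W2 @ h + b2                            # output: just more arithmetic
print(y)                                   # doable by hand, still "witchcraft"
```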
Convolutional neural networks are one of those things that really need some visuals. I find it really hard to 'grok' when it's explained in a book or via speech, but once you get a visual example it's kind of hilariously simple and scarily plausible.
damn, I thought I was the only one who read that book
Dog Barksley which book?
forshadowing
i love watching mike out of all the other ppl on this channel. this man just sounds right
This is by far the best video I've watched on CNNs and I've watched 4 others. It really describes the back propagation and image compression to a single dimension.
I couldn't have explained it better, given the limitations of a YouTube video. Well done Computerphile!
Not long ago, I read about a machine learning system that was able to classify planes, trees, and people in nearly live video, all without ever having any hard-coded feature sets. The math was way over my head (despite being a computer scientist, specialized areas can still stump me at times). Now I look back at it, and it was in fact a CNN being used! This was a few years ago now, but if they just started becoming popular in 2012, that makes sense.
Thank you for the higher-level explanation that allows me to understand it after all this time XD
Four years down the line and look where we are now!
What a time to be alive!
Two Minute Papers with Dr. Gibberish
I just discovered this channel, saw a bunch of videos and didn't come across a single boring one.
That was great as always! Now, eight years later, a video about Vision Transformers would be epic.
I wrote a neural network framework in Java that allows you to build neural networks with arbitrary shapes and structures. You can chain layers together and as long as you implement forward and backward functions, your layer will work. I implemented a lot of layers (fully connected, convolutional, pooling, etc.)
And yes it's Java but the process was a valuable learning experience, much better for me than learning keras or something without knowing how it works
The best way of learning is building stuff that you want to understand😉
@@IgorRoztr absolutely agree. i wouldn't say i understand something unless i've built something like it
Hey... I tried to implement a CNN from scratch using Python, but it's not working properly. I found a few issues but not the solutions for them. I searched every website but couldn't find a proper solution. Can you clarify my doubts, please?
I just want to check whether what I know is right or wrong, and to find solutions for these issues.
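Hard to say without seeing the code, but a common first sanity check is comparing your convolution routine against a known-good reference. A minimal numpy sketch (the helper name conv2d_valid is just for illustration), assuming a "valid" convolution with no kernel flipping, i.e. the cross-correlation that CNN layers actually compute:

```python
import numpy as np
from scipy.signal import correlate2d  # reference implementation

def conv2d_valid(image, kernel):
    # Naive sliding-window "valid" cross-correlation.
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kH, j:j+kW] * kernel)
    return out

img, k = np.random.rand(8, 8), np.random.rand(3, 3)
# If this assertion fails, the bug is likely in the sliding-window indexing.
assert np.allclose(conv2d_valid(img, k), correlate2d(img, k, mode="valid"))
```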
So happy for Frodo Baggins and his new career as AI teacher
Couldn't help but notice the WPF C# book; nice to see someone else specialised in these two things.
I don't know why, but Dr Mike explains things so nicely and clearly. Thanks!
"...check whether the photo is of a bird."
"give me a research team and five years"
YES!!!!!!!!! GRANT MONEY!!! PAY ME! This is an ACADEMIC!!!!!!!!!!! $$$$$$$$$$$$$$
this is the best comment i've ever read. I feel you bro. Lived it!!!!!!!
SaltyBrains I don’t get how that changes the meaning at all.
Another xkcd fan?
@@KnakuanaRka That changes meaning because a computer did it. Now you can automate that process and probably hundreds such processes. Automated object detection can be used in a multitude of processes and industries. Google it and surprise yourself.
Watching this again really helped me improve my network. Thanks
Might be the best intuitive description I've come across!
Thank you Computerphile for the great videos you put up.
My friend had some project with ANN. And I have some project with image analysis. I never knew both can be linked with this technique! This might help me in my research!
Thanks Sean, thanks Dr Mike!
Kudos to Mike, videos with him are always fun and well explained!
Still, after 4 years, this is the best explanation of CNNs on YouTube...
Dang kernel convolutions.
My least favorite thing to happen when I'm making popcorn.
Wow, interesting concept, nicely explained ... well done!
So basically the whole convolution part is to "reduce" the dimensions, to then pass the information into a deep net?
Really awesome videos ! Extremely addictive :)
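Pretty much, with the caveat that the convolutions also *learn* which features to extract while they shrink things. A rough Keras-style sketch of that pipeline (layer sizes are my own invention, not from the video):

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),  # learn local features
    layers.MaxPooling2D(2),                   # downsample
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),                         # hand off to an ordinary deep net
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),   # class scores
])
model.summary()  # watch the dimensions shrink layer by layer
```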
update: they are indeed a big deal
Wow, "convolution process" just sounds a lot like abstraction that brains do....I think these guys are really onto something here...I dig it
+Christopher Willis really interesting way of looking at it.. what an exciting time to be alive!
+bibbly bobbly Thanks! Very interesting. I once read that the patterned hallucinations of LSD are probably caused by the acid disrupting the signal in our retina. Pretty interesting stuff.. Is that plausible in your opinion?
+bibbly bobbly thanks well said
As far as I know, the kernels that CNNs learn on the first hidden layer (without prior knowledge) are also very similar to patterns our visual cortex reacts to. So it's maybe even closer than you thought it is.
I definitely find this quite fascinating :)
The part of the brain responsible for most mammalian intelligence (the neocortex) is considered hierarchical by many, sort of like neural networks. But it's not like there's a single layer of neurons in each level. Each level has millions of microcolumns of neurons (around 100 neurons per column), 3 to 9 or more layers (depending on how you count and the location in the hierarchy) with distinct properties, and multiple types of connections (inhibitory, excitatory, modulatory, different durations of effect, etc.) There are hundreds or thousands of underlying common characteristics in the neocortex alone, whereas neural networks have maybe one or two dozen underlying characteristics.
I suspect some of that is just plumbing to deal with things like metabolism, but neural networks (except hierarchical temporal memory, which doesn't do anything the brain definitely doesn't do) are pretty unlikely to lead to brain-like intelligence. They're really useful, but the way they are designed is like trying to reinvent the computer by mimicking transistors and no other characteristics of computers.
Would you mind putting links in the description for annotated link videos for us mobile users, thanks.
+Sirus done >Sean
Thanks
Thanks. Annotated links are pretty much always off in the future. At least for nerds?
I guess the debate he mentions over whether neural networks will change everything, is settled now in 2024?
At 9:07 - the size of the image changes not because it computes only the middle pixel, but because the kernel fits inside the image fewer times than the image's width and height.
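Right: with a K×K kernel and no padding, the output is (H-K+1)×(W-K+1), because those are the only positions where the kernel fully fits. A quick check (sizes are just an example):

```python
H, W, K = 256, 256, 3
out_h, out_w = H - K + 1, W - K + 1
print(out_h, out_w)  # 254 254: the 1-pixel border has no valid kernel position
```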
3 deep 5 me learning
i dont understand what you mean
2 deep 4 them + inception
not 3 deep 5 me anymore tho
Extremely good explanation of things that, until this series on deep learning, were just black magic to me !
+turarwanaa Lucky you, watched it twice and I still think he is a dark wizard.
+turarwanaa +J Simmons
But doesn't it mean that it's all just an optimization program that does the magic?
The training images get run through the process, it produces a value. The settings are tweaked slightly, the results are compared and one is better than the other. Rinse and repeat.
Working with some optimization programs myself, the trick is in how the algorithm is programmed to make large or small tweaks to settings...
Its like finding the tallest mountain in the area while blind... Does crossing the valley lead to a taller mountain or should you just go up hill?
It all seems CPU horsepower dependent to me o.o
+HexerPsy yes it's pretty much brute force, no magic. as with most things machine...
Thomas Gandalf It isn't anywhere near brute force either.
The "magic" is in why neural networks work so well at all compared to other methods. At a low level it looks fairly similar to other optimization methods, but the structure of the network and how it abstracts is very important. It makes much less sense than a naive perspective suggests.
+Vulcapyro Something that runs to iteratively tweak parameters of mathematical formulae until it finds the best possible solution, i.e. explores or exhausts the state space, is pretty much the definition of brute force... granted, CNNs don't usually exhaust the state space but make informed decisions on which parameter set to try next.
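A toy sketch of why people push back on "brute force": gradient descent never enumerates candidates, it takes a handful of informed steps downhill (one made-up parameter and loss, purely for illustration):

```python
import numpy as np

loss = lambda w: (w - 3.0) ** 2      # toy loss, minimum at w = 3
grad = lambda w: 2 * (w - 3.0)       # its derivative

# Brute force: evaluate every candidate on a grid (hopeless in high dimensions).
grid = np.linspace(-10, 10, 100_000)
w_brute = grid[np.argmin(loss(grid))]

# Gradient descent: a few informed steps, no enumeration at all.
w = -10.0
for _ in range(100):
    w -= 0.1 * grad(w)

print(w_brute, w)  # both near 3, but only one scales to millions of weights
```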
This guy seems cool - I like the videos he presents! :)
Thank you so much! Aside from entertaining me for years now, this video has actually helped me in my personal little research in programming an AI in a simple game using Tensorflow. (Is it overkill? Sure. Is it fun to do and learn? Heck yeah!)
Idk how your game went, but you're the man.
Wow exciting. Fantastic explanation and I can really see the power of this! Can't wait to see where deep learning and computer AI in general goes in the years to come. We are on the edge of some very exciting stuff.
Personally, I think hierarchical temporal memory will have a larger impact in the long run because it uses the brain as a constraint and won't have to rely so much on human ingenuity.
So really, machine learning is creating an automated task that finds enough differences unique to a specific thing that you can then assume an outcome with enough confidence.
I was trying to understand CNNs and Dr Mike comes to the rescue
I am a neural network watching videos about neural networks.
we need to go deeper
this is amazing
Moeシt wow
aren't we all?
I am a neural network inputting and outputting comments about neural networks watching videos about neural networks. The singularity is nigh.
I would have loved to hear examples of where these are getting used and what kind of impact they have on our way of life!
+abschussrampe Google is currently pushing them onto basically everything. I'm not sure that _all_ their services use them yet but increasingly they do.
For the following, they have either talked before about _planning_ to use this technology, or they already use it directly in services you may or may not love to use:
Google Maps
Google Car driving
Google Now suggestions
Google Search
YouTube Thumbnails
YouTube video suggestions
Google Translate
Google Photos
DeepMind (the guys behind AlphaGo)
Allo, their new messaging app
probably lots more
Other big guns who either talk about or already do use these:
Apple
Microsoft
Facebook
Amazon
probably lots more
Nowadays, if you are on the internet, chances are you are using a service that in one form or another relies on deep learning and convolutional networks. You can do an insane number of semi-cognitive tasks with them. They _do_ have their limits in their current form but development goes rapidly.
+abschussrampe They're used in quite a few places. One example I saw used a CNN to classify the activities occurring in a video--for example, to learn how to tell whether a video is of someone hiking, mountain biking, swimming, or canoeing. It could be expanded with more data sets to classify other types of activity, which in turn would allow our future AIs to understand what's happening around them instead of having to be told what's going on before knowing how to react.
+abschussrampe Self-driving cars (Autos, you might say) are essentially all implemented as systems of CNNs at this point, if you want a particular example that will likely significantly change our way of life.
If you're outside the EU/Canada, Facebook uses CNN for facial recognition to tag photos
Rest assured the government is using them to identify not only who you are but what you're doing. We're going from facial recognition to activity recognition.
So it turns out these are in fact a big deal.
Now I know what to work on between handing in my Master's thesis and defending it: remodelling my artificial neural network into a convolutional neural network. It might indeed bring up my accuracies with regard to recognizing game events based on electroencephalogram and eye-tracking data.
4:00 - That is a somewhat misleading statement right there, my friend! The reason why convolution is used is not that you need to downsample the input space so that the computer will not melt; the reason is that if you don't use the kernel-based method typically known as convolution, you lose ANY topological information that relates the pixels to each other. There is literally no chance that a multilayer perceptron type of neural network could achieve accuracy anywhere near a convolutional neural network, and the reason is, as stated previously, that the topological relationship of the pixels is lost once the image is reshaped into a one-dimensional vector with length equal to the total number of pixels. In fact, the latest architectures do not even attempt to downsample the information, and pooling techniques are considered to be obsolete, further proving how wrong the statement at 4:00 is.
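To put rough numbers on the comparison anyway (all sizes invented for illustration): a fully connected layer on a flattened image needs one weight per pixel per neuron and treats neighbouring pixels as unrelated inputs, while a conv layer reuses one small kernel everywhere and only ever connects a pixel to its neighbours:

```python
H, W, C = 224, 224, 3                # illustrative input image
hidden = 1000                        # fully connected hidden units
fc_weights = H * W * C * hidden      # ~150 million weights, no notion of "nearby"
filters, k = 64, 3
conv_weights = filters * k * k * C   # 1,728 weights, shared across every location
print(fc_weights, conv_weights)
```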
I love the topics from this guy and he explained it very well. Though his accent is a little difficult for me. Awesome video and very cool how you get to reference and link all those other previous videos
Me too; I enabled the subtitles. Anyway, it's a great video.
I like how he throws around "corners and edges". At the beginning of DL, corners and edges were actually a prediction, but in reality the slices of the most capable nets look absolutely nothing like corners and edges and a whole lot more like noise.
This was super awesome, I love this Mike guy!
"I'd have to start by programming up linux" he says while sitting in front of a WPF book
...because Computer Scientists are only allowed to reference one Operating System? I don't get it.
Linux is better for long (multi-day) computations because it is more stable and uses fewer resources in the background; it also happens to make the project more reproducible, because versions of Linux don't become dysfunctional over time like Windows. Even if a CS has Windows on their personal machine, there is no reason to think they wouldn't use Linux on their workstation.
agreed he is not capable of this but he doesn't care because he has WORK TO DO.
Knowing a little bit about how the human visual system works, it seems like you're actually describing it...
And that's scary and thrilling at the same time.
How would you handle different sized images in the training data set? According to what I understood, the number of neurons (and weights) depends on the number of pixels.
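A couple of common answers, as a rough sketch (all shapes are my own assumptions): either resize/crop everything to a fixed size in preprocessing, or finish the conv stack with global pooling so the dense layers never see a size-dependent shape - the conv kernels themselves don't care how big the image is:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Option 1: force a fixed size up front (target size is illustrative).
resize = layers.Resizing(128, 128)  # would sit at the front of a fixed-size model

# Option 2: accept any size; global pooling collapses H x W away.
model = keras.Sequential([
    keras.Input(shape=(None, None, 3)),       # height/width left unspecified
    layers.Conv2D(32, 3, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),          # -> fixed-length 64-vector
    layers.Dense(10, activation="softmax"),
])
```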
I love Mike's videos on image processing.. Keep em up!
what a class! big shout out from Brazil
Interestingly, there is a platform out there called Accuval and they claim their house valuation is by far the best because they use ANN (fully connected rather than CNN).
We want a video about PNNs (Probabilistic Neural Networks)!
Another great video!
I really like the fact that you create annotations for relevant or prerequisite videos and stuff, but maybe they would be more useful if they opened in a new tab. I don't want to lose where I was on this video, when I open and go through the annotated video.
Is this the process that Google used for its image recognition software that can be run backwards to "dream up" images of the things it can recognize? So if the program can recognize an image of a cat, you can reverse it and have it generate a picture of what it thinks a cat looks like.
+Jerrod Milton You are correct! How they basically do this is instead of adjusting the weights of the network to get the correct output value, they change the pixels of the input image, so that the CNN predicts e.g. a cat with higher certainty.
+Karl Kastor Wait, this exists? Are there front-ends to those applications available?
Benjamin Philipp Google Deep Dream. People have done several implementations for this since the original paper.
Karl Kastor
Thanks - I've since found Deep Dream, but thanks for coming back for me :)
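For anyone still curious, the core trick is only a few lines: freeze the weights and do gradient *ascent* on the pixels instead. A rough TensorFlow sketch (the model choice and class index are my assumptions, not from the original Deep Dream code):

```python
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")  # any trained CNN
img = tf.Variable(tf.random.uniform((1, 224, 224, 3)))         # start from noise
CAT = 281  # "tabby cat" in the usual ImageNet class ordering

for _ in range(100):
    with tf.GradientTape() as tape:
        score = model(img, training=False)[0, CAT]   # how cat-like is the image?
    g = tape.gradient(score, img)
    img.assign_add(0.01 * g / (tf.norm(g) + 1e-8))   # nudge pixels, not weights
```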
Absolutely fascinating! Great stuff, thanks for making it & sharing. I'm suspecting convolutional neural networks (CNNs) are possibly the solution for any potentially subjective classification, like the images in your video, but now wondering if a CNN is eventually equal to a Support Vector Machine for OBJECTIVE classifications, e.g. solutions to a mathematics problem.
This guy is awesome
Have to do a work on a paper about imagenet and deep convolutional neural networks. This video explained sooo much! Thank you!!!
The James Acaster vibes are so strong in this guy. Perhaps all this revising has turned my brain to mush, but this video really helped :D Thank you!
What's the first sentence he says? "This is kind of a full opt vice's videos on deep learning?" I replayed it like 30 times and I can't figure out what he's saying
+syawkcab "This is kind-of a follow up to Brais' video on deep learning"
OHHHH!
This sounds very interesting, it gave me a nice flashback to my AI studies a few years ago.
How would this method hold up against noisy images or partially occluded images once the network is trained? For example if you trained a CNN to recognise your face from n images, could you wear a phantom of the opera mask and still expect it to recognise you?
So all the options you have for choosing what kind of new car you want to buy make up a neural network.
Right input and you get a possible car choice.
How do you make it into a deep learning by-product?
I've seen curiosity used to train.
This stuff is scary fun.
A beautiful description! Well done.
This guy is a monster! He explains so well.
Extraordinary explanation, thanks!
amazing explanations here. Thanks for the share. This has made things more digestible.
The classic neural net model doesn't work anyway: when you remove input from a node it still produces echoes of that input - that's how biological neurons work.
How can you apply this to image processing? Returning echo signals are learned via neural link storage for better results, whichever the system deems more favorable. Basically, that means your library can store node-assigned info, and each node, instead of doing full processing, can theoretically pull out a saved neural link for detecting just a small portion of familiar input, run checks on that one, and fine-tune it to the degree needed - say, only a quarter of a Sobel convolution, etc.
He's the best Computerphile prof... deep and on point... would love to work with him :O
Are there different ways to implement the library? How do people in competitions make their algorithm better using the same library?
Presumably images are just one type of information you could feed into one of these. You could feed in text, or patient vitals/symptoms, economic data, etc.
+GoodWoIf Yep. Mike tends to focus on image analysis in his videos because he is, in fact, an image analyst, but CNNs can be used on pretty much any data set you can think of--as long as you have enough data to train them and a decent number of possible convolutions to apply.
Best nap ever
The dog sound effect gave me a bit of a chuckle
Looks like the next step will be a combination of a CNN with a temporal one. Like this: keep moving the CNN kernel over the image along a path learned beforehand, and keep feeding the TN with data, until we have 99% detection certainty.
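Combinations like that do exist: a per-frame CNN feeding a recurrent network over time (video classification is the usual example). A rough Keras sketch, with all shapes invented for illustration:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(16, 64, 64, 3)),       # 16 frames of 64x64 RGB
    layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D(2)),
    layers.TimeDistributed(layers.Flatten()), # CNN features per frame
    layers.LSTM(64),                          # the temporal part
    layers.Dense(1, activation="sigmoid"),    # detection certainty
])
```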
Very cool video. I understood and saw clearly! Thank you so much for such content!!
Is there an "AI" playlist on Computerphile?
If the process is looked at like a hash algorithm, then the collision is what we are looking for at the end :)
"... and there'll be a different representation of my face transformed in some way to be useful." BRILLIANT
Man, I would really like to see the work behind the root-tips convolutional network.
Can we have the links in the description please? Regards everyone on a mobile.
+Jamie Twells Ah, they're there now - Thank you!
***** To be honest I wish YouTube would get rid of annotations, the i and everything else that pops up over the videos. I just want to watch the video; annotations are too often abused and spammed to be useful.
Great and comprehensive video!
You should use growing-media-free hydroponic systems for viewing healthy roots. I'm sure you are aware of the "speaking plant approach", or perhaps the benefit of using imaging and CNNs for monitoring crops, as plants don't talk binary...
This is the only time CNN is useful.
Wowthatsfail Truth
'"someone" came around and applied this to imagenet and got great results'
Well, those someones won the 2018 Turing Award for that work LOL (LeCun for that work in particular; Bengio and Hinton for similar work)
Expected a video about neural networks analyzing sentiment to help news outlets adjust their narrative :o
What an amazing explanation!
I think for future videos you should set up the camera on a tripod and speak directly to the camera, to us. But otherwise this is very well explained and demonstrated!
wow that was delightfully well explained, I enjoyed the video so much.
please ask him to talk about RNNs too!
He said at the beginning that feeding a whole picture of pixels would be too much data and too many pixels, but it's hard for me to see why this method would be much better.
Dr Mike Pound, can you do a video about how kernels work, please?
Maybe the only human job left in the future will be labelling images
thank you for this video as well!
So are linear processors useful for neural networking due to their 'probabilistic' nature?
So effectively the filter is a single neuron hooked up to an X by X section of the previous layer, replicated across the entire previous layer. Like a mass produced chip.
So if you have 10 convolutions in each layer and have 4 layers doesn't that mean you end up with 10000 images/features by the end?
Eyyyy mate, I'm from your uni!! Birmingham is so cooool!!!
When you're learning the basics for a project and you find it's your supervisor in this video XD
Please do a video on how CNN's are applied to Natural Language Processing (NLP). Usually RNN's are, but CNN's can also be used.
Transformers too
So neat. Do these CNN libraries use graphics cards for some calculations? Some steps of this remind me of pixel shaders.
+Clint Bellanger Oh yes, in fact without GPUs it would take far too long to do any of this. Combined with some of the developments in CNNs themselves (e.g. Relu), GPUs have made all of this possible.
I'm curious to know, are the nodes just the result of a kernel convolution, or are they some linear (or non-linear) combination of all the different kernel convolutions? Like, is the kernel applied and then the resulting images summed, similar to a traditional neural network?
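As far as I understand standard conv layers, it's the latter: each output map convolves its own kernel with *every* input channel, sums the results, adds a bias, and then applies the nonlinearity - the same weighted-sum idea as a traditional neuron. A numpy sketch (shapes invented):

```python
import numpy as np
from scipy.signal import correlate2d

def conv_layer(x, kernels, biases):
    # x: (C_in, H, W); kernels: (C_out, C_in, kH, kW); biases: (C_out,)
    out = []
    for k, b in zip(kernels, biases):
        # Sum the per-channel convolutions, then bias + ReLU.
        s = sum(correlate2d(x[c], k[c], mode="valid") for c in range(x.shape[0]))
        out.append(np.maximum(0, s + b))
    return np.stack(out)

x = np.random.rand(3, 8, 8)                        # 3 input channels
kernels = np.random.rand(5, 3, 3, 3)               # 5 output feature maps
print(conv_layer(x, kernels, np.zeros(5)).shape)   # (5, 6, 6)
```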
Is possible Linux like initial objects
And Haskell like Terminal ?
I'm actually trying to do this in reverse... A form of one-shot pass to detect "everything detectable", within an image. Mentally, the impossibility of "this system", is that I would have to shrink the whole universe down into a single pixel. Extrapolating it out, starting at the pixel-level and working up to some fixed detail of "absolution".
That is like me giving you $200,000 and then asking you... "Tell me everything I can buy with this." Or giving you "the web", and asking you "how much did it cost to buy everything found online"... In one pass.
Where there is a will, there is a way. I think I found a way... To do this, to a practical extent. A lot less training required and greater potential for "more info in a smaller space". Totally ignoring the fact that it detects more than a few "demanded" things at one time. Sometimes people just don't know what they are looking for, until you show it to them. Why examine the same picture ten times to figure out that it has a dog in it, because "dog" was the last thing you asked it to detect in your last set, because you ran out of neural-net detection ability after just four items in a detection.
You know what's funny... They don't need a resolution of "0 and 1" for an output to be "sure" it's possibly a dog. If they actually evaluated the neural net after weighting, they would realize some processing contributed NOTHING, or nothing significant, to the detection process; that could have been eliminated and more time spent processing something that contributed MORE to the weighting. Going all the way down to those last pixels, which also had many things that contributed nearly nothing to the output. The detection could have jumped out ten levels earlier if it only used "processing relevant to dogs" when detecting dogs. That thumb-print alone is worth its weight (pun intended) in gold.
Honestly, the pixels aren't important, it's the "detectable attributes" and the arrangement of them, which is important. The specifics of details are only important to specific things. Not when doing "general detections", like "dog" or "person". Data that already exists as "Labradors" and "golden retrievers" and "mutts", with specifics that have weights already learned. Weights that can be normalized to represent "dog", without ever having to train one actual "generic dog" as a "dog".
In the case of convolutional neural networks, the weight changes the opacity between the different "channels" or "features" calculated by different kernels - am I understanding that right?