Thank you to the viewer who asked for a neuromorphic computer video! I think it ended up being an interesting topic.
Don't forget to join my Discord and support me on Patreon if you like. Also, sorry this video is two days late from my normal Sunday publishing time.
Patreon: www.patreon.com/DrWaku
Discord: discord.gg/AgafFBQdsc
Deep South definitely gave me some Douglas Adams vibes
It's something about the name... and the copying of your brain. Zaphod Beeblebrox would be proud
Maybe we're seeing the first iteration of Marvin.
At the dawn of the 3rd millennium, humans had the bright idea to make a copy of their own brain and place it inside a machine. It's reported that the machine's first words were not "Hello world," as some would have hoped. Instead, it spoke to its 'parents' in apparent disappointment and said, "Oh, no..."
@@DrWaku I like Zaphod's hat more.
@@Crawdaddy_Ro Not even close, GPT is far too cheerful
I’ve been following your videos since the “Why no one saw ChatGPT coming” video. I absolutely love how every video you make is packed with information as well as how you announce the organization of topics at the beginning of each video. It helps me organize my thoughts as I hear you talk. Keep up the excellent work!
Wow, you're one of the OG viewers. I think that was my first AI video. I'd love to see you in the discord if you're not there already.
I had someone else comment on the organization today as well. It's good to hear that it's helpful. See you in the next video! Cheers.
Thanks for the invite! Just joined your discord server.
Your channel is so underrated!! This is great!
Thank you very much :) :) the channel has been growing nicely!
Great video as always! Thank you very much
Thank you for your comment! See you at the next one
@@DrWaku You can be sure I'll be waiting for the new video
Lots of people predicted that one superhuman artificial general intelligence would take over the world: maybe Google, maybe OpenAI, maybe the NSA, maybe the Chinese or Japanese. But no one predicted it would be Australia.
First they got nuclear subs... then a superintelligence... uh oh
I was devastated when the Wallaby Squash canned drink was taken off the market.
We really are at the tipping point of human technology. Once our AIs become intelligent enough to make discoveries of their own, our technology will far surpass what we have now in ways we can't imagine.
As always … thank you Dr. Waku.
Thank you for watching so many of my videos. Cheers.
A6? Very funny. But what about all the people who thought you were being serious? Oh well, they'll live. Great video!
10-4 on the A6
I'm just liking every single one of your videos I come across. I'm active on r/singularity. I've been meaning to make a post there to spread the word a bit more about your channel - I know someone already did several months ago. You really don't need to change anything about your content - it's brilliant. Keep going.
Yes, whenever I post a video talking about when AGI will come, it usually makes its way to that subreddit, I think :) If you find a video interesting, please do share it in the appropriate channels. I can't really post in r/singularity myself because they don't allow self-promotion, but any genuine recommendations would be greatly appreciated. Not often, but sometimes my videos get a large percentage of external traffic when they get shared on some other platform. I can't always tell where they get shared though :)
And thank you for your comment! I'm really happy to hear you're enjoying my content. See you around.
@@DrWaku Sure thing. I'll pick the best one I can think of and share it! Hope you're well :)
Thanks for your contributions and channel, finally someone who prioritizes content over packaging, hope it becomes a trend, keep up the good work 👍🏻
Thank you so much for the video
Thank you for your comment! I really appreciate it.
Really enjoyed the video! I've subscribed to your channel. I have a degree in neuroscience, but when I realized I didn't like killing rats so much I decided to switch gears, and I'm studying computer science and math. Neuromorphic computing is the obvious interface between the disciplines.
One thing that came to mind about the FPGA implementation of DeepSouth: it's pretty well known that the brain has this quality of plasticity. It is not the case that the "hardware" of the brain is baked onto a chip the same way that circuits are to make CPUs. There are different principles that govern the extent to which the actual synaptic connections are altered in learning (short and long-term potentiation, working memory). It seems that until we can manage to manufacture a chip with the capacity to fluidly alter its own architecture (memristors have been proposed but seem to be fairly theoretical at the moment), there will be a certain efficiency bottleneck that will be difficult to overcome.
Fascinating area of research. It will be amazing to see what advancements come out of it.
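To make the plasticity point concrete, here is a toy Hebbian-update sketch in Python (my own illustration, not DeepSouth's actual model; the learning rate, decay, and threshold are arbitrary assumptions). The point is that the "wiring" is just data that the learning rule keeps rewriting, which is exactly what circuits baked into silicon cannot do:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 4 presynaptic and 4 postsynaptic neurons. The "wiring"
# is a weight matrix that the learning rule itself keeps rewriting.
weights = rng.normal(0.0, 0.1, size=(4, 4))

def hebbian_update(weights, pre, post, lr=0.01, decay=0.001):
    """Hebbian rule: neurons that fire together wire together.

    pre, post: binary activity vectors for one time step. The update
    uses only local co-activity; the decay term loosely stands in
    for forgetting/depotentiation.
    """
    weights += lr * np.outer(post, pre)  # strengthen co-active pairs
    weights -= decay * weights           # slow passive decay
    return weights

for _ in range(100):
    pre = (rng.random(4) < 0.3).astype(float)   # random input spikes
    post = (weights @ pre > 0.1).astype(float)  # thresholded response
    weights = hebbian_update(weights, pre, post)

print(weights.round(3))  # connectivity has drifted toward co-active pairs
```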
Anyone ever consider the possibility that the inscrutably large matrices generated by modern AI are actually an _interface_ to the real machinery of computation, which is situated elsewhere in the simulation?
We think we are training machine minds in our image but we're really just figuring out simulation API calls through brute force...
@@DrWaku Oh... so _you're_ the other guy who actually knows the setup. You wouldn't believe how much time I have wasted on YT threads waiting to encounter someone who can speak plainly about the simulation. Bang on, doc!
@@DrWaku ...and while I have you on the phone, might I suggest you look into the role of gender in computer science? By which I mean _technical_ gender. I have spent twenty years in that pursuit, and I can tell you it's quite profound to our conception of reality.
Sound too nutty? Then answer me this; how did _every single_ graphic designer and thumbnail generator make the simultaneous decision to characterize AI as a beautiful young female? I could accept even a ninety percent female representation as reasonable, but one hundred percent is impossible... and starkly conspicuous. Anyone staring at that fact should be very, very curious about what it implies in the matter of (human operating-system) network design.
One thing I want to add is the reason WHY our brains are much more efficient. Rather than shuffling around 16-bit numbers, our neurons can transmit one bit but time it juuuuust right to transmit the same number. The weights are encoded in the time of flight of the signal, essentially the distance between neurons. Almost all the power consumed by conventional computers goes into shuffling memory around, and very little of it actually gets used in the multiply-accumulate (MAC) processing itself.
Another efficiency feature of our brains is that they are asynchronous: not every neuron needs to fire. Whereas in a feed-forward NN, every node in every layer has to be processed in a MAC. Mixture of Experts is an example of a cheap ML architecture "hack" to mimic this ability to not have to activate every neuron.
Neurons also train and infer at the same time: neuroplasticity. Neurons that fire together wire together. Backpropagation with reinforcement learning can be thought of as a crude, roundabout way of achieving the same thing.
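To sketch the event-driven, timing-as-value idea (again my own toy illustration with made-up constants, not the neuron model any real neuromorphic chip uses), here is a leaky integrate-and-fire neuron: between spikes there is nothing to compute, and a stronger input shows up as earlier, denser 1-bit spikes rather than a bigger 16-bit number:

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron emitting 1-bit spikes whose
    timing carries the value. Returns the spike times."""
    v, spike_times = 0.0, []
    for t, i in enumerate(input_current):
        v += dt * (-v / tau + i)  # leaky integration of the input
        if v >= v_thresh:         # threshold crossing -> emit a spike
            spike_times.append(t * dt)
            v = v_reset           # reset membrane potential
    return spike_times

# A stronger input produces earlier and denser spikes:
print("weak  :", lif_neuron(np.full(100, 0.06)))  # sparse, late spikes
print("strong:", lif_neuron(np.full(100, 0.12)))  # dense, early spikes
```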
Fascinating topic - thanks a lot for the explanation!
Thank you for your note! Cheers
Insane how the basic PC is now a powerhouse of info and practical help for the average user due to the advent of AI
Yes, still fairly slow if you're running AI locally but that will change
Your channel is absolutely going to explode in growth 🎉
You have a very pleasant disposition and presentation.
It's typed ASIC, not EXEC; you have a typo there in your video around 11 min.
Ah yes, I mentioned that one to my editor but it must have slipped through the cracks. The subtitles are initially based on voice recognition which is why they have typos sometimes.
Neuromorphic computers are like birdomorphic airplanes.
Haha. Don't you mean ornithomorphic? (Greek theme)
Walking: anthropomorphic ambulation
I didn't understand half of what you said. I love genius-level info. Thank you!
Feels like we're just building out a foom runway
A6 or ASICs? Application-specific integrated circuits?
The latter, ASICs. Sorry, my editor relied on the speech recognition and it put in the typo
🤯Just discovered your channel
Welcome to the community!! I hope you like having your mind blown ;)
of course 🤓 @@DrWaku
Good delivery.
Cool. Do these systems mimic gap junction communication as well?
Fantastic! Thank you
Good to see you again, thanks for watching
Why doesn't this channel have millions of subscribers?
This is right up there with the YouTube channel AI Explained.
Could you make a video about SpikeGPT (built on SNNs)?
600th like!👍
Awesome video!🤘
Nice! Thanks for commenting!
This is fascinating. I'm intrigued by the first BCI implant! How fast will the field explode? Could brain-cloud links accelerate AGI? Seems less scary than lab-grown brains (eek, ethics!). ☁
10 mins in and I learned so much. I had to pause the video and drop a comment.
Whoever did the graphics was showing an Apple A6 chip every time you mentioned ASICs.
Oh, that's why everyone was commenting about A6. Lol. Yeah that was my video editor
Thank you!
Thanks for watching and commenting!
Beautiful... 🎉
Do you know whether neuromorphic computing scientists intend to model - or are at least interested in - the function of emotions? One of your diagrams had waves coming into the right hemisphere, and circuits coming off the left hemisphere...
From your discussion here, I understand they are interested in the nervous system so perhaps they are emulating sensory data as information (sensors etc)?
I ask, because as I raised in my comment on a previous video, some neuroscientists say that 80% of human thought is "emotional" including all of decision making. I'd be really interested to hear your thoughts on what work is going on - if any - to build AI models with right and left hemispheres... it seems to me, from my human experience, that agency and judgment are two key areas for AI to be both effective and safe. Judgment for humans requires emotion... so I wonder how AI scientists are thinking about emotion?
You discussed the column idea, which might 'reach down' to lower sensory layers... is that the extent of 'emotional' AI thought? Or do they conceive of a dialogue between left and right processing hemispheres?
I was watching a video about Google's upcoming Lumiere video generation AI: it has a concept of durational time built into its 'thinking' too. That seems to be a reason it unlocks much better video movement than current AIs like Runway. It can't be a coincidence that human minds create the experience of duration; duration/time seem important to model in the functioning of all minds.
Big fan of yours.
Thank you :)
Well.... You can only shove data through the network so fast. Loading and storing data costs orders of magnitude more time and power than the computation itself.
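For scale, a back-of-envelope using commonly cited per-operation energy figures (roughly 45 nm CMOS numbers from Mark Horowitz's ISSCC 2014 keynote; ballpark assumptions, not measurements of any system discussed here):

```python
# Approximate energy per operation, in picojoules (~45 nm, Horowitz 2014).
ENERGY_PJ = {
    "fp32 multiply": 3.7,
    "fp32 add": 0.9,
    "32-bit SRAM read (small cache)": 5.0,
    "32-bit DRAM read": 640.0,
}

mac = ENERGY_PJ["fp32 multiply"] + ENERGY_PJ["fp32 add"]
dram_fetch = 2 * ENERGY_PJ["32-bit DRAM read"]  # two operands from DRAM

print(f"One MAC:               ~{mac:.1f} pJ")
print(f"Fetching its operands: ~{dram_fetch:.0f} pJ")
print(f"Memory/compute ratio:  ~{dram_fetch / mac:.0f}x")  # hundreds of x
```

So keeping data next to the compute, as brains and neuromorphic chips do, is where most of the energy win lives.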
What does it take to simulate 0.5 of a human brain like mine?
I've been cross-pollinating different domains using an AI program. Lots of interesting interplay between seemingly unrelated areas.
evening Doc, best wishes
Thanks Alan
Did you get a better camera? You look younger in this video.
Same camera, but I increased and improved the lighting since my eyes are less sensitive recently. I think I look less flat and AI generated. Good eye!
What's an "obstraction"?? 🤔
To be obstractionist is to obstruct the definition of an abstraction. Anti-dictionary sentiment
...jk...
Is that a rhetorical question ?
Not really. The tech could be independent of all AGI research...
Dr Waku, can you give your opinion on bitcoin/crypto/blockchain tech? Thank you, love this channel
Added this to my list of potential videos. Thanks. In short, creating scarcity in a digital world allows a lot of things from the physical world to be represented more readily. It's cool tech.
Genius
👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻
Thank you :) :)
Organoid research on human cells is not ethical. Just because you can, doesn't mean you should
Well, we could just use gingers' cells. Gingers have no soul.
I think that not many years from now, AI will probably consume more than 50% of the energy on Earth and beyond. It shouldn't take too long to expand into space for energy needs, while we actively start to use fusion energy.
DeepSouth's 228 trillion synaptic operations per second is 1000 times slower than a human brain :-( The human brain has several hundred trillion synapses, and every synapse can conduct spikes 1000 times per second.
Got to get off that FPGA tech haha. I think they're mostly using it to run simulations and understand the brain first. Speed later.
Thanks for the data point BTW, I couldn't think of a way to look it up
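For what it's worth, the arithmetic in the comment above does check out under its own assumptions. A quick sketch (the brain figures are rough estimates taken from the comment, not settled measurements):

```python
# Commenter's assumptions (rough estimates, not settled neuroscience):
brain_synapses = 2e14       # "several hundred trillion" synapses
max_rate_hz = 1_000         # claimed peak spikes per synapse per second
deepsouth_sops = 2.28e14    # DeepSouth: 228 trillion synaptic ops/s

brain_peak_sops = brain_synapses * max_rate_hz  # 2e17 ops/s at peak
print(f"Brain (peak): {brain_peak_sops:.1e} synaptic ops/s")
print(f"Gap: ~{brain_peak_sops / deepsouth_sops:.0f}x")  # ~900x, i.e. roughly the claimed 1000x

# Caveat: average cortical firing rates are more like 0.1-10 Hz, so at
# typical activity levels the gap shrinks by two to three orders of magnitude.
```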
Somehow mimicking human brains just doesn't sound like the best roadmap to follow on this. Aren't we after something better? Why start with a duplicate of us? Our record isn't so great.
Definitely, I am quite optimistic about neuromorphic computing. While the field is currently dominated by Nvidia, Intel and IBM could change the whole AI landscape at any time if they succeed in deploying it.
So buy Intel stock in advance if you can see the future. One thing is for sure: on the current trajectory from AI to AGI, with worldwide use cases, current systems lack local AI computing and need enormous power. With these constraints, forget about the AGI and AI era; it'll hit a bottleneck in the next 2-3 years. And Intel, which is just behind in the AI race, could jump on top with its early R&D in neuromorphic computing.
Here in Australia, we think of "The Deep South" as Confederate country in the U.S.
mm hmmm, we are done, 50 years.
Creating Cylons...
Commodity hardware isn't very neuromorphic.
Agreed. It's a lot more based on logic and gates.
I want a robo kitty so badly! 😖
Let's hope we don't destroy the AI and robots
I love your hats and other headwear.
What could go wrong?
So what will it be?
Altered Carbon.
Terminator
Battlestar Galactica
The Matrix
E. All of the Above.
5:55 That's complete BS, like a lot in this video. If you averaged the energy consumption of a human over the course of its entire learning phase (i.e., a lifespan), things would look different. But something with output similar to a human brain doesn't consume megawatts during the inference part.
4:15 "and there is an ethical question of course". Of course? Really? What state of wokeness do one have to attain?
The logic behind this isn't rational. Firstly, the simple fact is that we don't fully understand how the brain physically works, so replicating how we assume it works is not only going to teach us nothing about a real brain but also create something that doesn't work like a real brain. Secondly, we don't understand how information is stored within the brain, so we cannot replicate that. Finally, we don't even know what sentience is, so we wouldn't know if an AI is alive or imitating life.
This sounds to me like buzzwords and pseudo-science being fed to clueless investors to get funding for research that is an inefficient use of time and money.
From my understanding, there have been years of research from several universities and they pooled their knowledge to try to form the most accurate model they could. They want to run large-scale simulations of what can happen in a brain precisely so that we understand it better. That's the stated purpose of the neuromorphic computer.
When it comes to AI, we know that our current systems are based on a very high level approximation of what happens in the brain. Since we have brains that are working pretty well, there's good reason to believe that approximating it more closely could result in better outcomes, if we get stuck with our current tech. It makes sense. It's not claiming that we already understand brains or that it's definitely the way forward for AI. This is research, after all.
@@DrWaku It doesn't matter if every synapse in the brain is indexed and every form of input mapped against every part of the brain that lights up like a Christmas tree; if we don't understand the connection between the why, how, and what, then we're a long way from having enough understanding to simulate thoughts, let alone intelligence.
All we have currently is the biological equivalent of circuit schematics, and that is where the research ends because of the aforementioned limits. It's like the three blind men and the elephant, with neuromorphic researchers claiming to be able to recreate what the elephant looks like after only touching its tail.
@taeallred There's such a thing as research for its own sake with no intrinsic value. You see this in fields of study like cryptozoology, parapsychology, epistemology, etc. In this case, money is being wasted on making a simulation of something we don't even understand, nor are we capable of validating the results of.