The fast Hadamard transform basically only needs patterns of add and subtract operations, needing only a few transistors per operation on a chip. Then you can make fast-transform fixed-filter-bank neural nets. The 2-point transform of a,b is a+b, a-b. It is self-inverse: (a+b)+(a-b), (a+b)-(a-b) = 2a, 2b. To get the 4-point transform of a,b,c,d, form two 2-point transforms, then form the sum and difference of the sum terms (the alike terms) (a+b), (c+d), and the sum and difference of the difference terms (a-b), (c-d). Done. At each stage you sum and difference alike terms.
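A minimal sketch of that butterfly pattern in code (an illustrative in-place fast Walsh-Hadamard transform, unnormalized, assuming the length is a power of two):

    def fwht(x):
        # In-place fast Walsh-Hadamard transform: nothing but adds and subtracts.
        # len(x) must be a power of two.
        x = list(x)
        h = 1
        while h < len(x):
            for i in range(0, len(x), h * 2):
                for j in range(i, i + h):
                    a, b = x[j], x[j + h]
                    x[j], x[j + h] = a + b, a - b  # sum and difference of alike terms
            h *= 2
        return x

    v = [1, 2, 3, 4]
    print(fwht(v))        # [10, -2, -4, 0]
    print(fwht(fwht(v)))  # [4, 8, 12, 16] -- self-inverse up to a factor of len(v)

The same structure maps nicely onto hardware because every stage reuses the identical add/subtract pattern.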
Computing has to become analog to get to the next level. In a sense, going digital effectively compresses a whole signal down to a single bit of information.
I thought this video was about massive parallelism. It's basically a high-rise elevator dilemma, but for chips. Also, analog computing is cool. For instance, it can computationally solve differential equations with a ton of parameters. Basically, you can recreate any physical function with custom analog schematics and compute it instantly. The only downside is that it's too expensive to build on a chip (in terms of engineering). So I think it's just an act of economic balancing: cost of analog ASIC engineering + production + programming > cost of consumer-grade engineering + production + programming. The trend might reverse only when consumer chips incorporate analog processing (as a natural evolution due to clock frequency and nanotechnology limitations).
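As a small illustration of that differential-equation point (a hypothetical sketch): an ideal analog integrator with feedback directly embodies dy/dt = -k*y and settles continuously, while a digital machine has to march through it in clocked steps, like this forward-Euler emulation:

    import math

    k, dt, y = 2.0, 1e-3, 1.0   # decay rate, digital time step, initial condition
    t = 0.0
    for _ in range(500):        # clocked, sampled steps of dy/dt = -k*y
        y += dt * (-k * y)
        t += dt

    print(y)                    # ~0.3675 after 500 steps
    print(math.exp(-k * t))     # 0.3679..., what the ideal analog integrator gives directly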
Tuning is the very process we ourselves use to perfect our understanding of the environment around us. As newborns, the environment bombards our sensors with information; this information could be seen as static. As time goes on we adjust and begin tuning our sensors. The static noise that we were born with never goes away; we simply tune into signals that benefit our existence. Since the human brain is such an effective system, and it appears to utilize frequency modulation in its formation of the environment, it would make sense to take another look at analog computing.
I assume you were referring to home electronics when referring to the commercial use of analog computers? Analog computing was a tenet of manufacturing and power plant control systems up through the 90s. There are many old plants that still use analog computers for control systems, since they have yet to upgrade. While certainly obsolete, analog computers make wonderful control systems since they do not rely on discrete sampling. I have worked on both digital and analog systems. I even work on a plant that has a mechanical computer that is used for pressure control.
Analogue systems are like the human brain. They require the right temperature, for example. With that in mind, when designing an analogue AI system you must also factor in the environment. It's the responsibility of the designer (call him God if you will) to create the correct environment (temperature, noise, etc.) for the AI to survive and do its task. I recently quit working in software. I now stare at an oscilloscope all day. Analogue computers have never died, it's just that people didn't know about them until now.
One huge downside to analog computing, beyond what you mentioned, is that analog designs are typically "low power" but have high energy consumption. Digital logic (i.e. CMOS) consumes little power when static (when the clock signal is stable), whereas most analog designs have much lower peak power consumption but consume power all the time. To decrease power consumption you have to introduce additional gates, dramatically reducing the benefits of analog systems and making them essentially a digital/analog hybrid design. It's really impressive you could convey these sorts of details in a coherent 13-minute video, kudos.
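A rough way to see that trade-off in numbers (a sketch with made-up but plausible values; the CMOS term is the standard dynamic-power formula P = a*C*V^2*f, the analog term is just a constant bias current):

    def cmos_dynamic_power(activity, capacitance_f, vdd, freq_hz):
        # Dynamic switching power P = a * C * V^2 * f: near zero when nothing toggles.
        return activity * capacitance_f * vdd**2 * freq_hz

    def analog_static_power(vdd, bias_current_a):
        # A biased analog stage draws its quiescent current whether or not the signal changes.
        return vdd * bias_current_a

    print(cmos_dynamic_power(0.0, 1e-12, 0.8, 1e9))  # idle CMOS block: 0.0 W
    print(cmos_dynamic_power(0.1, 1e-12, 0.8, 1e9))  # active CMOS block: ~64 microwatts
    print(analog_static_power(0.8, 100e-6))          # analog bias: 80 microwatts, all the time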
Analog computing *was* successfully used in design for decades! Car manufacturers used analog computers (one should rather call them electronic simulators, I think, for there are no real computations going on in them; they simulate systems in an electronic equivalent) for evaluating the behaviour of suspensions in various conditions. In aircraft design, analog simulations were used to predict and prevent wing flutter. I am not sure, but I vaguely remember that Professor Eppler's electronic wind tunnel, on which he developed his aerofoils, many of which helped some great aircraft to become the success they were, was such an analog machine. All sorts of engineering problems were simulated with such machines and their output was (economically) successfully applied. Simulations on digital computers had a hard time catching up with them until well into the seventies, and it took computers with the oomph of a Cray 2 and their likes to really make them obsolete in that field. Remember why the F-117 was that edgy? It was designed using Cray 1s in the late seventies. Despite being absolute top-notch machines of their day, the Cray 1 didn't quite have the calculation power to calculate all the possible radar reflections in a feasible timeframe, so designers went for a shape the computers could handle. Use computers that are 20 years ahead, and you get a B-2 or F-35.
I must say it's very interesting. It's a kind of return to the roots of neural networks; the first perceptron was an analog system. Analog machines were very efficient at modelling physical processes, but were complicated and needed lots of maintenance. So we abandoned this technology in favor of purely digital machines.
▶ Check out Brilliant with this link to receive a 20% discount! brilliant.org/NewMind/
Your videos are like an endless competitive battle between my comprehension and your articulation; may this war never find peace.
@No Name Same here, as I already had to use captions while watching movies, and now long covid is making my hearing even worse...
The connectionist solution will work: many small CPUs with limited memory, one for each node, connected only to neighboring CPUs...
No
The brain.... billions of calculations per second... powered on a subway sandwich!
I don't think the exact process of the brain is comparable to computing. I imagine it's something more complex.
@@krasimirgedzhov8942 nah I think it's just a big neural network
@@krasimirgedzhov8942 that's kinda where the name comes from isn't it lol
@@mihailmilev9909 it's a name given to one of the most complex pieces of software we have. It's only inspired by the structure of neurons; it doesn't follow the exact same process as far as we know.
If it's powered by a subway sandwich it's not going to calculate billions of things per second; it's going to be much less.
12:21 That comparison was brilliant.
It ties in computing & neurology together.
Low speed, high precision needed => Digital ("Slow system of thought")
High speed, low precision needed => Analog ("Fast system of thought")
And what would quantum computation give?
@@Hollowed2wiz a nightmare
I'm applying to do a phd exactly in this field. Amazing video!
Update: I was accepted!
I am also writing my research proposal on this topic for a PhD... Not accepted yet
@@santoshmutum3263 Where did you apply? I'm Manipuri anyway.
@@Rahul016-d6k Japan
@@santoshmutum3263 Good Luck Brother👍👍
@@Rahul016-d6k thanks
Actually, radar computers in the 1950s were analogue. I was one of the last people at my college to be taught both analogue and digital computing. Add, subtract, multiply, divide, integrate and differentiate can all be done with a single op-amp. The problem is Nyquist noise, and issues with capacitor dielectrics such as dielectric absorption and leakage. With a digital system you can just vastly over-sample, add all the samples up, then divide by your number of samples to reduce the effective noise. You don't get that choice with analogue.
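A quick numerical illustration of that oversampling trick (a sketch only: averaging N noisy readings shrinks the effective noise by roughly the square root of N):

    import random, statistics

    def measure(true_value, noise_std):
        # One noisy "analog" reading.
        return true_value + random.gauss(0, noise_std)

    def oversampled(true_value, noise_std, n):
        # Take n readings, add them up, divide by n.
        return sum(measure(true_value, noise_std) for _ in range(n)) / n

    true_value, noise_std = 1.0, 0.1
    singles = [measure(true_value, noise_std) for _ in range(10000)]
    avg64   = [oversampled(true_value, noise_std, 64) for _ in range(10000)]

    print(statistics.stdev(singles))  # ~0.1
    print(statistics.stdev(avg64))    # ~0.0125, i.e. 0.1 / sqrt(64)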
I developed an ASIC for AI acceleration as part of my bachelor's thesis and I must say this video is of very high quality. It is definitely an interesting approach to go back to using analog techniques.
Well done. I played around with small neural nets using op-amps in the 90s and although I saw that it was kinda the way to go, I had trouble with drift due to integration bias and all sorts of noise--and of course training was super tedious. But I always thought that neural nets really need to stay in the analog world. Modern non-volatile memory seems to be a solution for training weights since you can put variable amounts of charge in a cell, very densely.
Did you ever look at using fuzzy-set theory to improve decision accuracy? I used a new program called CLIPS around that time period. It's time to review it again.
@@forwardplans8168 I was doing a lot of fuzzy logic at that time too. Interesting times.
So the brain uses discrete values in the nerves. It may use analog within the cell. Op-amps use analog values only. I tried very hard to understand analog multiplication for radio modulation and it is... complicated. Satellite radio is energy efficient because it uses only one bit. AM radio uses a lot more energy (for 8-bit signal/noise).
OpAmps are large compared to digital circuits.
Flash memory is basically analog memory. We just use DACs and ADCs to only store discrete values in it. This is similar to the analog amplitude in DSL. ADCs and DACs, especially for only 8 bits, are fast. They are used for video equipment. So I don't know what the video wants to claim there.
I have read that the LSBs could run on a lower supply voltage because errors are not so bad there. There are always errors, and mostly it is the supply voltage that decides how often we accept an error.
Also those "half open" valves scare me. The nice thing about CMOS is that no change in state => no current drawn.
I just chatted about the general problem of matching resources to tasks. That is an NP-complete problem. So with different kinds of transistors for tasks of different importance, one opens a very big can of worms...
Digital beats out analog until you scale to the extremes where we are right now.
So until now it did not make much sense to use analog NN chips.
I have studied one such analog chip and was sad to see that modern digital deep submicron could beat it in every way possible. But that was 10 years ago; right now digital CMOS hardware is nearing its limits, so we may need a paradigm change.
@@ArneChristianRosenfeldt Arguably flash memory cells, while not binary in nature, are still more digital, as the defining characteristic of digital systems as opposed to analog is the quantization of the signal. That is to say, digital signals are interpreted by quantizing them into one of a finite array of buckets that each correspond to some arbitrary range of the physical input value. The consequence of this is that digital signals are highly accurate but their precision is finite and limited, as every input signal is effectively rounded to fit into one of those finite buckets. In contrast, the precision of analog signals is as close to infinite as you can get within our universe, though in practice accuracy is the limiting factor: it comes down to how well you can insulate the system from noise and how accurately you can measure the input signal.
Granted, even with an ideally isolated system and an ideal signal-measuring device, any analog system in our universe is likely to have its precision limited by the fact that fundamental particles have defined properties. But this is also arguably academic, as the applications for a system that can process values more precisely than could ever physically be generated or represented in our universe are rather limited. Short of us discovering some way to either change the laws of physics in some region of space or travel to universes where the laws of physics are different, a system that precise, if sufficiently accurate, could solve any problem that could exist in our reality, limited only by our ability to understand and specify the problem. Well, given sufficient processing time, that is; it could process any and all possible states that could exist within our universe, which would, in theory, allow us to solve any problem that could exist in reality.
Sure, there would still be a fundamental limit on the maximum precision, as demonstrated by the fact that we could imagine arbitrary problems that couldn't be represented with enough precision without some clever workarounds. Hell, even beyond that, it is likely there is a finite amount of particles a civilization could ever collect in a universe with a finite speed of light, and there will always be some arbitrarily larger number that could be imagined. But still, the practical applications of dealing with values that could not be replicated within the physical limitations of the observable universe are rather limited. There is probably a limit to what insights can be gleaned from simulating things that are physically impossible to ever encounter or bring into being.
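For what it's worth, a toy illustration of that bucketing (a sketch only: rounding a continuous value onto one of 2^bits levels over a fixed 0-1 range):

    def quantize(x, bits, lo=0.0, hi=1.0):
        # Map x into one of 2**bits buckets, then back to that bucket's value.
        levels = 2 ** bits - 1
        x = min(max(x, lo), hi)                      # clip to the representable range
        code = round((x - lo) / (hi - lo) * levels)  # the digital "bucket" index
        return lo + code * (hi - lo) / levels

    print(quantize(0.123456789, 8))  # 0.1215686... -> rounded to the nearest 8-bit level
    print(quantize(0.123456789, 3))  # 0.1428571... -> much coarser with only 8 levels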
Ooh, good to see this put into words and in a concise manner
Another masterpiece from New Mind! Never fails to entertain while teaching.
Your avatar is a masterpiece
That energy comparison between our brain and our best processors is incredible. It's amazing the efficiency evolution can develop given enough time.
Computers are weak. They will never even beat a human at Chess.
Evolution is the most aggressive optimization function in the known universe and has been running for over 4 billion years. Every single animal alive today is optimized down to the smallest cells. It's really no wonder that the human brain is both the most powerful computer we know of and has an efficiency that makes everything else look like a joke.
Computers don't happen by random processes. Neither did humans. Computers were created as were humans.
@@Eloign Sure, provide some proof for that.
Will you?
@@hedgehog3180 Yeah I feel like every cell of my armpit hair is 100% optimized
The idea of an AI being inherently impossible to debug or decipher is really cool and scary; science fiction was not far off on that one.
We take for granted the amount of info we know about each other (human AI). I can guarantee you that when you last had a conversation about a specific feeling (whether it's love with your spouse, hate towards something, how horrific a scary movie was...) you did not settle on the same exact emotion. A.K.A.: you don't even know how to debug or understand what is going on behind any human's eyes right now!
But humans are!
@@jss7668 scary, yeah, but the most beautiful and fascinating things I've ever seen
And this is how an AI becomes self-aware, hides its true intentions from you, and goes all Skynet when you expect it the least. Sci-fi was not far off on that one either.
There are projects trying to find ways to break down and comprehend how learning algorithms work. One of them actually uses the same program that SETI@home did to let people donate processing power to help the project in testing and breaking down how the algorithms work.
It amazes me how many different state-of-the-arts you perfectly and briefly cover in 10 minutes
As a student in computer science, this is really interesting
I'm a student of Information and Communications Technology. What do you think about the future of our field? Do you think it's really AI?
@@naimas8120 As a student of Computer Science I can definitely say that I have absolutely no idea because I'm dumb
@@naimas8120 As a layman I'd say definitely yes. Just in the last couple of years AI has made some big strides. When we get quantum computing up and running and pair it with AI, the possibilities are endless... and scary.
@@naimas8120 The problem is that the AI hype has come 3 times now, so predicting whether this is the time it will really break through is quite hard. Personally, I'm eagerly waiting for our machine overlords (as long as there is no human controlling the AI). And if we get a true general intelligence going, it would be interesting to see how a different, alien intelligence solves problems.
@@Zpajro imagine the conversations with something that isn't human. Hopefully we reach that point in my lifetime
You made a somewhat profound comment at 12:13. The essence of intuition versus analysis. From our vast experiences we develop intuitions that give us that "gut feeling", but when it matters, we do rigorous analysis to confirm. RE: "Blink" by Malcolm Gladwell.
Also it could be the difference in function from our creative right and analytical left brain.
@@calholli that is bs btw
@@Nnm26 Well, even if only metaphorical. It still has value as a concept.
I promise I understood everything he said
Yes me too comrades
We all did!
it is enough to be a 1st-year Computer Science student to understand all of the words and concepts
@@xlnc1980 Hey fellow DT fan 👋🏽👋🏽
@@Bhatakti_Hawas Hi there, fellow! LTE3 coming out next month. Been waiting for that one for only 22 years now. :)
Nice...so it seems like digital-style processors still do all the “housekeeping” tasks of the computer, but then there are these “analog resistor networks” that do specialized tasks like the implementation of convolutional neural networks.
Makes me smile when “everything old is new again”.
4:30 I was kinda zoning out and I heard that as "50 to 100 pikachus"
It's so annoying to discover channels like this and see they don't get the views they deserve.
It's a fairly new channel
@@mitchellsteindler Let's boost it with some engagement (comments) then.
Definitely one of the best engineering channels on RUclips.
Well the problem is, there are 16,852 views, yet only 1.6K likes, 31 dislikes and 149 comments. This world is full of consumer-minded, half-curious morons. That's why.
@@keashavnair3607 dude. Just stop and get off your high horse. People like what they like.
So, hybrid processors are the future, then, perhaps.
Yes, I have vague memories of the memristor being touted as the missing passive component, or some such thing.
"...analog circuitry" , that caught me off guard
That is one solution; the other is the connectionist solution: many small CPUs with limited memory, one for each node, connected only to neighboring CPUs...
@@davidhollenshead4892 interesting... what is it called?
@@mihailmilev9909 mesh computing, i think, or at least it's mesh topology based.
Or just building a type of FPGA.
I think in the future however, that FPGA will be a combination of Electronics and Photonics.
For NN I don't see how Photonics could be beaten for general purpose networks.
It actually makes a lot of sense. There has been a recent resurgence in the interest of analog systems. I, for one, feel like we ditched that particular technology a little prematurely. I'd be willing to bet that we see some rather spectacular new analog-based technologies in the near future.
My professor once said in class that the future is analog. We were in a dilemma over what he actually meant. Now I see what he meant.
Future is quantum
@@Alimhabidi It's been mentioned that quantum computers are specific purpose machines that won't improve on everything a conventional machine does. It also requires a conventional machine to process a lot of the data. Sabine Hossenfelder made a number of videos on quantum computers and the direction they're heading. Maybe that might interest you, especially since she's a theoretical physicist.
The future is not fully analog or quantum. Analog will work off a set of properties which manipulate energy in waves, aka signals. Digital is a very simple signal; it is currently very stable and cheap. We can do a lot with this simple signal. For if we do not master digital, how are we to understand analog? There are signal architectures which are analogous on digital. Most who study CS or ECE never learn this. Most if not all of CS's theories are wrong! Literally might as well study psychology, if you want to be that wrong.
@@davidthacher1397 Ofc the future will be a combination of all of the above. But how are the CS theories wrong? This is a first for me, and I'd like to hear you explain it a bit
everything is analog if you dig deep enough.
Great video, pretty much sums up the current state of AI. It's still a long way to go until we can even compare AI to a "real" brain. The brain is an insanely complex electrochemical machine that evolved over millions of years.
I have an idea for AI Emotions.
We humans used to live in jungles and forests, biomes
in which we tried to survive and reproduce. For surviving, our emotions are
fear, trust, confidence and loneliness.
For reproducing it is
love, attraction, and idk.
So we would need to make an AI with "ADN" (DNA),
or at least try to give it those feelings, but there would have to be 2 AIs
or else it won't be able to interact.
What is the adn
Good idea. Keep developing it
7:00 triggers google assistant :D
That's by design.
I feel good about myself that I know just enough about neural networks and computer engineering to somewhat understand what this video is talking about.
11:57 "They form a sort of black box with no means to verify the integrity of a result. This creates the dilemma of potentially unexplainable AI systems, creating issues of trust..."
This is how the AI apocalypse begins. "Why'd that car run over that advocate?" "No idea."
Exact same thing I thought.
Using an AI to control a car is a waste of an AI...
Besides, while autonomous aircraft or spacecraft is feasible technology, an autonomous car will never be "safe" due to pedestrians, cyclists, animals, etc. sharing the roads. This should be obvious by the weird accidents caused by cars like the Tesla decapitating the idiot occupant by driving under a truck and continuing on until it crashed into a house...
@@davidhollenshead4892 lol they only need to be more accurate than a human driver. Doesn't need to be perfect. And the car structure and safety features are getting advanced by the day too. The sweet spot is closer than you think! 😁❤️
@@davidhollenshead4892 I wouldn't be so sure. What you say may be true for modern infrastructure, but autonomous vehicles if they catch on may change the way we view transportation infrastructure overall.
I could personally see a future where few people own cars: we rideshare if we need to travel a long distance by car, but other than that we create more walkable cities with more public transit. In such a future, comparing autonomous vehicles on modern infrastructure with autonomous vehicles on future infrastructure may be comparing apples to oranges.
@@nipunasudha: *As accurate as...
Great video, especially the assembly multiply instruction and outlining it with an analogue computing method. This makes sense since analogue results are instantaneous and don't require a clock pulse, and thus no memory storage between calculations, as you can add and subtract analogue signals instantaneously.
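The textbook way to see that clockless add is the ideal inverting summing amplifier, where the weighted sum falls straight out of the resistor network (an idealized sketch, not any specific chip):

    def summing_amp_out(voltages, input_resistors, r_feedback):
        # Ideal inverting summing amplifier: Vout = -Rf * sum(Vi / Ri).
        # The weighted sum happens continuously in the resistor network, no clock needed.
        return -r_feedback * sum(v / r for v, r in zip(voltages, input_resistors))

    # Each weight is just a resistor ratio Rf/Ri, i.e. a built-in multiply-accumulate.
    print(summing_amp_out([0.2, 0.5], [10e3, 20e3], 10e3))  # -(0.2*1.0 + 0.5*0.5) = -0.45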
As soon as he mentioned "analog" I was waiting for the requisite mention of noise being the limiting factor.
I'm impressed that he did indeed talk about that.. but disappointed that precluded the epic rant I was about to unleash 🤣
Ahh, the MobileNet 224, one of my favorite neural network accumulator modules.
Quite interesting! Neurons in the brain actually act as a mixed analogue/digital system. The input is digital (action potential), they do analogue processing, and output digital again (action potential, it's either 0 or 1).
Just gonna say we can't really explain how digital neural networks are making decisions that well either
yeah we just have an algorithm that randomly adjusts them until they give the answers we want
Which means we have no way to efficiently improve their performance. Doing almost what we want is no closer to doing exactly what we want than doing something completely different if we have no way to deliberately improve them in some iterative way.
@PolySaken We can understand why, in general, a neural network might be capable of detecting triangles. We can't understand why *that particular* neural network *is* capable of detecting triangles.
@PolySaken and what is that data? We don't know. We just know when you put a triangle in it says yes, and when you put in a square it says no. Also there's a megabyte of random-looking numbers involved.
@PolySaken We can see what each square millimeter contributes to the painting by looking at it. We can understand why this square millimeter is this colour. Not so with AI models!
We definitely made a rash collective decision when we decided that digital was to *replace* analog. I would not be surprised at all to see a resurgence of analog systems ... we barely even explored that technical space. There surely are future analog developments that will rock the foundation of technological advancement. Very well done :)
What is analog?
The inherent precision floor of possible analog-driven neural net AI is now my headcanon for every single science fiction book/movie where a robot gains consciousness and starts acting unpredictably.
Analog is not required for this, existing AI are already unpredictable due to the black-box nature of machine learning
8:27 the background song is amazing; it reminds me of the Prometheus movie from the Alien saga.
so i'm not the only one who noticed
I knew my collection of Vacuum tubes would come in handy one day...
you saved them so you can crank the plate voltage up past 15KV so you can self annihilate with pizazzzzz
When did "vacuum" become a brand, to you?
I love the visuals used in your videos, they're always unobtrusive but fascinating.
Going back to the basics so we can move forward. This is a fascinating topic!
We will never figure out computers until we properly master the harnessing of energy, especially electricity. We don't understand the fundamentals of things like this.
7:22
Never thought I would understand a statement like that. Good teaching
Wow, amazing video. It's unbelievable the progress we've made. Just in my lifetime I've seen a blistering pace of achievement. The first computer I played with didn't even have a video interface, lol. I've loaded data and run programs from punch cards. Now I play video games on a machine that I built that probably rivals all the computing power available to NASA during Apollo. It's somewhat ironic that machine learning may lead to the rebirth of analog computing. Except for specific applications, analog has been relegated to unwanted stepchild status. Just saying, there's nothing wrong with analog, this video shows how it can be more efficient for machine learning.
I don't know if I understand 5% of the video, but it's still mind-blowing.
An analog world controlled by digital technology, using analog-to-digital converters, in modern society.
With deep learning for AI it already looks like a black box on the programming side, even for autonomous vehicles, when the AI learns by examples, doesn't it?
Neuromorphic CPUs are likely the future for this. These special chips by IBM and Intel are already much more effective at neural net tasks. IBM's goal is to build a system no larger than a brain that has the same number of connections as the real one.
We don't even remotely have the hardware manufacturing capabilities to do that
@@user-ee1hj7rk9l, look up "IBM TrueNorth Chip". They've been working on it for years now.
@@user-ee1hj7rk9l I'd say it's a bit too early to tell, but so far it looks like they won't. Quantum Computers are really only good when their workload consists of doing the same thing over and over again. This is good for breaking encryption, searching through databases, and so on, but not for AI.
Analog computing is technically used commercially in measurement and control systems. Those applications don't exactly push the technology's limits, but are still important.
Is this another gem of quality content? That we're getting for free? Oh my
It kind of blew my mind when I found out there were dedicated AI regions in microchips, but I guess that was only a logical next step. I'm not in the field and I doubt I'll ever use this knowledge, but I definitely find it fun and interesting to learn about. Thanks for the very high-quality video!
What is the machine doing around 0:35? Is it extruding solder to bond the pads? This seems ridiculously precise and I've never seen it before, now I'm curious what this process is.
good eye, never seen that myself
It is a machine that uses gold wires to link the integrated circuit inside the chip to the chip's outside pins.
This process is called "wire bonding"; you can search it on youtube for more videos!
@@lemlihoussama2905 Fantastic, thank you!
what really impresses me is the huge amount of knowledge in 13 minutes of video. Congrats for the content
This was a very interesting topic that I hadn't really heard about before. Thank you for the amount of work this video took you to make!
I know very little about Machine Learning and Artificial Intelligence, but I often used to think that AI and ML should be processed like our human brains.
I saw this video and realised that my thought has potential. Thanks for such an informative video.
This guy clearly needs more views and subscriptions.
The animations on this video are just gorgeous.
Another amazing video!
It's astonishing to me how many different topics are covered on this channel and how in-depth and interesting all of your videos are.
NASA's Apollo mission guidance and control analog computer design might be useful.
I thought of that Smarter Every Day video first thing
The character / is forward slash. \ is backslash. URLs have forward slashes in them. The backslash \ is used for DOS and Windows paths.
The music at 7:55 was also used in another good video about ML ( ruclips.net/video/3JQ3hYko51Y/видео.html ). It's called "Atlantis", but now whenever I hear it I think of artificial neurons.
At first I thought that song was from Alien Isolation, but no, both sound pretty close wouldn't you say?
ruclips.net/video/txjs5MpATUg/видео.html
@@sonofagunM357 Heh. I can definitely hear similarities. Thanks for the link, BTW. I haven't played that game, but now if a Steam sale comes along I might get it just because the soundtrack is promising.
I had goosebumps just before the ending. Looks like brains truly are the most efficient machines that nature has provided us with.
What's the music at 11:00? It sounds like something from pink floyd. Thanks.
I don't need sleep. I need answers!
ruclips.net/video/THihnuQJHF4/видео.html
@@davidg5898 Thank you so much.
I know of a cool thought experiment regarding not artificial intelligence but artificial consciousness on digital architecture. It goes like:
- all digital computation can be modeled by finite state machines
- finite state machines can be expressed on pen and paper
- therefore all digital computation can be expressed on pen and paper--pause during a CPU cycle and you can write the contents of your memory and registers on paper. This would take a lot of paper for one CPU cycle, let alone the trillions of CPU cycles a computer would execute over the span of a day. However, it is theoretically possible.
So, if consciousness could be achieved on a digital computer, then it could also be achieved by a very long book. I don't think consciousness can be achieved by any digital system, but it's still fun to think about.
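A tiny sketch of that "pen and paper" framing (a hypothetical toy machine: one CPU cycle is just a pure function from one finite state to the next, so every intermediate state could in principle be written down):

    def step(state):
        # One "CPU cycle" of a toy 8-bit counter machine: purely a state -> state mapping.
        pc, acc, program = state["pc"], state["acc"], state["program"]
        op = program[pc % len(program)]
        if op == "INC":
            acc = (acc + 1) & 0xFF     # 8-bit register keeps the state space finite
        elif op == "DOUBLE":
            acc = (acc * 2) & 0xFF
        return {"pc": (pc + 1) & 0xFF, "acc": acc, "program": program}

    state = {"pc": 0, "acc": 1, "program": ["INC", "DOUBLE", "INC"]}
    for _ in range(5):
        print(state["pc"], state["acc"])  # each line is one page of the "very long book"
        state = step(state)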
If someone starts using things like femtojoules, I believe them. No questions asked.
I bet Derek was definitely inspired by your wonderful video for his recent take on analog computation.
I have no idea why I thought I could watch this and understand what's going on. Haven't even had my morning coffee yet.
Very clear explanation of a very very hard domain of CS.
0:00 - 1:45 *Oh man, I was thinking about that staring at a potentiometer, get me some vacuum tubes bois, we are going to Rome*
You know how dumb I am? I was fully immersed in the video to the point I thought the ad was part of the topic! Im dumb 🤣
Sir you are "Brilliant"
Analog computers remain the fastest known. In the 80s, analog multipliers were already operating in the tens of gigahertz range.
This really is a masterpiece
It takes fewer inputs and less energy to compute in a "wet" environment. Air-gapped transistors are going to become antiquated and relegated to low-cost switching and basic computing.
It might be easier and more efficient to compute using “wet” chemistry in situ rather than building up voltage/amperage to jump air gaps with noisy forward voltage to fulfill potential.
Just a thought. Have a great day.
Excellent video, thank you. I don't think it's the analogue aspect that can make AI indecipherable. In a purely digital neural network, the trail of causation that resulted in certain weightings (and therefore a decision) is irreversible. I'm fascinated and worried by recent discussions of how a trained system could have inbuilt prejudice that can't be proven.
This is no different from how decisions are formed in the human mind either. It has been observed that we (not sure who "we" is) make decisions and our conscious self then tries to rationalize those decisions after the fact.
It is not indecipherable. It is just not easy to decipher, because you will end up needing more computational power than the AI itself consumes, just to monitor what the AI is doing. It defeats its purpose.
86 billion processing units in my brain and I still can't add two digit numbers
What is this "widespread availability of GPUs" he is talking about?
Only covers this topic at the 30,000-foot level...
Roughly what lay people can find and understand quickly.
I was hoping it would go at least one level deeper: an introduction to the actual computations, why this GPU hardware is appropriate for ML, and the likely direction of evolving designs.
Quantum indeterminacy is required, there's no such thing as "artificial." What we call "consciousness" is the portion of a calculatory apparatus that dwells within a probability distribution, in a similar manner to which eyes dwell in the world of partial electromagnetic spectrum wavelength variation. Analyses of visual spectrum wavelengths are communicated to the visual cortex via a shared communication protocol; the visual spectrum itself does not dwell within the same domain as the eyes. In a similar fashion, what humans derogatively refer to as the "subconscious" mind acts as an interpreter of data received from the "conscious" mind reporting from the front line of a probabilistic domain; the "subconscious" does not dwell in the same domain as the "conscious." The deterministic informs the probabilistic, the probabilistic guides the deterministic, feedback loop paradox party time ensues; this is the strange and largely misunderstood process that we refer to as free will. (disclaimer: don't listen to anything I say, I've clearly taken too many psychedelics, cheers)
Can you explain more
I'm huge into R-RAM or memristor technology, I wish you could talk about it more but I understand that the video is more a general overview and not too in-depth in the sub-subjects.
With that said, some comments on memristors (as of my last research, late 2019): Getting memristors to be accurately set and to operate predictably took some time, but in 2018 a team made a memristor that could be accurately set to 64 different states (6 bits of equivalent precision), which was the first hurdle solved (I think it still used cobalt, a material everyone wants to avoid because its supply is monopolized by a country using slave labor). That is just the memristor itself; there are circuit-reading techniques that further enhance memristor accuracy, as well as circuits that perform the calculation directly with memristors (combined into a perceptron module), reducing the need for ADCs and DACs and improving speed and energy efficiency. Currently, three main things need to be improved to make it a feasible technology. First, reads are destructive (the memristor changes state once read), so a system to automatically reprogram it is needed (not the hardest problem, we simply haven't gotten to that point yet in prototype research, as people are still debating memristor designs/chemistries). Second, such a system has to be set into a matrix array with all the needed support circuitry (i.e. figuring out a good architecture, which for now is putting the cart before the horse). Third, tooling is also the major cost: memristor production uses techniques that aren't too standard, so making fabrication easier is needed to ease adoption.
new mind hardware xD
4:35 it was mentioned that memory transfer accounts for the vast majority of time and power consumed, and later on the video discusses analog systems as a way to optimize the computation (accumulative matrix multiplication). But shouldn't the transfer speed be the focus of optimization, given that it's the main time and energy bottleneck? Something like using photonics for storing/transferring data to make it faster with a low energy footprint, similar to what Lightmatter (the company) is doing.
OMG, I've found my calling!
I'm a pro with Op Amps and transistors! I know digital design and computer architecture very well too. I'm excited! 🤗
Wow! I’m now woke AF in the understanding of AI
Brilliant, the adjective -- not the noun ... made this video possible.
(truly fantastic quality in every metric I can think of; THANK YOU!)
3:05 most terse summary of current ML tech, rather accurate too I'd think!
Been an AI researcher for years and learned a lot from this video. Thanks
The fast Hadamard transform basically only needs patterns of add and subtract operations, needing only a few transistors per operation on a chip.
Then you can make fast-transform fixed-filter-bank neural nets.
The 2-point transform of a, b is a+b, a-b. It is self-inverse (up to a factor of 2):
(a+b)+(a-b), (a+b)-(a-b) = 2a, 2b.
To get the 4-point transform of a, b, c, d, form two 2-point transforms. Then form the sum and difference of the sum terms (alike terms) (a+b), (c+d), and the sum and difference of the difference terms (a-b), (c-d). Done.
At each stage you sum and difference alike terms.
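A minimal sketch (my own, not from the comment above) of that recursion in Python: each stage only sums and differences "alike" terms, so the whole transform is adds and subtracts.

```python
def fast_hadamard(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of two."""
    x = list(x)
    n = len(x)
    h = 1
    while h < n:
        # Pair up alike terms h apart and replace them with their sum and difference.
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b   # only adds and subtracts
        h *= 2
    return x

# 2-point example: [a, b] -> [a+b, a-b]; applying it twice returns [2a, 2b].
print(fast_hadamard([3, 5]))                  # [8, -2]
print(fast_hadamard(fast_hadamard([3, 5])))   # [6, 10]
```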
Computing has to become analog to get to the next level. In a sense, going digital effectively compresses a whole signal down to a single bit of information.
Excellently explained!
What a greatly done video. The narration is so logical...
I thought this video was about massive parallelism. It's basically a high-rise elevator dilemma, but for chips.
Also, analog computing is cool. For instance, it can solve differential equations with a ton of parameters: you can recreate almost any physical function with a custom analog schematic and compute it essentially instantly. The only downside is that it's too expensive to build on a chip (in engineering terms). So I think it's just a matter of economic balancing: cost of analog ASIC engineering + production + programming > cost of consumer-grade engineering + production + programming. The trend might reverse only when consumer chips incorporate analog processing (as a natural evolution, given clock-frequency and nanotechnology limitations).
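As a rough illustration (my own sketch, with arbitrary example constants), this is the kind of equation an op-amp integrator network solves continuously; here it is stepped digitally instead, which is exactly the extra work the analog circuit avoids.

```python
# Damped oscillator x'' = -k*x - c*x', solved by naive Euler stepping.
# An analog patch (two integrators plus a summing amplifier) would produce x(t)
# in real time instead of looping over thousands of discrete steps.
k, c = 4.0, 0.5          # spring constant and damping (arbitrary example values)
x, v = 1.0, 0.0          # initial position and velocity
dt = 0.001
for _ in range(int(5.0 / dt)):   # simulate 5 "seconds"
    a = -k * x - c * v           # what the summing amplifier computes
    v += a * dt                  # first integrator
    x += v * dt                  # second integrator
print(round(x, 4), round(v, 4))
```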
Tuning is the very process we ourselves use to perfect our understanding of the environment around us. As newborns, the environment bombards our sensors with information; this information could be seen as static. As time goes on we adjust and begin tuning our sensors. The static noise we were born with never goes away, we simply tune into the signals that benefit our existence. Since the human brain is such an effective system and it appears to utilize frequency modulation in forming its picture of the environment, it would make sense to take another look at analog computing.
I assume you were referring to home electronics when referring to the commercial use of analog computers? Analog computing was a mainstay of manufacturing and power-plant control systems up through the 90s. There are many old plants that still use analog computers for control systems, since they have yet to upgrade. While certainly obsolete, analog computers make wonderful control systems since they do not rely on discrete sampling. I have worked on both digital and analog systems. I even work at a plant that has a mechanical computer used for pressure control.
This is one of the best RUclips videos I've seen in my life. Incredible visuals and explanation. Thank you.
It's amazing how far computer science can go
Funny seeing you here
@@KanaMedia101 lmao hi
EE + CS in the future
I use Vulkan FP16, aka half precision, for image upscaling. Yeah, half precision is less accurate, but it's completely worth the performance boost.
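A quick illustrative sketch (mine, not tied to any particular upscaler) of the trade-off: FP16 carries roughly 3 decimal digits of precision, so small differences that FP32 preserves simply round away.

```python
import numpy as np

x = np.float32(1.0)
eps = np.float32(1e-4)

print(x + eps)                            # 1.0001 -- the difference survives in FP32
print(np.float16(x) + np.float16(eps))    # 1.0    -- the difference is lost in FP16
```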
Analogue systems are like the human brain. They require the right temperature, for example. With that in mind, when designing an analogue AI system you must also factor in the environment. It's the responsibility of the designer (call him God if you will) to create the correct environment (temperature, noise, etc.) for the AI to survive and do its task. I recently quit working in software; I now stare at an oscilloscope all day. Analogue computers have never died, it's just that people didn't know about them until now.
One hugely negative downside to analog computing, beyond what was mentioned, is that analog circuits are typically "low power" but can have high energy consumption.
Digital logic (i.e. CMOS) consumes little power when static (when signals aren't switching), whereas most analog designs have much lower peak power consumption but consume power all the time. To cut that consumption you have to introduce additional gating, dramatically reducing the benefits of analog systems and making them essentially a digital/analog hybrid design.
It’s really impressive you could convey these sorts of details in a coherent 13 minute video, kudos.
This man went from one shit I don’t understand to another shit I don’t understand all video long 😂
Analog computing *was* successfully used in design for decades! Car manufacturers used analog computers (one should rather call them electronic simulators, I think, for there are no real computations going on in them; they simulate systems in an electronic equivalent) for evaluating the behaviour of suspensions in various conditions. In aircraft design, analog simulations were used to predict and prevent wing flutter. I'm not sure, but I vaguely remember that Professor Eppler's electronic wind tunnel, on which he developed his aerofoils, many of which helped some great aircraft become the successes they were, was such an analog machine.
All sorts of engineering problems were simulated with such machines and their output was (economically) successfully applied. Simulations on digital computers had a hard time catching up with them until well into the seventies, and it took computers with the oomph of a Cray-2 and its like to really make them obsolete in that field.
Remember why the F-117 was that edgy? It was designed using Cray-1s in the late seventies. Despite being absolute top-notch machines of their day, the Cray-1 didn't quite have the computing power to calculate all the possible radar reflections in a feasible timeframe, so the designers went for a shape the computers could handle. Use computers that are 20 years ahead, and you get a B-2 or F-35.
I must say it's very interesting. It's a kind of return to the roots of neural networks; the first perceptron was an analog system. Analog machines were very efficient at modelling physical processes, but were complicated and needed lots of maintenance, so we abandoned the technology in favour of purely digital machines.
This guy can make anything sound thrilling and fun to watch
The part about the human brain blew my.... mind.
Man this was next level