The First Neural Networks

  • Published: 30 Sep 2024

Comments • 210

  • @dinoscheidt
    @dinoscheidt 3 months ago +275

    I’ve been in ML since 2013 and have to say: wow… you and your team really deserve praise for solid research and delivery. I’ll bookmark this video to point people to. Thank you.

    • @goldnutter412
      @goldnutter412 3 months ago +20

      He's great! His dad was a chip designer.. go figure :) Amazing backlog of content, sir.
      Especially chips..

    • @chinesesparrows
      @chinesesparrows 3 months ago +20

      The span and depth of topics covered with an eye on technical details is truly awesome and rare. Smart commenters point out the occasional inaccuracies (understandable given the span of topics), which benefits everyone as well.

    • @WyomingGuy876
      @WyomingGuy876 3 months ago +1

      Dude, try living through all of this.

    • @PhilippBlum
      @PhilippBlum 3 months ago +21

      He has a team? I assumed he is just grinding and great at this.

    • @fintech1378
      @fintech1378 3 months ago +2

      He is an independent AI researcher

  • @strayling1
    @strayling1 3 months ago +75

    Please continue the story. A cliffhanger like that deserves a sequel!
    Seriously, this was a truly impressive video and I learned new things from it.

    • @rotors_taker_0h
      @rotors_taker_0h 3 months ago +1

      In the 80s Hinton, LeCun, Schmidhuber and others developed backpropagation and CNNs (convolutional NNs), then RNNs and LSTMs in the 90s, but it was still a very niche area of study with "limited potential" because NNs always performed a bit worse than other methods, until a couple of breakthroughs in speech recognition and image classification at the end of the 00s. AlexNet in 2012 brought instant hype to CNNs, which was followed by one-liners that critically improved the quality and stability of training: better initial values, sigmoid -> relu, dropout, normalization (forcing values to stay in a certain range), resnet (just adding the previous layer's values to the next one). That allowed training models so much bigger and deeper that they started to dominate everything else by sheer size. Then came the Transformer in 2017, which allowed treating basically any input as a sequence of tokens, and the scaling hypothesis, which brought us to the present time with "small NNs" being "just" several billion parameters.
      Between 2012 and now there has also been extreme progress in hardware for running these networks: optimizing precision (it turned out that you don't need 32-bit float numbers to train/use NNs; the lowest possible is 1 bit, and a good amount is 4-bit integer, which is roughly 100x faster in hardware), new instructions, matmuls, sparsity, tensor cores and systolic arrays and what not, to get truly insane speedups. For comparison, AlexNet was trained on 2 GTX 580s, so about 2.5 TFLOPs of compute. This year we have ultrathin, light laptops with 120 TOPs and server cards with 20,000 TOPs, and the biggest clusters are in the range of 100,000 such cards, so in total about a billion times more compute thrown at the problem than 12 years ago. And 12 years ago it was 1000x more than at the start of the century, so we've gotten roughly a trillion times more compute to make neural networks work, and we're still not anywhere close to done. Of course, the early pioneers had no chance without that much compute.
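      To give a feel for how small those "one-liners" really are, here is a toy NumPy sketch (illustrative only, made-up names, nothing from the video) of a layer with ReLU, dropout and a residual add:

          import numpy as np

          def residual_layer(x, W, drop_p=0.1, training=True):
              # One "modern" layer: ReLU activation, dropout, and a residual add
              h = np.maximum(0.0, x @ W)          # sigmoid -> relu
              if training:                        # dropout: zero random units, rescale the rest
                  mask = (np.random.rand(*h.shape) > drop_p) / (1.0 - drop_p)
                  h = h * mask
              return x + h                        # resnet: add the previous layer's values

          y = residual_layer(np.random.randn(2, 8), np.random.randn(8, 8))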

    • @honor9lite1337
      @honor9lite1337 3 months ago +7

      2nd that 😊

    • @thomassynths
      @thomassynths 3 months ago +3

      "The Second Neural Networks"

  • @amerigo88
    @amerigo88 3 months ago +49

    Interesting that Claude Shannon's observations on the meaning of information being reducible to binary came about at virtually the same time as the early neural networks papers.
    Edit - The Mathematical Theory of Communication by Shannon was published in 1948. Also, Herb Simon was an incredible mind.

  • @dwinsemius
    @dwinsemius 3 months ago +22

    The one name missing from this, from my high-school memory, is Norbert Wiener, author of "Cybernetics". I do remember a circa 1980 effort of mine to understand the implications of rule-based AI for my area of training (medicine). The Mycin program (infectious disease diagnosis and management) sited at Stanford could have been the seed crystal for a very useful application of the symbol-based methods. It wasn't maintained and expanded after its initial development. It took too long to do data input and didn't handle edge cases or apply common sense. It was, however, very good at difficult "university level specialist" problems. I interviewed Dr Shortliffe and his assessment was that AI wouldn't influence the practice of medicine for 20-30 years. I was hugely disappointed. At the age of 30 I thought it should be just around the corner. So here it is 45 years later and symbolic methods have languished. I think there needs to be one or more "symbolic layers" in the development process of neural networks. For one thing it would allow insertion of corrections and offer the possibility of analyzing the "reasoning".

    • @honor9lite1337
      @honor9lite1337 3 months ago

      Your storyline is decades long, so how old are you? 😮

    • @dwinsemius
      @dwinsemius 3 months ago

      @@honor9lite1337 7.5 decades

  • @soanywaysillstartedblastin2797
    @soanywaysillstartedblastin2797 3 months ago +19

    Got this recommended to me after getting my first digit recognition program working. The neural networks know I’m learning about neural networks

  • @fibersden638
    @fibersden638 3 months ago +37

    One of the top education channels on RUclips for sure

  • @PeteC62
    @PeteC62 3 months ago +16

    Your videos are always well worth the time to watch them, thanks!

  • @MFMegaZeroX7
    @MFMegaZeroX7 3 months ago +40

    I love seeing Minsky come up, as I have a (tenuous) connection to him: he is my academic "great great grand advisor." That is, my PhD advisor's PhD advisor's PhD advisor's PhD advisor was Minsky. Unfortunately, stories about him never got passed down; I only have a bunch of stories from my own advisor and his advisor, so it is interesting seeing what he was up to.

  • @tracyrreed
    @tracyrreed 3 months ago +35

    5:14 Look at this guy, throwing out Principia Mathematica without even name-dropping its author. 😂

    • @PeteC62
      @PeteC62 3 months ago +4

      It's nothing new. Ton of people do that.

    • @theconkernator
      @theconkernator 3 months ago +11

      It's not Isaac Newton, if that's what you were thinking. It's Russell and Whitehead.

    • @PeteC62
      @PeteC62 3 months ago

      Well that's no good. I can't think of a terrible pun on their names!

    • @dimBulb5
      @dimBulb5 3 months ago +1

      @@theconkernator Thanks! I was definitely thinking Newton.

    • @honor9lite1337
      @honor9lite1337 3 months ago

      @@theconkernator yeah? 😮

  • @hififlipper
    @hififlipper 3 months ago +52

    "A human being without life" hurts too much.

    • @dahahaka
      @dahahaka 3 months ago +4

      Avg person in 2024

  • @francescotron8508
    @francescotron8508 3 months ago +26

    You always bring up interesting topics. Keep it up, you're doing a great job 👍.

  • @VaebnKenh
    @VaebnKenh 3 months ago +4

    It's pronounced Pæpert, not Pāpert, and that was a bit of a confusing way to present the XOR function: since you set it up with an XY plot, you should have put the inputs on different axes with the values in the middle. Other than that, great video as always 😊
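    For anyone trying to picture the XOR issue itself, a toy sketch (plain Python, illustrative only, not the video's notation): no single threshold unit can separate the four XOR points, but two layers can.

        def step(z):                       # simple threshold activation
            return 1 if z >= 0 else 0

        def xor_two_layer(x1, x2):
            # hidden units compute OR and NAND; the output unit ANDs them -> XOR
            h_or   = step(x1 + x2 - 0.5)
            h_nand = step(-x1 - x2 + 1.5)
            return step(h_or + h_nand - 1.5)

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, xor_two_layer(a, b))   # 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0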

  • @ArturMorgan7491
    @ArturMorgan7491 3 months ago +20

    Please do a video on the decline of British manufacturing, it would be greatly appreciated

    • @MrHashisz
      @MrHashisz 3 months ago

      Nobody cares about the Brits

  • @stevengill1736
    @stevengill1736 3 months ago +10

    Gosh, I remember studying physiology in the late 60s when human nervous system understanding was still in the relative dark ages - for instance plasticity was still unknown, and they taught us that your nerves stopped growing at a young age and that was it.
    But I had no idea how far they'd come with machine learning in the Perceptron - already using tuneable weighted responses simulating neurons? Wow!
    If they could have licked that multilayer problem it would have sped things up quite a bit.
    You mentioned the old chopped-up planaria trick - are you familiar with the work of Dr Michael Levin? His team is carrying the understanding of morphogenesis to new heights - amazing stuff! Thank you kindly for your videos! Cheers.

    • @klauszinser
      @klauszinser 3 months ago

      There must have been a speech by Demis Hassabis on 14 Nov 2017, in the late morning, at the Society for Neuroscience in Washington. In this keynote lecture he told the audience that AI is nothing more than applied brain science. He must have said (I only have the translated German wording) 'First we solve the problem and understand what intelligence is (possibly the more German usage of the word), and then we solve all the other problems'. The 6000-8000 people must have been extremely quiet, knowing what this young man had already achieved. Unfortunately I never found the video. (Source: Manfred Spitzer)

    • @honor9lite1337
      @honor9lite1337 3 months ago

      Studying in the late 60's? Even my dad was born in late 70's, how old are you?

  • @HaHaBIah
    @HaHaBIah 3 months ago +13

    I love listening to this with our current modern context

  • @TerryBollinger
    @TerryBollinger 3 months ago +2

    The difficulty with Minsky's adamant focus on symbolic logic was his failure to recognize that the vast majority of biological sensory processing is dedicated to creating meaningful, logically usable symbolic representations of a complicated physical world.
    Minsky’s position thus was a bit like saying that once you understand cream, you have all you need to build a cow.

  • @theorixlux
    @theorixlux 3 months ago +24

    I am probably not the first, but I am surprised at how far back the idea of artificial "intelligence" goes.

    • @lbgstzockt8493
      @lbgstzockt8493 3 months ago +1

      It surprises me how "little" progress we have made in that time. Pretty much every other discipline has made incredible leaps in the past 60-70 years, yet AI is still nowhere near the human brain. Obviously an early perceptron is infinitely worse than a modern LLM, but AGI doesn't really feel any closer than back then.

    • @theorixlux
      @theorixlux 3 months ago +8

      ​​@@lbgstzockt8493 if you're comparing what a few smart computer geeks did over 80 years to what mother nature did over 3-ish billion years, then I would argue it's not surprising AT ALL that we haven't simulated a human brain yet...

    • @goldnutter412
      @goldnutter412 3 months ago

      We've been here before
      Before the universe..

    • @theorixlux
      @theorixlux 3 months ago

      @@goldnutter412 ?

    • @AS40143
      @AS40143 3 months ago +2

      The first idea of machines that could think appeared in the 17th century as Leibniz's mill concept

  • @JorgeLopez-qj8pu
    @JorgeLopez-qj8pu 3 months ago +10

    SEGA creating an AI computer in 1986 is crazy

  • @danbaker7191
    @danbaker7191 3 months ago +3

    Good summary. Ultimately, even today, there are no functionally useful and agreed definitions of intelligence and thinking. Maybe we're unintentionally approaching this from the back, by making things that sort of work, then later figuring out what's really going on (not yet!)

  • @TymexComputing
    @TymexComputing 3 months ago +5

    PERCEPTRON 😍😍

  • @Alex.The.Lionnnnn
    @Alex.The.Lionnnnn 3 months ago +2

    I love how cheesy that name is. "The Perceptron!" Is it one of the good transformers or the bad ones??

    • @LimabeanStudios
      @LimabeanStudios 3 months ago

      Working in physics one of the first things I learned is half the names are just "-tron" and it always makes me giggle

  • @JohnHLundin
    @JohnHLundin 3 months ago +3

    Thanks Jon, as someone who tinkered with neural nets in the 1980s and 90s, this history connects the evolutionary dots and illuminates the evolution/genesis of those theories & tools we were working with... J

  • @Wobbothe3rd
    @Wobbothe3rd 3 months ago +7

    Recurrent Neural Networks are about to make a HUGE comeback.

    • @FrigoCoder
      @FrigoCoder 3 months ago

      @@luciustarquiniuspriscus1408 MAMBA is already a valid alternative to transformers, and it is some kind of variant of linear recurrent neural networks.
      Also, I do not see how we could avoid recurrent neural networks for music generation; they or their variants seem like a perfect fit for this very specific generation task.

    • @facon4233
      @facon4233 3 months ago

      xLSTM FTW

    • @clray123
      @clray123 3 months ago

      @@luciustarquiniuspriscus1408 The SSM/Mamba papers already address this. In fact you can train a GPT-3 like small model using Mamba right here and now, with excellent performance (both in terms of training speed and outputs). With "infinite attention" (well, limited by the capacity of the hidden state vector).

  • @rubes8065
    @rubes8065 3 months ago +2

    I absolutely love your channel. I look forward to your new videos. Thank you. I’ve learned sooo much 🥰

  • @Ray_of_Light62
    @Ray_of_Light62 3 months ago +1

    I studied the perceptron in the '70s. My conclusion was that the hardware was not up to the task. Using a matrix of photoresistors as the input proved the design principle, but it couldn't be brought to a working prototype.

  • @freemanol
    @freemanol 3 months ago +1

    I think there's one guy that doesn't receive much attention, Demis Hassabis. I knew him as the founder of the game company that made Republic: The Revolution, but he then went on to take a PhD in Neuroscience. I wondered why. Now it makes sense. He founded DeepMind

  • @helloworldcsofficial
    @helloworldcsofficial 3 months ago +1

    This was great. A more in-depth one would be awesome: the fall and rise of the perceptron, going from single to multiple layers.

  • @techsuvara
    @techsuvara 3 months ago +17

    Having worked in ML and done a lot with the perceptron, it feels like we're right where we started: promising the world, providing not much...

    • @andersjjensen
      @andersjjensen 3 months ago

      I guess lonely people will be happy when chat-bots can render audio replies.

    • @chinesesparrows
      @chinesesparrows 3 months ago +3

      This is what real researchers say while companies go as far as to boast that their cat food is powered by AI.

    • @brodriguez11000
      @brodriguez11000 3 months ago +1

      AI winter.

    • @endintiers
      @endintiers 3 months ago

      Disagree. I worked on natural language NNs in the 80s (a failure). Now I'm using what we should no longer call LLMs to do real work, replacing older specialised AIs and improving accuracy. This is for horizon scanning. We are finding valuable insights for our government.

    • @techsuvara
      @techsuvara 3 months ago +1

      @@endintiers it’s the credibility issue that’s the problem. I’ve stopped using LLMs altogether even for coding, because it gets things incorrect often and there are usually better solutions.

  • @nexusyang4832
    @nexusyang4832 3 months ago +1

    I was just watching a video by Formosa TV, their piece on the founder of SuperMicro. Just curious if there is any interest in SuperMicro or its founder (for the English-speaking folks that don't understand Mandarin hehe).... just thought I'd ask. 🙂

  • @TheChipMcDonald
    @TheChipMcDonald 3 months ago +1

    The Einsteins, Oppenheimers, Bohrs, Feynmans, Schrödingers and Heisenbergs of AI. The McCulloch-Pitts neuron network and Rosenblatt's training paradigm took 70 years to get to "here" and should be acknowledged. I remember as a little kid in the 70s reading articles on different people leading the symbolic movement, and thinking "none of them really seem to know or have conviction in what they're campaigning for".

  • @yellow1pl
    @yellow1pl 3 months ago +1

    Hi! Great fan of your channel! :)
    However, this time I'm a bit puzzled. Several years ago I read somewhere Marvin Minsky talking about how he built this (awesome, in my opinion) mechanical neural network. Since that time I was sure his network was the first. However, here you talk about a neural network built almost a decade later and call it the first one... You mentioned that Marvin Minsky did some neural network research previously, but he left. Ok, fine, so why is his neural network, built before the perceptron, not the first one in your opinion? :) Maybe next video? :)
    Also - to my knowledge Turing's paper was published in 1937, not '36. In 1936 Alonzo Church published his paper related to the Entscheidungsproblem. I don't know who was the second to come up with the theory of gravity or relativity; we don't usually remember them. But for some reason we remember Turing for being second in something :) Just a fun fact :)

  • @GeorgePaul82
    @GeorgePaul82 3 months ago +1

    Wow, that's strange timing. I'm in the middle of reading the book "The Dream Machine" by Mitchell Waldrop. Have you read that yet? It's about these exact same people.

  • @jakobpcoder
    @jakobpcoder 3 months ago +1

    This is the best documentary on this topic I have ever seen. It's so well researched, it's like doing the whole Wikipedia dive.

  • @gabotron94
    @gabotron94 3 months ago +1

    Would love to hear you talk about Doug Lenat's Cyc and whatever happened to that approach to AI.

  • @ktvx.94
    @ktvx.94 3 months ago +1

    Damn, we're really coming full circle. We've been hearing eerily similar things from people in similar roles to the folks in this video.

  • @perceptron-1
    @perceptron-1 3 months ago

    It is not enough to digitally model today's most common LLMs for Artificial Intelligence; it doesn't matter whether it is 1 bit or 1 trit (1.58 bits = log2(3)), it has to be done with working ANALOG hardware!
    If software, then Machine Learning (an algorithm).
    If hardware, then a Learning Machine (hardware that is better and faster than an algorithm).
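    For anyone wondering what a 1.58-bit (ternary) weight looks like in software, a toy sketch (assuming NumPy and a made-up mean-magnitude scaling; not any specific paper's exact recipe):

        import numpy as np

        def ternarize(W, eps=1e-8):
            # Map each weight to {-1, 0, +1} with one shared scale (log2(3) ~ 1.58 bits per weight)
            scale = np.abs(W).mean() + eps
            return np.clip(np.round(W / scale), -1, 1), scale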

  • @unmanaged
    @unmanaged 3 months ago +1

    Great video, love the look back at currently used technology.

  • @cthutu
    @cthutu 3 months ago

    Great great video. But the McCulloch-Pitts Neuron didn't use weights. You displayed diagrams showing weights whenever you mentioned it.

  • @fredinit
    @fredinit 3 months ago

    Beyond the usual kudos to Jon and his research: the primary reason much of this area has come up short, and will continue to come up short, of what a human brain can do is scale and complexity. The human brain is WAY more complex than even the combination of the current LLM systems. Sitting at over 50,000 cells and 150,000,000 synaptic connections per cubic mm (Science: A petavoxel fragment of human cerebral cortex reconstructed at nanoscale resolution), and using less than 300 watt-hours of energy per day, the brain is a formidable piece of wetware.
    With advances in computer hardware and software we'll get there some day. But not for a long time. Until then, remember that all the current LLMs are 100% artificial and 0% intelligent.

  • @thomascorner3009
    @thomascorner3009 1 month ago

    Thank you for this segment, and the asianometry (strange name 🙂) channel. Lots of interesting stuff. I have worked in the field of neural networks for many years, and what I find most striking is how the field has been plagued by researchers who project the most abstract brain functions onto mechanisms with negligible complexity (here, Rosenblatt's hyperboles about a couple of linear units with dynamic weights). This is bad for the image of the field (especially to the general public who funds this research) but also for the field itself, where new ideas have to fight these oversimplifications to be recognized. The work of people like Stephen Gross and Stanislas Dehaene (who once, in a talk at the Montreal Neurological Institute in the 2000s, likened the process of a human becoming conscious of some stimulus to a printer that turns on to print a document) provides unfortunate examples. But global warming will probably make this a moot point anyway: human society's inability to manage the responsibilities that come from the technology that our brain has allowed us to develop (together with the profit-at-all-cost economic model used to exploit it) will destroy us before we can understand the organ that made it possible. What a shame...

  • @noelwalterso2
    @noelwalterso2 3 months ago +1

    The title should be "the rise and rise of the perceptron" since it's the basic idea behind nearly all modern AI.

  • @iRiShNFT
    @iRiShNFT 1 month ago

    Your audio is always WAY too low compared to everything else online... I have to turn the speakers up too high and then back down after your videos...
    No other notes, love your videos... nobody else is going to teach us this nerdy shit =)

  • @robertpearson8546
    @robertpearson8546 3 months ago

    Threshold gates are NOT just Boolean circuits. Boolean logic is a subset of threshold logic: you can simulate any Boolean gate with a single threshold gate, but not vice versa.
    Threshold logic gates are not neural networks, either. Neural networks use simulated threshold logic gates.
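    A small sketch of the distinction (plain Python, illustrative only): a threshold gate fires on a weighted-sum comparison, and the usual Boolean gates fall out as special cases.

        def threshold_gate(inputs, weights, theta):
            # McCulloch-Pitts style unit: fires iff the weighted sum reaches the threshold
            return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

        AND = lambda a, b: threshold_gate((a, b), (1, 1), 2)
        OR  = lambda a, b: threshold_gate((a, b), (1, 1), 1)
        NOT = lambda a:    threshold_gate((a,),  (-1,),  0)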

  • @NanoAGI
    @NanoAGI 2 months ago

    As always I love your videos, the depth of knowledge and the people that comment, as they all have interesting stories about what is in your videos. So one of the descendants of the symbolic movement was cognitive architectures like SOAR and ACT-R from Newell's theories of Cognition. Symbolic systems are not gone and they perform many tasks that Neural Networks don't do well. However Neural Networks do something so much better than Cognitive Systems, and that is in getting all the data and knowledge of the world into the neural network, and being able to extract it out. There is no way you can program all of that as rules in symbolic systems. There will be a merger of both systems so they can perform better reasoning and cognitive tasks in the next iteration of all of this. We are really just at the beginning, standing on the shoulders of Giants.

  • @darnice1125
    @darnice1125 3 months ago

    There is no AI, even today; it's just fancy software following rules. Until you show me the software people run on... oh, wait, you can't. So no real AI exists if it is running on software.

  • @JiveDadson
    @JiveDadson 3 months ago

    Before the multi-layer perceptron, statisticians used that exact same model with sigmoid activation functions and called the process "ridge regression." The statisticians knew how to "train" the model using second-order multivariate optimization and "weight decay" methods, which were vastly superior to the ad hoc backpropagation methods that neural network researchers were still using as late as the 1980s. The neural net guys were blinded by their unwarranted certainty that they were onto something new.
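    Roughly what that older statistical recipe looks like in code, as a hypothetical NumPy sketch (using plain first-order gradient steps rather than the second-order methods mentioned above): a sigmoid unit with squared error plus an L2 weight-decay penalty.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def fit_sigmoid_unit(X, y, lam=0.01, lr=0.1, steps=1000):
            # single sigmoid unit, squared error + L2 weight decay (ridge-style penalty)
            w = np.zeros(X.shape[1])
            for _ in range(steps):
                p = sigmoid(X @ w)
                grad = X.T @ ((p - y) * p * (1 - p)) / len(y) + lam * w
                w -= lr * grad
            return w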

  • @belstar1128
    @belstar1128 3 months ago

    Very forward-thinking people, from a time when most people couldn't even comprehend computers. I know a lot of people born after this period, only slightly older than me, who can't even handle Windows 10 and don't believe computers existed when they were young. And I am talking about people born in the 1970s here, not boomers. Yet you had these geniuses born in the late 19th century or early 20th century who made it all possible.

  • @travcat756
    @travcat756 3 months ago

    Minsky & Papert and the XOR problem was the invention of deep learning

  • @warb635
    @warb635 3 months ago

    Russian vessels close to the Belgian coast (in international waters) are being closely watched these days...

  • @AaronSchwarz42
    @AaronSchwarz42 3 months ago +1

    People are like transistors, its how they are connected that makes all the difference

  • @MorgothCreator
    @MorgothCreator 3 months ago

    Nothing has changed; currently AI is in the same state as then, a money grab promising dreams.

  • @onetouchtwo
    @onetouchtwo 3 months ago

    FYI, XOR is pronounced “ex-or” like “ECK-sor”

  • @harambetidepod1451
    @harambetidepod1451 3 months ago

    My CPU is a neural-net processor; a learning computer.

  • @SellamAbraham
    @SellamAbraham 3 months ago

    Minsky flew on the Lolita Express to Epstein Island. What's in his pants?

  • @vikramgogoi3621
    @vikramgogoi3621 3 months ago

    Shouldn't it be "the" Principia Mathematica?

  • @ahnabarnob5004
    @ahnabarnob5004 28 days ago

    Now people use neural networks to draw furry pictures🙂

  • @DamianGulich
    @DamianGulich 3 months ago

    There's more about this early history of artificial intelligence in this 1988 book: Graubard, S. R. (Ed.). (1988). The artificial intelligence debate: False starts, real foundations. MIT Press.
    The chapters also detail a very interesting discussion of related general philosophical problems and limitations of the time.

  • @leannevandekew1996
    @leannevandekew1996 3 months ago +8

    In 1996 neural networks were touted as predicting pollution from combustion sources without any need for chemical or visual monitoring.

    • @alexdrockhound9497
      @alexdrockhound9497 3 months ago

      looks like a bot

    • @leannevandekew1996
      @leannevandekew1996 3 months ago +1

      @@alexdrockhound9497 Why'd you write "channel doesn't have any conte" on your channel ?

    • @alexdrockhound9497
      @alexdrockhound9497 3 months ago

      @@leannevandekew1996 typical bot. trying to deflect. Your profile is AI generated and you look just like adult content bots i see all over the platform.

    • @leannevandekew1996
      @leannevandekew1996 3 months ago +1

      @@alexdrockhound9497 You totally are.

    • @anush_agrawal
      @anush_agrawal 3 months ago +1

      I would stalk you just as you said.

  • @douggolde7582
    @douggolde7582 3 months ago +6

    When I eat pulled pork I gain none of the pig’s memories. I do however gain an essence of the wood (tree) used. The next day I am able to impart this wood knowledge in the men’s room at work. Ahh, hickory with a bit of cherry.

    • @tipwilkin
      @tipwilkin 3 months ago +6

      Idk about you but when I eat pulled pork I feel like a pig

  • @SB-qm5wg
    @SB-qm5wg 3 months ago

    Well I learned a whole lot from this video. TY 👏

  • @0x00official
    @0x00official 3 months ago

    And people still think AI is a new technology

  • @Charles-Darwin
    @Charles-Darwin 3 months ago

    the 'boating accident' is peculiar

  • @ReadThisOnly
    @ReadThisOnly 3 months ago +1

    asianometry my goat

  • @firstnamesurname6550
    @firstnamesurname6550 3 months ago

    Very nice and well-scoped contextualization of the development of NNs... I know that the video is about a specific branch of computer science... but the seminal work for AI research was not Alan Turing's papers; the seminal work for AI and computer science is George Boole's The Laws of Thought (1854), which contains Boolean algebra.

  • @halfsourlizard9319
    @halfsourlizard9319 3 months ago

    symbolic AI was a neat idea ... rip

  • @0MoTheG
    @0MoTheG 3 months ago

    When I first read about NNs around 2000, this was still the state of the matter 30 years later.
    When I was at university, NNs were not a topic.
    Then after 2010 things suddenly changed: training data and FLOPs had become available.

  • @sunroad7228
    @sunroad7228 3 months ago

    “In any system of energy, Control is what consumes energy the most.
    No energy store holds enough energy to extract an amount of energy equal to the total energy it stores.
    No system of energy can deliver sum useful energy in excess of the total energy put into constructing it.
    This universal truth applies to all systems.
    Energy, like time, flows from past to future” (2017).
    Inside Sudan’s Forgotten War - BBC Africa Eye documentary
    ruclips.net/video/KIDMsalYHG8/видео.html

  • @LatentSpaceD
    @LatentSpaceD 3 months ago

    Super happy I found you again! Your content is off the charts amazing! I wish I could Patreon you up - I'm in my 50s, autistic af, and I don't have an income. Appreciate you.. p.s. I thought you said Rosenblatt died in a tragic coding accident! Lmfao. Love the flatworms! Keep on keeping your valuable perception turned on!!

  • @georhodiumgeo9827
    @georhodiumgeo9827 3 months ago

    An explanation of perceptrons and where did they go???...
    Get the heck out of my head, I was literally just wondering about this.

  • @perceptron-1
    @perceptron-1 3 months ago

    I built a multilayer perceptron from 4096 operational amplifiers in the '80s. It was a kind of analog computer where the weighting values could be set with electronic potentiometers controlled from a digital computer through a digital port, and the freely reorganizable switch matrix could be set with electronic switches: how many layers to have and in what matrix.
    It beat the digital machines of the time in real-time speech recognition and live speech generation. Nowadays I want to integrate this into an IC and sell it as an analog computer.
    We tried to model the backpropagation method using the PROLOG language, but the machines at that time were very slow.
    It took 40 years for the speed and memory size of machines to reach the level where this could be realized.
    The so-called scientific paper work was very far from the practical solutions realized then and now; theory is a couple of decades behind, because a lot of technical results achieved in practice could not be published.

  • @AABB-px8lc
    @AABB-px8lc 3 months ago

    I see what you did there. Year 3030, a history-of-AI essay: "As we know, our new hyperdeepinnercurlingdoubleflashing neural network is almost working; we need a few more tiny touches and literally 2 extra layers to show its awesomeness next year, in 3031." And again, and again.

  • @AngelosLakrintis
    @AngelosLakrintis 3 months ago

    Everything old is new again

  • @renanmonteirobarbosa8129
    @renanmonteirobarbosa8129 3 months ago

    MLPs are very prominent still. Also Attractor NNs are very popular, transformers would not exist without ANNs.

  • @londomolari5715
    @londomolari5715 3 months ago

    I find it ironic/devious that Minsky criticized Perceptrons for inability to scale. None of the little toy systems that came out of MIT or Yale (Schank) scaled.

  • @bluedragontoybash2463
    @bluedragontoybash2463 3 months ago

    This (somewhat) has nothing to do with Asia

  • @Bluelagoonstudios
    @Bluelagoonstudios 3 months ago

    Wow, didn't know they researched that back then, so long ago. Thank you for educating me on this matter. Today, AI is amazing already. I developed a USB reader/ tester with GPT4. The code that it wrote was spot on. The rest was just electronics, an amazing tool.

  • @mattheide2775
    @mattheide2775 3 months ago

    I enjoy this channel more than I understand the subjects covered. I worry that AI will be a garbage in garbage out product. It seems like a product forced upon me and I don't like it at all. Thanks for the video.

  • @kevin-jm3qb
    @kevin-jm3qb 3 months ago

    As a fellow 4 hour sleeper. Any advice on brain health. I'm getting paranoid.

  • @halfsourlizard9319
    @halfsourlizard9319 3 months ago

    protip: 'xor' is pronounced 'x or' not 'x-o-r'

  • @bogoodski
    @bogoodski 3 months ago

    I completed a machine learning course from Cornell a little before genAI became really popular and we had to learn how to code a basic Perceptron. The rare times I see it mentioned, I always feel like I have some special, unique insight. (I definitely do not!)
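    For anyone curious, that "basic Perceptron" exercise is only a few lines; a toy sketch (assuming NumPy and labels in {-1, +1}, not the Cornell course's actual code):

        import numpy as np

        def train_perceptron(X, y, epochs=20):
            # Rosenblatt's rule: nudge the weights only when a sample is misclassified
            w, b = np.zeros(X.shape[1]), 0.0
            for _ in range(epochs):
                for xi, yi in zip(X, y):
                    if yi * (xi @ w + b) <= 0:
                        w += yi * xi
                        b += yi
            return w, b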

  • @smoggert
    @smoggert 3 months ago +2

    🎉

  • @chinchenhanchi
    @chinchenhanchi 3 months ago

    I was just studying this subject in university 😮 one of the many lectures was about the history of AI.
    What a coincidence

  • @Phil-D83
    @Phil-D83 3 months ago

    Minsky is currently frozen, waiting for return after his untimely death in 2016 or so

  • @darelsmith2825
    @darelsmith2825 3 months ago

    ELIZA: "Cat got your tongue?" I had a Boolean Logic class @ LSU. Very interesting.

  • @perceptron-1
    @perceptron-1 3 months ago

    I'm the PERCEPTRON
    Thank you for making this movie.

  • @MostlyPennyCat
    @MostlyPennyCat 3 months ago

    I took a genetic algorithms and neural networks module at university.
    In the exam we would train and solve simple neural networks on paper with a calculator.
    Good fun, this was in 2000.

  • @gscotb
    @gscotb 3 months ago

    A significant moment is when the instructor leaves the plane & says "do a couple takeoffs & landings".

  • @pvtnewb
    @pvtnewb 3 months ago

    As I recall, AMD's Zen uarch also uses some form of perceptron for its BTB / branch prediction.
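    The general idea of a perceptron branch predictor, as a toy sketch (textbook-style, not AMD's actual design; the names and the threshold value are made up): keep one weight per history bit, predict from the sign of the weighted sum, and train only on mispredictions or weak sums.

        def predict_and_train(weights, history, taken, threshold=32):
            # history entries are +1 (taken) / -1 (not taken); weights[0] is the bias
            y = weights[0] + sum(w * h for w, h in zip(weights[1:], history))
            prediction = y >= 0
            t = 1 if taken else -1
            if (prediction != taken) or abs(y) <= threshold:
                weights[0] += t
                for i, h in enumerate(history):
                    weights[i + 1] += t * h
            return prediction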

  • @bharasiva96
    @bharasiva96 3 months ago

    What a fantastic video tracing the history of neural nets. It would also be really useful if you could put up links to the papers mentioned in the video in the description.

  • @subnormality5854
    @subnormality5854 3 months ago

    Amazing that some of this work was done at Dartmouth during the days of 'Animal House'

  • @hisuiibmpower4
    @hisuiibmpower4 3 months ago

    Hebb's postulate is still being taught in neuroscience; the only difference is that a time element has been added, and it's now called "spike-timing-dependent plasticity".
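    A toy sketch of that time element (illustrative Python with made-up constants): the weight change depends on whether the pre-synaptic spike comes before or after the post-synaptic one.

        import math

        def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
            # pre-before-post strengthens the synapse, post-before-pre weakens it
            dt = t_post - t_pre
            if dt > 0:
                return a_plus * math.exp(-dt / tau)
            return -a_minus * math.exp(dt / tau)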

  • @jamesjensen5000
    @jamesjensen5000 3 months ago

    Is every cell conscious?

  • @marshallbanana819
    @marshallbanana819 3 months ago

    This guy has been messing with us for so long I can't tell if the "references and sources go here" is a bit, or a mistake.

  • @alonalmog1982
    @alonalmog1982 3 months ago

    Wow! well explained, and a way more engaging story than what I expected.

  • @JohnVKaravitis
    @JohnVKaravitis 3 months ago

    0:12 Is that Turing on the right

  • @sinfinite7516
    @sinfinite7516 3 months ago

    Great video :)

  • @Anttisinstrumentals
    @Anttisinstrumentals 3 months ago

    Every time I hear the word multifaceted I think of ChatGPT.

  • @Chimecho-delta
    @Chimecho-delta 3 months ago

    Worth reading up on Walter Pitts! Interesting life and work

  • @DarkShine101
    @DarkShine101 3 months ago

    Part 2 when?

  • @dcamron46
    @dcamron46 3 months ago

    “A book called Principia Mathematica”, you mean Newton’s Principia..?

    • @DSCH4
      @DSCH4 1 month ago

      Russell and Whitehead. An attempt to ground mathematics in perfect rigor via type theory.