The First Neural Networks

  • Published: 2 Feb 2025

Comments • 214

  • @dinoscheidt
    @dinoscheidt 7 months ago +281

    I've been in ML since 2013 and have to say: wow… you and your team really do deserve praise for solid research and delivery. I'll bookmark this video to point people to. Thank you

    • @goldnutter412
      @goldnutter412 7 months ago +21

      He's great! His dad was a chip designer... go figure :) Amazing backlog of content, sir.
      Especially the chip videos.

    • @chinesesparrows
      @chinesesparrows 7 months ago +20

      The span and depth of topics covered with an eye for technical detail is truly awesome and rare. Smart commenters point out the occasional inaccuracies (understandable given the span of topics), which benefits everyone as well.

    • @WyomingGuy876
      @WyomingGuy876 7 months ago +1

      Dude, try living through all of this.

    • @PhilippBlum
      @PhilippBlum 7 months ago +22

      He has a team? I assumed he was just grinding and great at this.

    • @fintech1378
      @fintech1378 7 months ago +3

      He is an independent AI researcher.

  • @strayling1
    @strayling1 7 months ago +80

    Please continue the story. A cliffhanger like that deserves a sequel!
    Seriously, this was a truly impressive video and I learned new things from it.

    • @rotors_taker_0h
      @rotors_taker_0h 7 months ago +1

      In the '80s Hinton, LeCun, Schmidhuber and others developed backpropagation and CNNs (convolutional NNs), then RNNs and LSTMs in the '90s, but it was still a very niche area of study with "limited potential" because NNs always performed a bit worse than other methods, until a couple of breakthroughs in speech recognition and image classification at the end of the '00s. In 2012 AlexNet brought instant hype to CNNs, which was followed by one-liners that critically improved the quality and stability of training: better initial values, sigmoid -> ReLU, dropout, normalization (forcing values into a certain range), ResNet (just adding the values of a previous layer to a later one; a minimal sketch of such a residual connection follows this thread). That allowed training models so much bigger and deeper that they started to dominate everything else by sheer size. Then came the Transformer in 2017, which allowed treating basically any input as a sequence of tokens, and the scaling hypothesis, which brought us to the present time with "small NNs" being "just" several billion parameters.
      Between 2012 and now there has also been extreme progress in the hardware for running these networks: optimized precision (it turned out you don't need 32-bit floats to train/use NNs; the lowest possible is 1 bit, and a good amount is 4-bit integer, which is around 100x faster in hardware), new instructions, matmuls, sparsity, tensor cores, systolic arrays and whatnot, giving truly insane speedups. For comparison, AlexNet was trained on 2 GTX 580s, so about 2.5 TFLOPS of compute. This year we have ultrathin, light laptops with 120 TOPS, server cards with 20,000 TOPS, and the biggest clusters are in the range of 100,000 such cards, so in total about a billion times more compute is being thrown at the problem than 12 years ago. And 12 years ago it was 1000x more than at the start of the century, so we have roughly a trillion times more compute to make neural networks work, and we are still not anywhere close to being done. Of course, the early pioneers had no chance without that much compute.

    • @honor9lite1337
      @honor9lite1337 7 months ago +7

      2nd that 😊

    • @thomassynths
      @thomassynths 7 months ago +3

      "The Second Neural Networks"

  • @PeteC62
    @PeteC62 7 months ago +17

    Your videos are always well worth the time to watch, thanks!

  • @soanywaysillstartedblastin2797
    @soanywaysillstartedblastin2797 7 months ago +20

    Got this recommended to me after getting my first digit recognition program working. The neural networks know I’m learning about neural networks

  • @fibersden638
    @fibersden638 7 months ago +38

    One of the top education channels on YouTube for sure

  • @MFMegaZeroX7
    @MFMegaZeroX7 7 months ago +40

    I love seeing Minsky come up, as I have a (tenuous) connection to him: he is my academic "great great grand advisor." That is, my PhD advisor's PhD advisor's PhD advisor's PhD advisor was Minsky. Unfortunately, stories about him never got passed down; I only have a bunch of stories about my own advisor and his advisor, so it is interesting seeing what he was up to.

  • @JohnHLundin
    @JohnHLundin 7 months ago +3

    Thanks Jon, as someone who tinkered with neural nets in the 1980s and 90s, this history connects the evolutionary dots and illuminates the evolution/genesis of those theories & tools we were working with... J

  • @dwinsemius
    @dwinsemius 7 months ago +24

    The one name missing from this from my high-school memory is Norbert Wiener, author of "Cybernetics". I do remember a circa 1980 effort of mine to understand the implications of rule-based AI for my area of training (medicine). The Mycin program (infectious disease diagnosis and management) sited at Stanford could have been the seed crystal for a very useful application of the symbol-based methods. It wasn't maintained and expanded after its initial development. It took too long to do data input and didn't handle edge cases or apply common sense. It was, however, very good at difficult "university level specialist" problems. I interviewed Dr Shortliffe and his assessment was that AI wouldn't influence the practice of medicine for 20-30 years. I was hugely disappointed. At the age of 30 I thought it should be just around the corner. So here it is 45 years later and symbolic methods have languished. I think there needs to be one or more "symbolic layers" in the development process of neural networks. For one thing it would allow insertion of corrections and offer the possibility of analyzing the "reasoning".

    • @honor9lite1337
      @honor9lite1337 7 months ago

      Your storyline is decades long, so how old are you? 😮

    • @dwinsemius
      @dwinsemius 7 months ago

      @@honor9lite1337 7.5 decades

  • @jakobpcoder
    @jakobpcoder 7 months ago +1

    This is the best documentary on this topic I have ever seen. It's so well researched, it's like doing the whole Wikipedia dive.

  • @hififlipper
    @hififlipper 7 months ago +52

    "A human being without life" hurts too much.

    • @dahahaka
      @dahahaka 7 months ago +4

      Avg person in 2024

  • @amerigo88
    @amerigo88 7 months ago +50

    Interesting that Claude Shannon's observations on the meaning of information being reducible to binary came about at virtually the same time as the early neural network papers.
    Edit - "A Mathematical Theory of Communication" by Shannon was published in 1948. Also, Herb Simon was an incredible mind.

  • @BobFrTube
    @BobFrTube 7 months ago +1

    Thanks for bringing back memories of the class I took from Minsky and Papert (short a, not long a, in pronouncing his name) in 1969, just when the book had come out. You filled in some of the backstory that I wasn't aware of.

    • @JiveDadson
      @JiveDadson 7 months ago

      That book set AI back by decades.

  • @youcaio
    @youcaio 7 months ago +1

    Thanks!

  • @stevengill1736
    @stevengill1736 7 months ago +10

    Gosh, I remember studying physiology in the late 60s when human nervous system understanding was still in the relative dark ages - for instance, plasticity was still unknown, and they taught us that your nerves stopped growing at a young age and that was it.
    But I had no idea how far they'd come with machine learning in the Perceptron - already using tunable weighted responses simulating neurons? Wow!
    If they could have licked that multilayer problem it would have sped things up quite a bit.
    You mentioned the old chopped-up planaria trick - are you familiar with the work of Dr Michael Levin? His team is carrying the understanding of morphogenesis to new heights - amazing stuff! Thank you kindly for your videos! Cheers.

    • @klauszinser
      @klauszinser 7 months ago

      There must have been a talk by Demis Hassabis on 14 Nov 2017, in the late morning, at the Society for Neuroscience meeting in Washington. In this keynote lecture he told the audience that AI is nothing more than applied brain science. He must have said (I only have the German translation of the wording) 'First we solve the problem and understand what intelligence is (possibly in the more German usage of the word), and then we solve all the other problems.' The 6,000-8,000 people must have been extremely quiet, knowing what this young man had already achieved. Unfortunately I never found the video. (Source: Manfred Spitzer)

    • @honor9lite1337
      @honor9lite1337 7 months ago

      Studying in the late 60s? Even my dad was born in the late 70s; how old are you?

  • @HaHaBIah
    @HaHaBIah 7 months ago +13

    I love listening to this with our current modern context

  • @TerryBollinger
    @TerryBollinger 7 months ago +3

    The difficulty with Minsky's adamant focus on symbolic logic was his failure to recognize that the vast majority of biological sensory processing is dedicated to creating meaningful, logically usable symbolic representations of a complicated physical world.
    Minsky’s position thus was a bit like saying that once you understand cream, you have all you need to build a cow.

  • @tracyrreed
    @tracyrreed 7 months ago +35

    5:14 Look at this guy, throwing out Principia Mathematica without even name-dropping its author. 😂

    • @PeteC62
      @PeteC62 7 months ago +4

      It's nothing new. Tons of people do that.

    • @theconkernator
      @theconkernator 7 months ago +11

      It's not Isaac Newton, if that's what you were thinking. It's Russell and Whitehead.

    • @PeteC62
      @PeteC62 7 months ago

      Well that's no good. I can't think of a terrible pun on their names!

    • @dimBulb5
      @dimBulb5 7 months ago +1

      @@theconkernator Thanks! I was definitely thinking Newton.

    • @honor9lite1337
      @honor9lite1337 7 months ago

      @theconkernator yeah? 😮

  • @francescotron8508
    @francescotron8508 7 months ago +26

    You always bring up interesting topics. Keep it up, it's great work 👍.

  • @helloworldcsofficial
    @helloworldcsofficial 7 months ago +1

    This was great. A more in-depth one would be awesome: the fall and rise of the perceptron, going from single to multiple layers.

  • @NanoAGI
    @NanoAGI 6 months ago

    As always I love your videos, the depth of knowledge, and the people that comment, as they all have interesting stories about what is in your videos. One of the descendants of the symbolic movement was the cognitive architectures like SOAR and ACT-R, from Newell's theories of cognition. Symbolic systems are not gone, and they perform many tasks that neural networks don't do well. However, neural networks do something so much better than cognitive systems, and that is getting all the data and knowledge of the world into the network and being able to extract it out. There is no way you can program all of that as rules in symbolic systems. There will be a merger of both systems so they can perform better reasoning and cognitive tasks in the next iteration of all of this. We are really just at the beginning, standing on the shoulders of giants.

  • @Wobbothe3rd
    @Wobbothe3rd 7 months ago +7

    Recurrent Neural Networks are about to make a HUGE comeback.

    • @FrigoCoder
      @FrigoCoder 7 months ago

      @luciustarquiniuspriscus1408 Mamba is already a valid alternative to transformers, and it is some kind of variant of linear recurrent neural networks.
      Also, I do not see how we could avoid recurrent neural networks for music generation; they or their variants seem like a perfect fit for that very specific generation task.

    • @facon4233
      @facon4233 7 months ago

      xLSTM FTW

    • @clray123
      @clray123 7 months ago

      @luciustarquiniuspriscus1408 The SSM/Mamba papers already address this. In fact you can train a small GPT-3-like model using Mamba right here and now, with excellent performance (both in terms of training speed and outputs), and with "infinite attention" (well, limited by the capacity of the hidden state vector).

  • @rubes8065
    @rubes8065 7 months ago +2

    I absolutely love your channel. I look forward to your new videos. Thank you. I’ve learned sooo much 🥰

  • @TheChipMcDonald
    @TheChipMcDonald 7 months ago +1

    The Einstein, Oppenheimer, Bohr, Feynman, Schrödinger and Heisenbergs of AI. The McCulloch-Pitts neural network and Rosenblatt's training paradigm took 70 years to get to "here" and should be acknowledged. I remember as a little kid in the 70s reading articles on different people leading the symbolic movement, and thinking "none of them really seem to know or have conviction in what they're campaigning for".

  • @devsuvara
    @devsuvara 7 months ago +17

    Having worked in ML and done a lot with the perceptron, it feels like we're right where we started: promising the world, providing not much...

    • @andersjjensen
      @andersjjensen 7 months ago

      I guess lonely people will be happy when chat-bots can render audio replies.

    • @chinesesparrows
      @chinesesparrows 7 months ago +3

      This is what real researchers say while companies go as far as to boast that their cat food is powered by AI.

    • @brodriguez11000
      @brodriguez11000 7 months ago +1

      AI winter.

    • @endintiers
      @endintiers 7 months ago

      Disagree. I worked on natural language NNs in the 80s (a failure). Now I'm using what we should no longer call LLMs to do real work, replacing older specialised AIs and improving accuracy. This is for horizon scanning. We are finding valuable insights for our government.

    • @devsuvara
      @devsuvara 7 months ago +1

      @endintiers It's the credibility issue that's the problem. I've stopped using LLMs altogether, even for coding, because they get things wrong often and there are usually better solutions.

  • @JorgeLopez-qj8pu
    @JorgeLopez-qj8pu 7 months ago +10

    SEGA creating an AI computer in 1986 is crazy

  • @LatentSpaceD
    @LatentSpaceD 7 months ago

    Super happy I found you again! Your content is off-the-charts amazing! I wish I could Patreon you up - I'm in my 50s, autistic af, and I don't have an income. Appreciate you.. P.S. I thought you said Rosenblatt died in a tragic coding accident! Lmfao. Love the flatworms! Keep on keeping your valuable perception turned on!!

  • @theorixlux
    @theorixlux 7 months ago +24

    I am probably not the first, but I am surprised at how far back the idea of artificial "intelligence" goes.

    • @lbgstzockt8493
      @lbgstzockt8493 7 months ago +1

      It surprises me how "little" progress we have made in that time. Pretty much every other discipline has made incredible leaps in the past 60-70 years, yet AI is still nowhere near the human brain. Obviously an early perceptron is infinitely worse than a modern LLM, but AGI doesn't really feel any closer than back then.

    • @theorixlux
      @theorixlux 7 months ago +8

      @lbgstzockt8493 If you're comparing what a few smart computer geeks did over 80 years to what Mother Nature did over 3-ish billion years, then I would argue it's not surprising AT ALL that we haven't simulated a human brain yet...

    • @goldnutter412
      @goldnutter412 7 months ago

      We've been here before
      Before the universe..

    • @theorixlux
      @theorixlux 7 months ago

      @@goldnutter412 ?

    • @AS40143
      @AS40143 7 months ago +2

      The first idea of machines that could think appeared in the 17th century as Leibniz's mill concept

  • @VaebnKenh
    @VaebnKenh 7 months ago +4

    It's pronounced Pæpert, not Pāpert, and that was a bit of a confusing way to present the XOR function: since you set it up with an XY plot, you should have put the inputs on different axes with the values in the middle. Other than that, great video as always 😊
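
    To make the XOR point concrete, here is a minimal sketch (assuming NumPy; the weights are hand-picked for illustration, not trained) showing that a single threshold unit can compute OR, while XOR needs a hidden layer, which is exactly the limitation Minsky and Papert highlighted.

      import numpy as np

      def step(x):
          return (x >= 0).astype(int)

      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

      # A single perceptron handles OR with hand-picked weights.
      w_or, b_or = np.array([1, 1]), -0.5
      print(step(X @ w_or + b_or))       # [0 1 1 1]

      # XOR needs a hidden layer: compute OR and AND, then output "OR and not AND".
      W_h = np.array([[1, 1], [1, 1]])   # both hidden units see both inputs
      b_h = np.array([-0.5, -1.5])       # unit 0 acts as OR, unit 1 as AND
      h = step(X @ W_h + b_h)
      w_o, b_o = np.array([1, -1]), -0.5
      print(step(h @ w_o + b_o))         # [0 1 1 0] = XOR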

  • @bharasiva96
    @bharasiva96 7 months ago

    What a fantastic video tracing the history of neural nets. It would also be really useful if you could put links to the papers mentioned in the video in the description.

  • @danbaker7191
    @danbaker7191 7 months ago +3

    Good summary. Ultimately, even today, there are no functionally useful and agreed definitions of intelligence and thinking. Maybe we're unintentionally approaching this from the back, by making things that sort of work, then later figuring out what's really going on (not yet!)

  • @ktvx.94
    @ktvx.94 7 months ago +1

    Damn, we're really going full circle. We've been hearing eerily similar things from people in similar roles as the folks in this video.

  • @ArturMorgan7491
    @ArturMorgan7491 7 months ago +20

    Please do a video on the decline of British manufacturing, it would be greatly appreciated

    • @MrHashisz
      @MrHashisz 7 months ago

      Nobody cares about the Brits

  • @alonalmog1982
    @alonalmog1982 7 months ago

    Wow! Well explained, and a way more engaging story than I expected.

  • @subnormality5854
    @subnormality5854 7 months ago

    Amazing that some of this work was done at Dartmouth during the days of 'Animal House'

  • @-gg8342
    @-gg8342 7 months ago

    Very interesting topic

  • @AaronSchwarz42
    @AaronSchwarz42 7 months ago +1

    People are like transistors: it's how they are connected that makes all the difference.

  • @MostlyPennyCat
    @MostlyPennyCat 7 months ago

    I took a genetic algorithms and neural networks module at university.
    In the exam we would train and solve simple neural networks on paper with a calculator.
    Good fun; this was in 2000.

  • @freemanol
    @freemanol 7 months ago +1

    I think there's one guy that doesn't receive much attention, Demis Hassabis. I knew him as the founder of the game company that made Republic: The Revolution, but he then went on to take a PhD in Neuroscience. I wondered why. Now it makes sense. He founded DeepMind

  • @gscotb
    @gscotb 7 months ago +1

    A significant moment is when the instructor leaves the plane & says "do a couple takeoffs & landings".

  • @TymexComputing
    @TymexComputing 7 months ago +5

    PERCEPTRON 😍😍

  • @hisuiibmpower4
    @hisuiibmpower4 7 months ago

    Hebb's postulate is still being taught in neuroscience; the only difference is that a time element has been added, and it's now called "spike-timing-dependent plasticity".
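
    As a rough illustration of that difference (my sketch, not from the video; the constants are made up), a rate-based Hebbian update next to a toy pair-based STDP rule where the sign of the change depends on spike timing:

      import numpy as np

      def hebb_update(w, pre, post, lr=0.01):
          # Classic Hebb: the weight grows when pre- and postsynaptic activity coincide.
          return w + lr * pre * post

      def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
          # Toy STDP: pre-before-post strengthens, post-before-pre weakens,
          # with an exponential dependence on the timing difference (in ms).
          dt = t_post - t_pre
          if dt > 0:
              return w + a_plus * np.exp(-dt / tau)
          return w - a_minus * np.exp(dt / tau)

      w = 0.5
      print(hebb_update(w, pre=1.0, post=1.0))        # 0.51
      print(stdp_update(w, t_pre=10.0, t_post=15.0))  # potentiation
      print(stdp_update(w, t_pre=15.0, t_post=10.0))  # depression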

  • @Bluelagoonstudios
    @Bluelagoonstudios 7 months ago

    Wow, didn't know they researched that back then, so long ago. Thank you for educating me on this matter. Today, AI is amazing already. I developed a USB reader/ tester with GPT4. The code that it wrote was spot on. The rest was just electronics, an amazing tool.

  • @perceptron-1
    @perceptron-1 7 months ago

    I'm the PERCEPTRON
    Thank you for making this movie.

  • @firstnamesurname6550
    @firstnamesurname6550 7 months ago

    Very nice and well-scoped contextualization of the development of NNs ... I know that the video is about a specific branch of computer science ... but the seminal work for AI research was not Alan Turing's papers ... the seminal work for AI and computer science is George Boole's The Laws of Thought (1854), which contains Boolean algebra.

  • @gabotron94
    @gabotron94 7 months ago +1

    Would love to hear you talk about Doug Lenat's Cyc and whatever happened to that approach to AI.

  • @AndyChananLevin
    @AndyChananLevin 7 months ago

    Terrific

  • @Chimecho-delta
    @Chimecho-delta 7 months ago

    Worth reading up on Walter Pitts! Interesting life and work

  • @leannevandekew1996
    @leannevandekew1996 7 months ago +8

    In 1996 neural networks were touted as predicting pollution from combustion sources without any need for chemical or visual monitoring.

    • @alexdrockhound9497
      @alexdrockhound9497 7 months ago

      looks like a bot

    • @leannevandekew1996
      @leannevandekew1996 7 months ago +1

      @@alexdrockhound9497 Why'd you write "channel doesn't have any conte" on your channel ?

    • @alexdrockhound9497
      @alexdrockhound9497 7 months ago

      @leannevandekew1996 Typical bot, trying to deflect. Your profile is AI-generated and you look just like the adult content bots I see all over the platform.

    • @leannevandekew1996
      @leannevandekew1996 7 months ago +1

      @@alexdrockhound9497 You totally are.

    • @anush_agrawal
      @anush_agrawal 7 months ago +1

      I would stalk you just as you said.

  • @thomascorner3009
    @thomascorner3009 6 months ago +1

    Thank you for this segment, and for the asianometry (strange name 🙂) channel. Lots of interesting stuff. I have worked in the field of neural networks for many years, and what I find most striking is how the field has been plagued by researchers who project the most abstract brain functions onto mechanisms with negligible complexity (here, Rosenblatt's hyperboles about a couple of linear units with dynamic weights). This is bad for the image of the field (especially to the general public, who finances this research) but also for the field itself, where new ideas have to fight these oversimplifications to be recognized. The work by people like Stephen Gross and Stanislas Dehaene (who once, in a talk at the Montreal Neurological Institute in the 2000s, likened the process of a human becoming conscious of some stimulus to a printer that turns on to print a document) is an unfortunate example. But global warming will probably make this a moot point anyway: human society's inability to manage the responsibilities that come from the technology our brains have allowed us to develop (together with the profit-at-all-cost economic model used to exploit it) will destroy us before we can understand the organ that made it possible. What a shame...

  • @rickharold7884
    @rickharold7884 7 months ago

    Love it. Awesome summary.

  • @bogoodski
    @bogoodski 7 months ago

    I completed a machine learning course from Cornell a little before genAI became really popular and we had to learn how to code a basic Perceptron. The rare times I see it mentioned, I always feel like I have some special, unique insight. (I definitely do not!)

  • @yellow1pl
    @yellow1pl 7 months ago +1

    Hi! Great fan of your channel! :)
    However, this time I'm a bit puzzled. Several years ago I read somewhere about Marvin Minsky talking about how he built this (awesome, in my opinion) mechanical neural network. Since that time I was sure his network was the first. However, here you talk about a neural network built almost a decade later and call it the first one... You mentioned that Marvin Minsky did some neural network research previously, but he left. Ok, fine, so why is his neural network that was built before the perceptron not the first one in your opinion? :) Maybe next video? :)
    Also - to my knowledge Turing's paper was published in 1937, not '36. In 1936 Alonzo Church published his paper related to the Entscheidungsproblem. I don't know who was the second to come up with the theory of gravity or relativity; we don't usually remember them. But for some reason we remember Turing for being second in something :) Just a fun fact :)

  • @DamianGulich
    @DamianGulich 7 months ago

    There's more about this early history of artificial intelligence in this 1988 book: Graubard, S. R. (Ed.). (1988). The artificial intelligence debate: False starts, real foundations. MIT Press.
    The chapters also detail a very interesting discussion of related general philosophical problems and limitations of the time.

  • @belstar1128
    @belstar1128 7 months ago

    Very forward-thinking people, from a time when most people couldn't even comprehend computers. I know a lot of people born after this period, only slightly older than me, who can't even handle Windows 10 and don't believe computers existed when they were young. And I am talking about people born in the 1970s here, not boomers. Yet you had these geniuses born in the late 19th century or early 20th century who made it all possible.

  • @0MoTheG
    @0MoTheG 7 months ago

    When I first read about NNs around 2000, this was still the state of the matter 30 years later.
    When I was at university, NNs were not a topic.
    Then after 2010 things suddenly changed. Training data and FLOPS had become available.

  • @khalidelgazzar
    @khalidelgazzar 1 month ago

    Thank you

  • @Alex.The.Lionnnnn
    @Alex.The.Lionnnnn 7 months ago +2

    I love how cheesy that name is. "The Perceptron!" Is it one of the good transformers or the bad ones??

    • @LimabeanStudios
      @LimabeanStudios 7 months ago

      Working in physics, one of the first things I learned is that half the names are just "-tron", and it always makes me giggle.

  • @noelwalterso2
    @noelwalterso2 7 months ago +1

    The title should be "the rise and rise of the perceptron" since it's the basic idea behind nearly all modern AI.

  • @luisluiscunha
    @luisluiscunha 7 months ago

    I needed a video to do the dishes, after spending a day making pedagogical materials on Stable Diffusion. Now I will rewind and delight myself seeing this video carefully. *Thank you*

  • @sinfinite7516
    @sinfinite7516 7 months ago

    Great video :)

  • @JiveDadson
    @JiveDadson 7 months ago

    Before the multi-layer perceptron, statisticians used that exact same model with sigmoid activation functions and called the process "ridge regression." The statisticians knew how to "train" the model using second-order multivariate optimization and "weight decay" methods, which were vastly superior to the ad hoc backpropagation methods that neural network researchers were still using as late as the 1980s. The neural net guys were blinded by their unwarranted certainty that they were onto something new.
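
    For readers wondering what "weight decay" means in that statistical setting, a minimal sketch (assuming NumPy; the data and penalty strength are made up, and it uses plain first-order gradient steps rather than the second-order optimization mentioned above): a sigmoid model fit with an L2 penalty, which is the ridge-style shrinkage the comment refers to.

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 3))
      y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)   # synthetic labels

      w, lam, lr = np.zeros(3), 0.1, 0.1
      for _ in range(500):
          p = sigmoid(X @ w)
          # Logistic-loss gradient plus the L2 "weight decay" term lam * w.
          grad = X.T @ (p - y) / len(y) + lam * w
          w -= lr * grad

      print(w)   # coefficients shrunk toward zero relative to an unpenalized fit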

  • @Ray_of_Light62
    @Ray_of_Light62 7 months ago +1

    I studied the perceptron in the '70s. My conclusion was that the hardware was not up to the task. Using a matrix of photoresistors as the input proved the design principle, but it couldn't be brought to a working prototype.

  • @pvtnewb
    @pvtnewb 7 months ago

    As I recall, AMD's Zen microarchitecture also uses some form of perceptron in its BTB or branch prediction.
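
    A sketch of the general idea of a perceptron branch predictor (in the spirit of the Jiménez and Lin papers); this is a generic illustration with arbitrary table size and history length, not AMD's actual Zen implementation.

      import numpy as np

      HIST_LEN, TABLE_SIZE, THETA = 16, 1024, 30     # illustrative sizes only

      weights = np.zeros((TABLE_SIZE, HIST_LEN + 1), dtype=int)   # +1 for a bias weight
      history = np.ones(HIST_LEN, dtype=int)                      # +1 = taken, -1 = not taken

      def predict(pc):
          # Dot product of this branch's weights with recent branch history.
          w = weights[pc % TABLE_SIZE]
          y = int(w[0] + w[1:] @ history)
          return y, y >= 0                   # predict "taken" when the sum is non-negative

      def train(pc, y, taken):
          t = 1 if taken else -1
          w = weights[pc % TABLE_SIZE]
          # Perceptron update: only on a misprediction or a low-confidence output.
          if (y >= 0) != taken or abs(y) <= THETA:
              w[0] += t
              w[1:] += t * history
          history[:] = np.roll(history, 1)
          history[0] = t

      y, guess = predict(0x400123)
      train(0x400123, y, taken=True)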

  • @unmanaged
    @unmanaged 7 months ago +1

    Great video, love the look back at currently used technology.

  • @perceptron-1
    @perceptron-1 7 months ago

    I built a multilayer perceptron from 4096 operational amplifiers in the '80s. It was a kind of analog computer where the weight values could be set with electronic potentiometers controlled from a digital computer through a digital port, and a freely reorganizable switch matrix, set with electronic switches, determined how many layers there were and in what arrangement.
    It beat the digital machines of the time at real-time speech recognition and live speech generation. Nowadays I want to integrate this into an IC and sell it as an analog computer.
    We tried to model the backpropagation method using the PROLOG language, but the machines at that time were very slow.
    It took 40 years for the speed and memory size of machines to reach the level where this could be realized.
    The so-called scientific paper work was very far from the practical solutions realized then and now; the theory is a couple of decades behind, because a lot of technical results achieved in practice could not be published.

  • @chinchenhanchi
    @chinchenhanchi 7 months ago

    I was just studying this subject at university 😮 one of the many lectures was about the history of AI.
    What a coincidence.

  • @SB-qm5wg
    @SB-qm5wg 7 months ago

    Well I learned a whole lot from this video. TY 👏

  • @JohnVKaravitis
    @JohnVKaravitis 7 months ago

    0:12 Is that Turing on the right?

  • @londomolari5715
    @londomolari5715 7 months ago

    I find it ironic/devious that Minsky criticized perceptrons for their inability to scale. None of the little toy systems that came out of MIT or Yale (Schank) scaled either.

  • @cthutu
    @cthutu 7 months ago

    Great, great video. But the McCulloch-Pitts neuron didn't use weights, and you displayed diagrams showing weights whenever you mentioned it.
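
    To illustrate the distinction (my sketch, not from the video): a McCulloch-Pitts unit just counts active excitatory inputs against a fixed threshold, with any active inhibitory input vetoing the output, whereas the later Rosenblatt-style perceptron has real-valued, learnable weights.

      def mcculloch_pitts(excitatory, inhibitory, threshold):
          # No weights: an active inhibitory input vetoes; otherwise the unit
          # fires if enough excitatory inputs are active.
          if any(inhibitory):
              return 0
          return 1 if sum(excitatory) >= threshold else 0

      def perceptron(inputs, weights, bias):
          # Weighted sum with learnable, real-valued weights.
          s = sum(w * x for w, x in zip(weights, inputs)) + bias
          return 1 if s >= 0 else 0

      print(mcculloch_pitts([1, 1, 0], inhibitory=[0], threshold=2))      # 1
      print(perceptron([1, 1, 0], weights=[0.7, 0.4, -0.3], bias=-1.0))   # 1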

  • @nexusyang4832
    @nexusyang4832 7 months ago +1

    I was just watching the video by Formosa TV, their piece on the founder of SuperMicro. Just curious if there is any interest in SuperMicro or its founder (for the English-speaking folks that don't understand Mandarin hehe).... just thought I'd ask. 🙂

  • @GeorgePaul82
    @GeorgePaul82 7 months ago +1

    Wow, that's strange timing. I'm in the middle of reading the book "The Dream Machine" by Mitchell Waldrop. Have you read that yet? It's about these exact same people.

  • @kevin-jm3qb
    @kevin-jm3qb 7 months ago

    As a fellow 4-hour sleeper: any advice on brain health? I'm getting paranoid.

  • @travcat756
    @travcat756 7 months ago

    Minsky & Papert and the XOR problem was the invention of deep learning

  • @fintech1378
    @fintech1378 7 months ago

    Yuxi in the Wired, any audio essay?

  • @fredinit
    @fredinit 7 months ago

    Beyond the usual kudos to Jon and his research: the primary reason much of this area has come up short, and will continue to come up short, of what a human brain can do has to do with scale and complexity. The human brain is WAY more complex than even the combination of the current LLM systems. Sitting at over 50,000 cells and 150,000,000 synaptic connections per cubic mm (Science: "A petavoxel fragment of human cerebral cortex reconstructed at nanoscale resolution"), and using less than 300 Wh of energy per day, the brain is a formidable piece of wetware.
    With advances in computer hardware and software we'll get there some day. But not for a long time. Until then, remember that all the current LLMs are 100% artificial and 0% intelligent.

  • @AABB-px8lc
    @AABB-px8lc 7 months ago

    I see what you did there. Year 3030, a history-of-AI essay: "As we know, our new hyperdeepinnercurlingdoubleflashing neural network is almost working; we need a few more tiny touches and literally 2 extra layers to show its awesomeness in 3031." And again, and again.

  • @Charles-Darwin
    @Charles-Darwin 7 months ago

    the 'boating accident' is peculiar

  • @warb635
    @warb635 7 months ago

    Russian vessels close to the Belgian coast (in international waters) are being closely watched these days...

  • @darelsmith2825
    @darelsmith2825 7 months ago

    ELIZA: "Cat got your tongue?" I had a Boolean Logic class @ LSU. Very interesting.

  • @renanmonteirobarbosa8129
    @renanmonteirobarbosa8129 7 months ago

    MLPs are very prominent still. Also Attractor NNs are very popular, transformers would not exist without ANNs.

  • @onetouchtwo
    @onetouchtwo 7 months ago

    FYI, XOR is pronounced “ex-or” like “ECK-sor”

  • @DarkShine101
    @DarkShine101 7 months ago

    Part 2 when?

  • @Anttisinstrumentals
    @Anttisinstrumentals 7 months ago

    Every time I hear the word "multifaceted" I think of ChatGPT.

  • @jamillairmane1585
    @jamillairmane1585 7 months ago

    Great entry, very à propos!

  • @Phil-D83
    @Phil-D83 7 months ago

    Minsky is currently frozen, waiting for return after his untimely death in 2016 or so

  • @perceptron-1
    @perceptron-1 7 months ago

    It is not enough to digitally model the most common LLMs for artificial intelligence today; it doesn't matter if it is 1-bit or 1 trit (1.58 bits = log2(3)), it has to be done with working ANALOG hardware!
    If software, then Machine Learning (an algorithm).
    If hardware, then a Learning Machine (hardware that is better and faster than an algorithm).
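
    The 1.58 figure is just the information content of a three-valued digit; a one-line check using only the standard library:

      import math
      print(math.log2(3))   # 1.5849..., i.e. about 1.58 bits per trit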

  • @jamesjensen5000
    @jamesjensen5000 7 months ago

    Is every cell conscious?

  • @douggolde7582
    @douggolde7582 7 months ago +6

    When I eat pulled pork I gain none of the pig’s memories. I do however gain an essence of the wood (tree) used. The next day I am able to impart this wood knowledge in the men’s room at work. Ahh, hickory with a bit of cherry.

    • @tipwilkin
      @tipwilkin 7 months ago +6

      Idk about you but when I eat pulled pork I feel like a pig

  • @marshallbanana819
    @marshallbanana819 7 months ago

    This guy has been messing with us for so long I can't tell if the "references and sources go here" is a bit, or a mistake.

  • @IllyrianTraveler
    @IllyrianTraveler 7 months ago

    Didn't I build this in Alpha Centauri?

  • @mattheide2775
    @mattheide2775 7 months ago

    I enjoy this channel more than I understand the subjects covered. I worry that AI will be a garbage in garbage out product. It seems like a product forced upon me and I don't like it at all. Thanks for the video.

  • @georhodiumgeo9827
    @georhodiumgeo9827 7 months ago

    An explanation of perceptrons and where they went???...
    Get the heck out of my head, I was literally just wondering about this.

  • @ReadThisOnly
    @ReadThisOnly 7 months ago +1

    asianometry my goat

  • @harambetidepod1451
    @harambetidepod1451 7 months ago

    My CPU is a neural-net processor; a learning computer.

  • @iRiShNFT
    @iRiShNFT 6 months ago

    Your audio is always WAY too low compared to everything else online ... I have to turn my speakers up too high and then back down after your videos...
    No other notes, love your videos... nobody else is going to teach us this nerdy shit =)

  • @LydellAaron
    @LydellAaron 7 months ago +2

    The first neural network theory is/was valid. The computing hardware is catching up. All the equations are valid with waves or wave states.

    • @Frostbytedigital
      @Frostbytedigital 7 months ago +2

      ALL the equations from the original perceptron are correct if you sub in waves or wave states? That's fascinating. Any sources?

    • @LydellAaron
      @LydellAaron 7 months ago

      @Frostbytedigital Yes, totally fascinating. In many cases, you just have to see the equivalence with the mathematical sum-of-products form like the one at 2:35. In some cases, you just sub in a complex number for the most part. I modeled a polychromatic light particle in our recent wave-based patent, where we expand the equation of a photon, c = lambda * nu, as a sum of products. I filed it under my company "Calective." Also look up "Higher dimensional quantum computing" by Sabre Kais and Barry Sanders.

  • @smoggert
      @smoggert 7 months ago +2

    🎉

  • @ahnabarnob5004
    @ahnabarnob5004 5 months ago

    Now people use neural networks to draw furry pictures 🙂

  • @halfsourlizard9319
    @halfsourlizard9319 7 months ago

    symbolic AI was a neat idea ... rip