Mindscape 184 | Gary Marcus on Artificial Intelligence and Common Sense

  • Published: 17 Nov 2024

Comments • 62

  • @davegrundgeiger9063
    @davegrundgeiger9063 10 months ago

    I'm a big fan of Gary Marcus and I'm grateful for his voice in the discourse. And as always I'm grateful for Sean's intelligent and engaged questions.

  • @dajandroid
    @dajandroid 2 years ago +12

    Wonderful discussion! Thanks Sean and Gary. Surprised that Noam Chomsky’s idea of the capacity for language with grammars wasn’t mentioned. One could argue that the grammar of discussion goes well beyond a single sentence’s grammar, e.g., we have punctuation for ideas in the form of paragraphs, rules of writing, etc., and the symbolic organization of words conveys the continuance of an idea and its context across more linear thought and multi-participant conversations.

  • @arthurpenndragon6434
    @arthurpenndragon6434 10 months ago

    Dr Carroll, this has been the most productive interview of Dr Marcus I have ever heard. This was a proper back and forth of honest scientific discussion. Thank you for your insightful questions. I especially loved the anecdote about Newton's abstraction over raw data.

  • @hamsandwich9807
    @hamsandwich9807 2 years ago +5

    Wonderful podcast. I am most likely going to pursue a degree in cognitive and behavioral neuroscience so this really is inspirational

  • @rafaelarevalo8047
    @rafaelarevalo8047 2 years ago +1

    Wow, this was an absolutely fascinating episode, surely one of my favorites. Marcus is a brilliantly insightful man. Thank you for the fantastic discussion.

  • @rajeevgangal542
    @rajeevgangal542 2 years ago +5

    This guy's views gel with mine as a long-time applied ML practitioner. Most ML falls into the "find the underlying function", "optimize a function", or "search for the solution efficiently in a landscape" categories, regardless of the technical details: deep learning, reinforcement learning, or otherwise. Multi-modal learning with DL, reinforcement learning, and newer techniques with some ontologies will move it somewhat. What this means is that AI/ML can discover the laws of gravity from data, Kepler's laws, and so on. What it can't do is propose spacetime as fundamental and generate special and general relativity from it, even if we were to somehow use GPT-3 and any quantitative data-driven ML model.

  • @TheOriginalRaster
    @TheOriginalRaster 2 years ago +2

    Excellent discussion. This conveys useful information that I think is not being presented in other popular forums. The concepts presented are timely. This is what we need to help make sense of what is written in the popular press about recent AI developments. Great work, you guys!

  • @GurtTarctor
    @GurtTarctor 2 years ago +5

    I highly recommend getting Joscha Bach on the podcast.

  • @RobRoss
    @RobRoss 2 years ago +4

    I’ve always thought that if you create a general AI, you would need to train it like we train human children. You have to give them curated materials that are consistent and truthful, and build up a model of the world over time that grows in sophistication. I don’t see how you can get away with just turning the AI loose on some data and expect it to figure it all out on its own.

    • @miraculixxs
      @miraculixxs 1 year ago

      This! I've always thought the same and discussed it with peers. Surprisingly, the answer has mostly been "this takes too much time".

  • @breaktherules6035
    @breaktherules6035 2 years ago +1

    EXCELLENT ideas! THANK YOU so much!

  • @judgeomega
    @judgeomega 2 years ago +2

    phenomenal episode

  • @I-0-0-I
    @I-0-0-I 2 years ago

    I would like to agree that GPT3/modern ML is a parlor trick. But maybe a valid counter example is that some humans are extremely skilled at creating music, yet have "no idea" what they are doing. They can't read sheet music, don't know music theory, but can play and write in the top 1%. They must have learned this amazing skill only through excellent inference, correct?
    My main worry in the last few years has been that human consciousness/intelligence is likely itself just a parlor trick. Would love to be talked off that ledge.
    edit: I should add a personal experience influencing my intuition on this. I began learning English as a 5 year old. Decades later, I can pull some obscure vocabulary out of thin air, mid-conversation. I am often completely unable to define the word but my use is perfect. When I google the definition, I am often shocked how perfect the fit is.
    My next thoughts here are that maybe I am just identifying incredible b.s. artists. Is the excellent musician who knows no theory a b.s. artist?

  • @akumar7366
    @akumar7366 2 years ago +2

    Fascinating discussion, I learnt so much.

  • @stephenkamenar
    @stephenkamenar 2 years ago +2

    40:10 This isn't true. You just keep appending to your prompt, and it'll have a memory of all the previous questions and answers.
    You might run out of room at some point, so it probably can't go on for an hour, but yeah.
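The appending trick this comment describes can be sketched as follows. This is a minimal illustration, not GPT-3's actual interface: the character budget and the Q/A turn format are assumptions standing in for a real model's token limit and prompt format.

```python
# Sketch of "conversational memory is just the growing transcript itself":
# each new question is sent together with all previous turns, and once the
# transcript exceeds the context budget, the oldest turns are forgotten.

MAX_CHARS = 200  # illustrative stand-in for a real model's context window

def append_turn(transcript, question, answer, max_chars=MAX_CHARS):
    """Add a Q/A pair, then drop the oldest pairs until we fit the budget."""
    transcript = transcript + [f"Q: {question}", f"A: {answer}"]
    while sum(len(line) for line in transcript) > max_chars:
        transcript = transcript[2:]  # forget the oldest Q/A pair
    return transcript

transcript = []
transcript = append_turn(transcript, "What is 2+2?", "4")
transcript = append_turn(transcript, "And doubled?", "8")
print("\n".join(transcript))  # this text would be prepended to the next question
```

This also shows why the commenter hedges about running out of room: once the budget is hit, early turns silently fall out of the model's "memory".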

  • @fugslayernominee1397
    @fugslayernominee1397 2 years ago

    Wow, it was really a great episode to watch. The talk about putting common sense into symbols so that AI could understand really had me questioning very basic things like emotions, and how we'll be able to describe other innate things to AI when we ourselves have problems interpreting and making sense of some of the mathematical theories. Like what exactly is charge or spin in a quantum particle.

  • @hc8379-f4f
    @hc8379-f4f 2 years ago +1

    Perhaps the useful test of the reliability of an AI system is to have it correctly answer "I know" or "I don't know" what to do "next" in multiple situations.

  • @hughJ
    @hughJ 2 years ago +1

    The reason that research investment is so heavily biased towards DL techniques is because it shows promise for being productized in the short to mid-term and provides results that are promising for further investment. The goal isn't about science, knowledge, or academic achievement -- it's about the merits of it as a tool to create value (engineering new applications, or better applications.) Neuroscience, psychology, and philosophy are fields that conveniently benefit from that AI investment while the interests of private industry and academia are aligned, but that can and will diverge.
    The potential for creating a human-level general intelligence is not sufficient on its own to invest billions in research on it -- it must also deliver value that offsets the cost, and right now we're in the midst of witnessing the economies of scale in semiconductor fabrication beginning to break down. Cost per transistor is going up while the value of increasing transistor density is decreasing. DRAM scaling is plateauing. Fewer and fewer components of chips are able to leverage newer nodes. DL AI research has gone from being done on affordable consumer-level hardware to hardware racks that cost 6-7 figures each. It costs millions to train something like the GPT-3 network, and that level of training performance is unlikely to ever find its way down to affordable consumer devices.

  • @czerskip
    @czerskip 2 years ago +1

    I wouldn't go as far as suggesting there's any intelligence in recommendations. We're yet to experience the works of a successful recommendation engine 🤣

  • @Life_42
    @Life_42 1 year ago

    Loved this episode!

  • @HarryNicNicholas
    @HarryNicNicholas 2 years ago

    Talking of Scrabble: my first job when I moved to London was pasting up corrections in science journals. I worked for Plenum Publishing (1975?), who translated Russian journals into American English. We used to sit around at lunchtime playing Risk! or Scrabble, but we had enormous dictionaries at our disposal. I warn people not to play Scrabble with me.

  • @willthecat3861
    @willthecat3861 2 years ago

    Hi: Perhaps I missed it, but I didn't hear much about the Fourth Generation Computing Project, which, along with the associated advanced hardware and software, was supposed to allow computer programming to be done in natural human language. At least, this is how I know of the project/title, as coined(?) by James Martin.

  • @chrispalmer4317
    @chrispalmer4317 2 years ago +1

    WHAT COULD POSSIBLY GO WRONG
    WHY ARE WE DOING THIS???

  • @kenneththomas7016
    @kenneththomas7016 2 years ago

    Great discussion, although I had to check to make sure Sean wasn't interviewing Ira Flatow's (from NPR) little brother

  • @deedeemao6809
    @deedeemao6809 2 years ago

    It had better be able to weigh the means and the ends, know context, have empathy, and be able to predict all outcomes and the choices that lead to those outcomes.

  • @DudokX
    @DudokX 2 years ago +2

    I remember getting amazed by the AI Dungeon 2 game but you quickly start to see how dumb it actually is. The most entertaining thing about it is the insane nonsensical stuff it can generate.

  • @judgeomega
    @judgeomega 2 years ago +3

    Prof. Marcus seems extraordinarily intelligent and insightful, but also very sensitive. He refers to critics far more often than is probably healthy for his mental well-being.

    • @frede1k
      @frede1k 2 years ago

      Totally agree. IMAO he also downplays deep learning too drastically. AI right now is just a step on the way, but the way it has developed over the last 10 years is an unprecedented marvel in AI research.

    • @hector338
      @hector338 2 years ago +2

      @@frede1k My fairly ignorant interpretation is that skyrocketing computing power has enabled bigger neural networks capable of increasingly tricky pattern recognition, but no real shift in the technology itself has happened. Am I wrong in this?

    • @frede1k
      @frede1k 2 years ago

      @@hector338 You are correct and incorrect :) Computing power has enabled the old technology, but a lot of new ground has been covered in optimizing the networks and inventing new ways to learn, like adversarial networks and so on

  • @Emilis2023
    @Emilis2023 1 year ago

    Don't get me wrong, I don't know what the answer is, but I get very worried about people talking about instilling values into AI. If we can't even agree on most values as a society, then whose values get embedded into an AI? I really feel like we shouldn't be trying to put our biases into them at all, and they shouldn't even be attempting to make moral judgements. An AI should be able to tell you what Hitler or Martin Luther King did, but not if they were good or bad people.

  • @borntobemild-
    @borntobemild- 2 years ago +1

    Analogy. Douglas Hofstadter describes this as the missing piece.
    Humans understand like, or as.

  • @thewiseturtle
    @thewiseturtle 2 years ago +1

    Yeah, my own approach is that what AI is most useful for is categorizing reality in ever more meaningful detail. I mean, look at how totally disconnected even science is. And then look at the dictionary. We have all of these different patterns (words, ideas, emotions, actions, etc.) and very little clue how to relate them to one another. Without being able to find relationships between things, we can't really solve problems logically. Think about it: one of the first things infant human brains have to learn, for effectively operating in the universe, is how the parts of their bodies relate to the outside world, and then how different parts of the outside world relate to one another. You know, the basics of all kinds of relationships, from drinking mom's milk, to identifying living things that are complex vs. inanimate objects that are simple, to all of the physical interactions of in, on, under, on top of, around, through, etc. If we actually want something even moderately "intelligent" in our computers, we'll need for it to be able to recognize the diverse sorts of patterns/relationships of whatever sorts of elements we want the AI to solve problems for.

  • @Pancunian
    @Pancunian 1 year ago

    Once I noticed, I couldn't help but tune in every time this guy promoted himself. He's a clever person; I think maybe he hasn't done as well as he should have and knows it. A little bitter about summat, and the over-emphasising of his achievements ("when I wrote...", "I had that idea in...", "my article about that...", "I was the first to...") got wearing.

  • @helicopter_traffic
    @helicopter_traffic 2 years ago +2

    Please get Joscha Bach on, we need him

    • @bendavis2234
      @bendavis2234 2 years ago +1

      Yes! Him and Sean would have such a deep and interesting talk

  • @life42theuniverse
    @life42theuniverse 2 years ago

    15:00 You don’t need deep learning to plan the route home... but you need it to detect the animal or person that jumps into the road. Deep learning still only learns the actions we ask of it... 58:00 I think breaking it apart is how the brain functions... a technical overview of the brain: ruclips.net/video/2LzZMWGQe1k/видео.html

  • @lucamatteobarbieri2493
    @lucamatteobarbieri2493 2 years ago +1

    Biological neural circuitry wiring, the connectome, encodes the instincts, the innate understanding. Learned knowledge is acquired through synaptic plasticity. GPT-3 reminds me of the human cortex, not a whole brain. A whole brain has a much more intricate structure.

  • @Skankhunt420.
    @Skankhunt420. 2 years ago

    A couple recommendations on the podcast - George Hotz & Ben Goertzel are really interesting to listen to for AI. Due to inflation and Canadian govt freezing bank accounts people have started buying more crypto. Vitalik Buterin, Silvio Micali & Robert Breedlove are good picks for crypto

  • @adfaklsdjf
    @adfaklsdjf 1 year ago

    I feel like many or most of the things Marcus said about GPT-3 haven't aged well 1 year on..

  • @srikanthtupurani6316
    @srikanthtupurani6316 2 years ago +1

    AI is going to make the lives of mathematicians and software professionals tough. They are going to lose their jobs.

    • @Daniel-ih4zh
      @Daniel-ih4zh 2 years ago +1

      Mathematicians will be the last people who lose their jobs

    • @naturallaw1733
      @naturallaw1733 2 years ago +1

      That's fine if robots can do it better. We need people to spend more of their time creating, innovating, exploring, experimenting, discovering, living, etc.

    • @hughJ
      @hughJ 2 years ago +2

      I think more likely it'll become a tool to make mathematicians and software professionals more productive. There's an abundance of mediocre programmers in the world and a shortage of excellent ones. Anything that helps elevate average performers to an above-average level is going to have a hugely beneficial impact for most people.

    • @phoearwenien4355
      @phoearwenien4355 1 year ago

      You're under the impression that software professionals only create software. To build software you need a deep understanding of the world, client needs and expectations, and business relations: that's something AI won't be able to do for a long time. The moment AI is able to do that will be the moment it can replace all jobs people do.

  • @deloford
    @deloford 1 year ago

    Wow, only 12 months later and Gary is looking pretty wrong across the board. LLMs really are the direction, not symbols.

    • @dinguslopper
      @dinguslopper 1 year ago +2

      Nah, they still work exactly as he describes and are showing performance improvements because we’ve given them an insane amount of parameters.
      They will hit the same wall eventually unless some breakthrough is made.
      Moore’s law is fully dead and computing power isn’t infinite.

  • @TheReferrer72
    @TheReferrer72 2 years ago +1

    No, deep learning will go all the way.
    You may be able to short-circuit your way there by using symbolic methods, but really all that matters is the network.

    • @frede1k
      @frede1k 2 years ago +1

      Yes, 100% agree. Don't know why Gary had to come with so much hate against deep learning. Of course we are not there yet, but try to compare the last 10 years of AI research to the last 100. It's finally heading somewhere. The revolution came with neural networks, after we skipped the idea of building purely logical and mathematical intelligence.

    • @naturallaw1733
      @naturallaw1733 2 years ago

      what does "all the way" mean?

    • @TheReferrer72
      @TheReferrer72 2 years ago

      @@naturallaw1733 That deep learning will lead to human level and above Artificial General Intelligences.

    • @naturallaw1733
      @naturallaw1733 2 years ago

      @@TheReferrer72
      okay but just don't say "Consciousness". 🙃

    • @hughJ
      @hughJ 2 years ago

      @@frede1k Because I don't think he's an engineer. Deep learning is seeing interest from engineering-related industry because it works and delivers real-world engineering value in commercial products. Money will always go where it sees a potential for a return on the investment. The onus is on him to show that there's profit to be had from alternative avenues of research, and you don't get that through evangelizing, but rather by proving it. DL works and it's scalable in a way that makes it productizable. AGI doesn't work currently, and we don't know if it'll be scalable or productizable.

  • @Neomadra
    @Neomadra 2 years ago

    I have the impression that he's playing down the successes of AI because of personal reasons. xD

  • @jweber4811
    @jweber4811 2 years ago +1

    I wonder if Gary's comment that "Elon Musk is beta testing on public roads. I don’t think that is cool. There have been some accidents" is a result of his ignorance about beta testing or an example of cognitive dissonance. There have been "some accidents" in every car brand ever produced! I think Gary might consider educating himself about this topic before making such biased, uninformed, and harmful comments.

    • @brokenacoustic
      @brokenacoustic 2 years ago +1

      An AI expert voices his opinion on AI driving cars, and you call him ignorant and uninformed, all the while Musk himself has said the technology isn't really ready. Your tribalism is showing...

  • @araneascience9607
    @araneascience9607 2 years ago

    I don't agree with him regarding GPT-3. I don't believe the system has consciousness, but we can call the statistical correlations in it a way of understanding. I wrote a book called "GPT-3: Talking with an Artificial Intelligence", using the text-davinci-002 engine, and I believe that it is a great system with a lot of semantic understanding. Gary Marcus exaggerates a lot of his claims regarding AI.

    • @phoearwenien4355
      @phoearwenien4355 1 year ago +2

      Correlation doesn't equal causation and that's a problem with all models we currently have. They don't have an understanding of the world, because for that you need hunderds if not thausands of different models that have constant access to the real world, nor they have any implemented abilities aside from generating statistically plausible content. They're just better and better at fooling you they do thanks to implemented safeguards stopping them from spewing garbage. It's hubris in its finest to think people can simply create a language model and suddenly all secrets of the brain are solved while in fact they don't even know how exactly people don't even fully know how exatly our brain works.