Gary Marcus: Toward a Hybrid of Deep Learning and Symbolic AI | Lex Fridman Podcast #43

  • Published: 21 Nov 2024

Comments • 175

  • @lexfridman
    @lexfridman  5 years ago +79

    I really enjoyed this conversation with Gary. Here's the outline:
    0:00 - Introduction
    1:37 - Singularity
    5:48 - Physical and psychological knowledge
    10:52 - Chess
    14:32 - Language vs physical world
    17:37 - What does AI look like 100 years from now
    21:28 - Flaws of the human mind
    25:27 - General intelligence
    28:25 - Limits of deep learning
    44:41 - Expert systems and symbol manipulation
    48:37 - Knowledge representation
    52:52 - Increasing compute power
    56:27 - How human children learn
    57:23 - Innate knowledge and learned knowledge
    1:06:43 - Good test of intelligence
    1:12:32 - Deep learning and symbol manipulation
    1:23:35 - Guitar

    • @AmericanbouyUSA
      @AmericanbouyUSA 5 years ago

      Thanks Lex. When it comes to programming A.G.I., should programming be treated as an approach to the rules of a particular game, or should programmers focus on a machine's ability to obtain knowledge relative to its environment (which can be an independent variable when it comes to potential new physics)? The definition of survival is being comfortable with the common sense acquired over time, in my opinion. Your thoughts would be appreciated, if you have time.

    • @jonahwelch6766
      @jonahwelch6766 5 years ago +3

      try to look your interviewee in the eye a bit more when asking questions, it'll help with general flow of conversation, and great video once again

    • @Miki736
      @Miki736 5 years ago

      There are no subtitles. I have problems with hearing. Can you fix that?

    • @lexfridman
      @lexfridman  5 years ago +3

      @@Miki736 Automatic captions are usually enabled by default. I'm not sure why they're not ready yet. I reached out to YouTube just in case it helps to speed it up.

    • @lexfridman
      @lexfridman  5 years ago +3

      @Николай Каретников Privet, will do. See my comment above. Automatic captions haven't been automatically generated yet for some reason. The song is All Along the Watchtower by Jimi Hendrix (Bob Dylan original): ruclips.net/video/TLV4_xaYynY/видео.html

  • @aullvrch
    @aullvrch 2 years ago +34

    I would really like it if you had Gary Marcus on again! I think his perspective is unique and it would be interesting to hear what he has to say about GPT-3 as well as more about symbolic AI.

    • @TheBilly
      @TheBilly 2 years ago +4

      ClosedAI denied him access to GPT-3 during the early access period (but gave it to complete randos), so, indeed, it sounds like he has some important criticisms they're running scared from, trying to artificially craft an early narrative before he could tear it down.

    • @KDawg5000
      @KDawg5000 1 year ago +4

      I doubt that will happen as Lex has blocked Gary on Twitter. 🤷‍♂

    • @isaiahcastillo898
      @isaiahcastillo898 1 year ago +4

      @@KDawg5000 why would he do that

    • @darylallen2485
      @darylallen2485 6 months ago

      ​@@isaiahcastillo898 [ redacted ]

  • @ericpmoss
    @ericpmoss 1 year ago +6

    I think this is one of the best discussions I have ever heard on these topics, especially the unfair, unscientific demise of symbolic AI.

  • @JohnLoganR
    @JohnLoganR 5 years ago +67

    Hey Lex, just wanna shout out and let you know I appreciate the good work put into this channel. Fascinating as always. Keep it up.

  • @PatrickQT
    @PatrickQT 5 years ago +31

    You're doing a fantastic job, Lex. Great podcast.

  • @still2weirdfoU
    @still2weirdfoU 5 years ago +1

    I honestly didn’t think I liked the style/demeanor of your interviews, but it’s grown on me, especially after the Susskind interview. The fact you attack the interview from different angles and put all the topics with timestamps in the comments earned you a subscriber. Please keep doing what you're doing.

  • @walterpark8824
    @walterpark8824 1 year ago +1

    Picking your best interview is as nonsensical as a single, narrow Turing test, but I must say this interview with Marcus is totally satisfying. He's so in sync with you and the field that he repeatedly interrupts to get right to the essence and strip away any unhelpful assumptions. The breadth of the discussion is staggering, and once again, so satisfying. This just isn't a simple topic, and the many parts are essential to the whole. Also, the little musical ending hit just right -- any conversation that can leap from Bertrand Russell to Hendrix and Lennon has unassailable joys. Thanks so much for this 90 minutes of hype-free general intelligence. I may have to watch this one again, not for the info, but just for the pleasure of it.

  • @Mithon81
    @Mithon81 5 years ago +1

    Not sure that I've seen all of your interviews, but this is the best of the ones I've seen so far.
    You are a very good listener, and when you speak it's measured, insightful and gives a lot of room to your interviewees.
    Keep them coming. You made my day.

  • @aminesadou9168
    @aminesadou9168 5 years ago +27

    When can we see an episode with Judea Pearl on causal inference and causal learning in AI?

  • @viveknayak9899
    @viveknayak9899 4 years ago +2

    Lex, this podcast is a treasure. I've witnessed spectacular improvements in your interviewing skills over the last few months.... Kudos to you and keep going!

  • @Anyreck
    @Anyreck 1 year ago +1

    Very inspiring concepts explained here by Gary!

  • @entiretwix1480
    @entiretwix1480 5 years ago +7

    Lex is always apologizing that his quality content isn't as good as it could be, which is silly because I think it's great content regardless of how it compares to its potential, but I guess that's how some people improve. I love your videos, Lex.

  • @Aj-ut8co
    @Aj-ut8co 5 years ago +3

    In love with these podcasts lately. Can’t stop watching. Keep it up! So much good information here.

  • @scottharrison812
    @scottharrison812 1 year ago +1

    Can we have a council of wise men AND wise women please☺️

  • @aproperhooligan5950
    @aproperhooligan5950 2 years ago +1

    Very good interview, Lex. Thank you!

  • @ianborukho
    @ianborukho 5 years ago +3

    You're also so humble in letting your guest have their opinion even when they don't seem to give your point a fair chance.

  • @mattcs5834
    @mattcs5834 4 years ago +1

    Precision talking from both of you, ready with answers right away, both supportive of fast, free thinking. Almost zero distractions with "you knows" or "likes", just great communication.

  • @thegamerwhopwns
    @thegamerwhopwns 1 year ago

    Truly incredible that I am able to watch this

  • @M.C.1997
    @M.C.1997 5 years ago +1

    Lex, we all owe you a massive thank-you.
    I'm praying everyday to see Hinton/Demis on here...keep up the good work.

  • @TimGreigPhotography
    @TimGreigPhotography 1 year ago +1

    "Committees of wise people" What a concept...

  • @danielbigham
    @danielbigham 5 years ago +3

    Great interview! I've found Gary to come off as caustic on Twitter in his spats with LeCun, but I really enjoyed listening to his thoughts here. And I've long resonated with the intuition that the proper synergy between deep learning and more symbolic approaches will lead to great things.

    • @jonabirdd
      @jonabirdd 5 years ago

      It sounds like a bunch of bullshit to me. It's based on poor intuition about what it means for humans to reason logically. Humans don't reason by manipulating symbols, but by connecting concepts.
      A beginner approaching the field can say the types of things he is saying. I said it before I trained my first model. What we need are people who can say HOW to do it, not some vague idea of what we still can't do. Basically, this is a real whiny bitch who isn't doing any actual work.

  • @ErvinNemesszeghy
    @ErvinNemesszeghy 5 years ago +2

    Great conversation. I was looking for a question about full self-driving.
    What does Gary think about Tesla? Would Tesla be able to pull off self-driving with deep learning alone, or should Tesla program 'innate knowledge' about the world into their cars in order to succeed? (They may do it if they have to.)
    I would really like to know his opinion about this. Or is the answer in one of his books? I ordered the latest book, we'll see, but if others know his opinion about this, please let me know.
    Thanks for the great interview ...

  • @pramod.athani
    @pramod.athani 4 years ago

    Absolutely fascinating interview

  • @connor-shorten
    @connor-shorten 5 years ago +15

    Great guest, I need to get a copy of Rebooting AI! It is interesting to learn more about the intersection of cognitive science and deep learning / deep RL. I would also recommend checking out papers from Brenden Lake at NYU as well such as "Improving the robustness of ImageNet classifiers using elements of human visual cognition" (pre-print) and their success on the Omniglot one-shot classification task using techniques inspired by cognitive science.

  • @masdeval2
    @masdeval2 5 years ago

    Lex, your questions were excellent!

  • @empathylessons2267
    @empathylessons2267 5 years ago +2

    I think one thing we take for granted is how long it took evolution to design our "common sense" systems and our other task-specific brain nuclei.
    Personally, I think the road forward is to continue developing useful mechanisms like Convolution, Recursion, Deep Q learning, and, a personal favorite, the Transformer. But the way these blocks fit together (and the depth or parallelization of the blocks) to produce a generalizing intelligence should be evolved over a very long period of time.
    The real challenge is developing a virtual environment with diverse enough data and tasks to facilitate selection for inter-architecture relationships that support general intelligence.

  • @eliastouil7686
    @eliastouil7686 5 years ago +1

    I liked it that you asked a music question :)

  • @mikejordan2428
    @mikejordan2428 5 years ago

    Excellent guest! Thanks for the podcast Lex!

  • @JamesCairney
    @JamesCairney 5 years ago +3

    Jimi Hendrix, All Along the Watchtower, quality tune, good answer!
    (I can play it on the guitar too, you're not alone there Lex 🎸)

  • @keylanoslokj1806
    @keylanoslokj1806 5 years ago

    What an awesome bunch of interviews you've got there. Unbelievable people.

  • @danellwein8679
    @danellwein8679 4 years ago +1

    thank you Lex for this very much … I am in my 60s … but have never enjoyed knowledge as much as I do now … thanks ..

  • @kozepz
    @kozepz 5 years ago +1

    Thanks for this interesting conversation and touching the point of ethics briefly.
    Would love to see a conversation with George Church somewhere in the future if possible. If I remember well, he was also a panel member at one of the World Science Festivals with some ethicists, scientists and philosophers. A heated but nonetheless very interesting discussion.
    Don't change your interview style, I like it very much!

  • @summersnow7296
    @summersnow7296 5 years ago +1

    The limitations of DL are a known fact. I don't think many people in the field are saying DL is the only thing in AI, but DL does solve many practical problems that weren't possible before. Rather than throwing stones at deep learning and lamenting that a better approach is needed, it would be great if symbolists could show something that can augment the progress already made by DL.

  • @kylegushue
    @kylegushue 5 years ago +1

    Awesome job bringing up music! Cambridge, MA-based recording software brand iZotope is now incorporating AI into their stuff. Would love an episode with them.

  • @davemilke3110
    @davemilke3110 5 years ago +1

    I've listened to many of your AI podcasts. This one in particular will stay in my top 3 for the foreseeable future. For me, some quality thinking and dialogue - thanks.

  • @dapdizzy
    @dapdizzy 4 years ago

    I loved this conversation so much! It was one of the most enjoyable for me.

  • @josy26
    @josy26 4 years ago +3

    53:33 Gotta say, on this one I'm with Gary till the Natural Language Understanding part; we have consistently beaten most benchmarks and solved tricky dependencies with Transformers, even ones that need real-world context, e.g. Winograd. Thanks for doing this podcast Lex, I enjoy it greatly.

  • @MarcoMugnatto
    @MarcoMugnatto 2 months ago

    4 years later.... (about 35:04 )
    Me: What would happen if a bottle were sealed only with a piece of plastic in the shape of a circle placed over the mouth of the bottle?
    ChatGPT: If a bottle were sealed only with a circular piece of plastic placed over the mouth of the bottle, several things could happen, depending on the context:
    Imperfect Seal: If the plastic is not well-fitted or is not heavy enough, it may not seal the bottle completely. This would allow air to enter and any gas or vapor inside the bottle to escape.
    Internal Pressure: If the bottle contains a liquid or gas that can generate pressure (like a carbonated beverage), the internal pressure may push the plastic out, resulting in an unsealed opening.
    Evaporation: If the bottle contains a volatile liquid, such as water, the liquid may slowly evaporate, especially if the plastic is not airtight, leading to a decrease in the volume of the liquid over time.
    Contamination: An unsealed plastic may allow dust, dirt, or microorganisms to enter, which could contaminate the contents of the bottle.
    Stability: Depending on the weight and shape of the plastic, it may be easily displaced, especially if the bottle is moved.
    In summary, the effectiveness of using a circular piece of plastic as a cap depends on how well it fits the mouth of the bottle and what is inside it. For an effective seal, it would be better to use a proper cap that fits securely.

  • @AsgerAlstrupPalm
    @AsgerAlstrupPalm 5 years ago

    Symbolic AI was pursued by relatively few and primarily technically-minded people. If you bring in experts at understanding experts, that alone will be a boost. There is an entire discipline focused on distilling expert knowledge so it can be taught. The focus is on teaching humans, but the methods are maybe the key to symbolic AI. This approach holds promise for how to bridge deep learning and symbolic AI. I would love to hear an interview with a guy like Bror Saxberg. He has done learning engineering for humans at scale. If anyone knows how to teach based on science, it is him.

  • @dot.O
    @dot.O 5 years ago

    I watch these and normally would only understand a small percentage of what is being said. But his answer to that evolution question made the most sense to me when it comes to relating humans to AI.

  • @franktfrisby
    @franktfrisby 5 years ago +3

    I agree with Gary about deep learning. There is a limitation with deep learning using supervised learning.

    • @jonabirdd
      @jonabirdd 5 years ago

      You shouldn't be agreeing with Gary, then, since Hinton and LeCun have been saying that since the beginning; it's just that, unlike this schmuck, they focus on producing useful work with the limited means we now have, incrementally pushing our knowledge of what's possible and what specific form solutions will take, instead of throwing out general criticism worthy of a beginner.

  • @scottcarlon6318
    @scottcarlon6318 5 years ago

    For machines to understand the human thought process in regard to problem-solving and learning, the machine needs to understand pain. This has been the motivating factor for humans, and without it progress would not have been possible.

  • @iam1nerd
    @iam1nerd 5 years ago

    I picked up a lot from this podcast. Hey Lex, since you mention Sophia so often, I think you should invite Ben Goertzel to your podcast to talk about OpenCog and SingularityNet. Both seem to be really interesting.

  • @ianborukho
    @ianborukho 5 years ago

    What a great intro! Way to be so bold and put your own personal development out there!

  • @gallerksee
    @gallerksee 3 years ago

    Good conversation

  • @PhillipRhodes
    @PhillipRhodes 4 years ago

    Good stuff. Any chance of ever having Terry Winograd on the show?

  • @user-hh2is9kg9j
    @user-hh2is9kg9j 5 years ago

    Fascinating, finally someone has publicly vocalized what I think about AI (in a more coherent and professional way).

  • @wave9142
    @wave9142 5 years ago

    Never thought I'd listen to two people talking about bottles for an hour and a half :^)

    • @vast634
      @vast634 4 years ago

      Standard topic in a pub.

  • @tedhoward2606
    @tedhoward2606 5 years ago +2

    Imagine is one of my favourite songs also.
    I am now clear that the outcomes imagined in the song are impossible as long as we use markets.
    They are only possible if we use automated systems to deliver all the essentials and most of the options of life to everyone on the planet.
    As long as we think of AI within a value context that uses markets to generate value metrics, we are in serious existential risk territory from a very broad set of classes of problems.
    Agree with Gary completely that intelligence is a deeply nested set of sets of hybrid systems.
    We need to look very closely at the Bayesian priors evolution has installed in us as heuristics.
    We each seem to exist in our own personal predictive VRs, and think of them as reality. That idea of modeling intelligence within a model seems to me to be one key component of doing it well. Some very good work being done on highly dimensional valence spaces. It seems clear to me that individual sapient life and the liberty of individual sapient entities (human and non-human, biological and non-biological) need to be at the top of the valence tree, if any of us are to have any significant probability of living a very long time.
    We need to deeply understand that all new levels of complexity are predicated on new levels of cooperation (which is not yet a common understanding of evolution). And we need to be able to see ourselves as the most cooperative entities on the planet, with deeply encoded heuristics to detect and remove cheating strategies on the cooperative.
    Competitive markets are not a safe environment for humans, however useful they may be to solve limited problem spaces.
    We have the tools to provide a secure future for us all, but few people can yet see how that might happen. That needs to change, quickly.

  • @Georgiavr-1
    @Georgiavr-1 4 years ago

    Gary has a really fresh, interesting and realistic perspective on current A.I. I really enjoyed this conversation. Thanks Lex!

  • @ondrejmarek1980
    @ondrejmarek1980 5 years ago +1

    Awesome podcast/interview. Are you planning to do some expanded ones in the future, like you and two or more guests?

  • @franksiam2975
    @franksiam2975 5 years ago +2

    Gary Marcus's hybrid method of mixing symbolic logic and machine learning is the next door we have to open to bring AI to a whole new level.

  • @PhilipTeare
    @PhilipTeare 5 years ago +1

    Motivated reasoning is fundamentally motivated by the need for computational efficiency.... There are already analogues in ML. The simplest possibly being momentum. Momentum is basically a form of confirmation bias. And... it 'helps'.
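
    As a rough illustration of that analogy (my own sketch, not from the comment or the podcast), here is what momentum looks like in plain Python; the function name `momentum_step` and the hyperparameters are assumptions chosen for clarity:

    ```python
    # Hypothetical sketch: gradient descent with momentum. The velocity term is an
    # exponentially decaying memory of past gradients, so each update is biased
    # toward the direction the optimizer already "believes in" -- the confirmation
    # bias analogy from the comment above.
    def momentum_step(w, grad, velocity, lr=0.1, beta=0.9):
        velocity = beta * velocity + grad       # past gradients keep shaping the update
        return w - lr * velocity, velocity      # step follows the accumulated direction

    # Toy usage: minimize f(w) = w**2, whose gradient is 2*w.
    w, v = 5.0, 0.0
    for _ in range(200):
        w, v = momentum_step(w, 2 * w, v)
    print(f"w after 200 steps: {w:.6f}")        # oscillates toward 0
    ```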

  • @tandavme
    @tandavme 4 years ago

    very good talk

  • @zackandrew5066
    @zackandrew5066 5 years ago +1

    Great interview

  • @LuisGuillermoRestrepoRivas
    @LuisGuillermoRestrepoRivas 3 years ago

    The path of neural networks towards intelligence that generates abstract concepts, generalizations and analogies at a level that allows, for example, the generation of humor, is extremely long. Research in symbolic AI and symbolic learning should not be abandoned.

  • @rasbperrypi133
    @rasbperrypi133 1 year ago +1

    Back to watch again post the most recent beef 🥊

  • @juhanleemet
    @juhanleemet 1 year ago

    excellent interview! very relevant observations! I agree that ML systems are basically trained classifiers! We also need symbolic logic, probably both in combination.

  • @RalphDratman
    @RalphDratman 5 years ago +4

    I think the problem of physical "common sense" is more difficult than the psychological one.

    • @jeff_holmes
      @jeff_holmes 5 years ago

      Perhaps they aren't so different. Both seem to rely on making connections between various models of the world.

  • @marcagouni5084
    @marcagouni5084 5 years ago +1

    Great take on this huge field of AI!! Really appreciate this interview. Genius Gary Marcus.

  • @WaylonFlinn
    @WaylonFlinn 5 years ago

    That's one hell of an opening question.

  • @MrOftenBach
    @MrOftenBach 5 years ago

    Great ideas !

  • @georget5874
    @georget5874 5 years ago +1

    Cyc... I remember reading about this in Wired (I think) in 1995 when I was doing my computer science degree. The article was like: this is going to be the next big thing, this will solve AI, Marvin Minsky loves it, blah blah blah. After that I never heard of it again.

  • @Masaq_TM
    @Masaq_TM 5 years ago +1

    Any thoughts on Procedural Generation in gaming and if/how this could be adapted for AI learning? I’m just a layman but I think there could be value in experimenting with PG in the learning process

  • @AllUserNamesTaken111
    @AllUserNamesTaken111 5 years ago +2

    Marcus comes across as a Mr. Know-it-all despite the counterfactual reality.

  • @stefanxhunga1681
    @stefanxhunga1681 5 years ago

    One Special Discussion!

  • @freeintellect
    @freeintellect 1 year ago +1

    Lex: Isn’t big data AI amazing?
    Marcus: No. There are better ways to proceed.
    Lex: You’re a cynic.

  • @p00ki62
    @p00ki62 5 years ago

    Never knew how smart Seth Rogen was

  • @thenextweek2416
    @thenextweek2416 5 years ago

    This was a very informative discussion and I think I gleaned more information here on the underlying way AI "thinks" than from any other discussion on this topic. I have wondered if AI needs a sort of baseline set of instincts / understanding that it can build on to be truly intelligent. It would seem that is what narrow AI might be accomplishing in its current state, and in the future maybe we will use a bunch of different narrow AIs as a way to program instincts into a system, such as using computer vision and physics as a precursor to understanding environments or actual intelligence.

    • @jeff_holmes
      @jeff_holmes 5 years ago

      I agree. Since humans are so good at developing common sense reasoning, we should be looking at what those basic building blocks/algorithms/instincts are in newborn/young humans. Certainly a great deal appears to rely on perception and experimentation.

  • @tonisilva2094
    @tonisilva2094 5 years ago

    very good

  • @hemsworth38
    @hemsworth38 5 years ago +3

    Make AI robot babies for physical learning, and make a free mobile game where, in both cases, people play the role of teacher and compete for position on an online leaderboard. This could speed up the evolution of AI if done properly.

  • @empathylessons2267
    @empathylessons2267 5 years ago

    Ey, the biomimicry thing speaks to me. I'm starting a PhD in neuro with my main focus being application of neuroscientific principles to ML.

  • @BagMrMan
    @BagMrMan 5 years ago

    It is going to be huge! Buy AGI (SingularityNet) tokens!

  • @lolerskates876
    @lolerskates876 5 years ago +1

    1:02:00 An amazing part of evolution is that it allows life to adapt to different environments. Evolution allows life to play on both a square Go board and a rectangular Go board.
    After many mass extinctions life has come back and thrived. Not only is evolution cumulative, but as the world changes, evolution allows completely different fitness functions to take over.
    A fish would die in the desert and a dog would die in a coral reef; yet in the geological record we have sandstone and mudstone on top of limestone (and all layers can have fossils).

  • @anthonyrossi8255
    @anthonyrossi8255 5 years ago

    Why no captions

  • @loresofbabylon
    @loresofbabylon 5 years ago

    Free Association assumes that all variables have potentially infinite hidden variables associated with them, which is the basis of a complex number, or the basis of electricity.

  • @josy26
    @josy26 4 years ago

    Who else is here preparing for the debate?

  • @JulioHuato
    @JulioHuato 1 year ago

    17:52 “AI systems with or without physical bodies …”. How can AI systems exist without residing in physical structures? What effect would they have on this physical world we live in? And who would build them?

  • @rag_llm
    @rag_llm 4 years ago +1

    14:32 - Vocabulary is finite, language is not, even at the most basic combinatorics level. Add to that complexity the parts of the semantic signal that are implied in language that come about through creative phrasing, voluntary injection of errors to create affective effect, the intentional omission of expected syntactic and semantic elements to create additional implied meanings, and on and on, and the *actual* solution space provided by language for expressing meaning and emotion is *incredibly* large.
    But the more important underlying principle behind the complexity of language is the most damning. Language is only the "visible" projection of our unbelievably complex model of the world and our self. How these elements interact with both internal and external signals to create the words and sentences that end up in speech and in writing is what is important, not the words themselves. It is impossible for machines to extrapolate all the nuances of meaning hidden in human dialogue without that same cognitive machinery.
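
    As a rough back-of-the-envelope illustration of that combinatorial point (the vocabulary size and sentence length below are assumed for illustration, not taken from the comment):

    ```python
    # Illustrative arithmetic only (assumed numbers): even a finite vocabulary yields
    # an astronomically large space of possible word sequences.
    vocab_size = 20_000          # assumed working vocabulary
    sentence_length = 20         # assumed sentence length, in words
    sequences = vocab_size ** sentence_length
    print(f"{sequences:.2e} possible length-20 word sequences")   # ~1.05e+86
    ```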

  • @kmolnardaniel
    @kmolnardaniel 5 years ago

    I really liked that you dare to push back against the person you're interviewing. For example, when he questions the effectiveness of deep learning and you ask back what successes came before, he kind of comes up with a really generic non-answer.

  • @WaylonFlinn
    @WaylonFlinn 5 years ago

    Innovations like variational autoencoders seem to be moving the abstractions learned by deep systems towards human comprehensibility.

  • @jkobject
    @jkobject 5 years ago

    That guy is wrong.
    *This is just a personal statement; I do not intend to make any type of scientific judgement or ad hominem ones. Just an overall reflection on what has been said about the limits of deep learning, and especially the examples chosen to illustrate them.*

  • @PatrickOliveras
    @PatrickOliveras 5 years ago

    What is the conference he is talking about at 7:30? I can't seem to find it

    • @mtpco
      @mtpco 5 years ago

      It used to be called NIPS, but it has been renamed NeurIPS. Zuckerberg attended a Q/A session in 2013, it seems.

    • @PatrickOliveras
      @PatrickOliveras 5 years ago

      @@mtpco Thanks, I get that, but I couldn't find that specific talk

    • @mtpco
      @mtpco 5 years ago +1

      @@PatrickOliveras it was not a talk, he attended a panel and there was an informal q/a session. You can find the summary of it on Reddit if you search for it.

    • @PatrickOliveras
      @PatrickOliveras 5 years ago

      @@mtpco Ah I see, thank you!

  • @oisiaa
    @oisiaa 5 years ago

    I'm more in line with Gary's future of AI than Elon Musk's. I think we will have narrow AI with certain functions, but general AI will *never happen.

  • @JM-rl9ve
    @JM-rl9ve 5 years ago

    I wonder what Lex looks like in a T-Shirt

  • @sbandc
    @sbandc 1 year ago

    Biomimicry is the inspiration for many of our competitive sports. In fact, it's arguable that the distinction between human intelligence and that of other species is the efficiency with which we borrow inspiration. We are the world's greatest copycats.

  • @PhilosopherRex
    @PhilosopherRex 5 years ago

    I wonder what would happen if we had really good environmental simulation games with a solid economic model and with characters that resemble humans in detail, including taking damage in multiple complex ways (while not pain in and of itself, losing the game through damage gives reason enough to avoid taking damage). Putting both humans and AIs into the game may allow the AIs to learn to behave appropriately with humans in the mix.

  • @MuratUenalan
    @MuratUenalan 5 years ago

    Unhyping well done !

  • @JQ1337
    @JQ1337 1 year ago

    interesting.

  • @empathylessons2267
    @empathylessons2267 5 years ago +2

    On a better Turing test: I believe the multiplayer-gaming community is an untapped data pool with which you could train NLP networks to learn to communicate in such a way that human players (who opt into the possibility of playing with an AI) cannot tell whether they're interacting with an AI or a human player.
    I can even see a group of friends playing a Turing game for fun, since for them it would be about tricking their friends into thinking they're an AI (while the AI learns to produce language more like the human players).

  • @CharlesVanNoland
    @CharlesVanNoland 5 years ago +2

    @24:00 If we could indeed just erase a memory of an experience that we find unpleasant, we would not have any motivation to avoid or otherwise make an effort to prevent having similar unpleasant experiences again in the future. Suffering is how we learn what not to do.
    @48:30 The "ocean of knowledge we all operate on" is garnered via life experience and is something we all assume each other has. Therefore an AI agent that we'd expect to be able to operate with similar background knowledge would invariably be required to learn how to exist, in the same fashion that any conscious creature does. Only after an AI has been "raised" (i.e. 'trained') to be aware of what it should be aware of and behave as we would expect it to behave will you then be able to shortcut the training process with subsequent agents by simply mass-duplicating the trained agent's learned data structures into new machines.
    @57:40 The "innate stuff" that every conscious creature is born with boils down to having a sense of reward/punishment (punishment can be thought of as being on the same spectrum, as just negative reward). Pain and pleasure are absolute; they are not learned, they are built into our DNA - the DNA of every conscious creature on the planet. They are the key to creating something that can gather an "ocean of knowledge" via the experience of its own traversal through the world through its existence as a physical being. That, and an ability to make chains of spatio-temporal associations so as to be able to not just recognize spatio-temporal patterns but also learn how to react to them as a chain of voluntary actions that can be practiced and refined. I know most hope to boil what is intelligent about creatures/humans down into something that doesn't involve reward/punishment, or existing, but that is misguided and ignorant. Intelligence doesn't exist for intelligence's sake. It exists for survival's sake. It's an instrument of existence as a being, and has no reason to exist or do anything - no matter how much you try to force it - unless there's an element of reward/punishment and a delineation between itself and its environment. It needs some form of perception, of something, anything, and the ability to effect its learned behavior on what it is perceiving, whatever that may be. That doesn't necessarily mean it needs a physical body, but it needs something to interact with and a means of interacting with it, and enough dimensionality in its perception that it can actually develop awareness and understanding that is of any value. At the end of the day, though, the only way you're going to create an AI which has human-like understanding is to raise a robot like a human, period. Teach it all the things every human learns how to do; otherwise it has no context, no grasp, no sense, no idea.

    • @jonabirdd
      @jonabirdd 5 years ago

      Factually wrong, pain and pleasure are tightly coupled to presets but a significant proportion is acquired.

    • @CharlesVanNoland
      @CharlesVanNoland 5 years ago

      I love when people can't actually back up their arguments with logic, it's like watching LoTR or Harry Potter (especially Harry Potter) where anything goes! Talk is cheap.

    • @jonabirdd
      @jonabirdd 5 years ago

      @@CharlesVanNoland My friend, it is logic that is cheap; anyone can concoct a "logically" sounding argument, it is evidence that is sparse.

  • @JimStanfield-zo2pz
    @JimStanfield-zo2pz 1 year ago +1

    I think Gary needs to better elaborate on what he means by innate knowledge. I'm not convinced by this notion that ai requires something substantial and innate to be appropriately classified as ai. I am also not convinced that human intelligence is derived out of some deeply meaningful innate concepts. There are likely some innate aspects of human intelligence. But it seems like our intelligence is mostly about building models of analogous relationships between concepts of nested concepts and using that model for predicting future sensory input. Your brain is likely taking in much more information and incorporating it into this model than what we are able to consciously retrieve. There is no reason why an empirically built model of reality couldn't derive some statistical essence of what a bucket is and be able to appropriately classify any new bucket like object within that concept and make some meaningful predictions about how that bucket and all buckets interact with the world.

    • @Eggs-n-Jakey
      @Eggs-n-Jakey 10 months ago

      He's very knowledgeable but he's just into theoretical concepts. He's not a computer scientist or mathematician. He is mostly grasping at straws, his papers are just collecting concepts and slightly expanding on existing work.

  • @tombombadillo1
    @tombombadillo1 5 years ago +3

    His point at 55:55 about the brain using just 20 watts was a bit silly. In terms of FLOPS, if you try to make such an analogy to brain spikes, brains are indeed on the same order of magnitude as AWS servers (or up to 1000 times larger, depending on how you estimate the calculations). Brains are just a lot more power-efficient.

  • @hypersonicmonkeybrains3418
    @hypersonicmonkeybrains3418 5 years ago

    I don't think anyone finds their meaning through tedious work or dangerous jobs such as working down the sewers or in a coal mine, etc. I certainly found no meaning when I was picking and packing items in a warehouse. In fact it felt like some kind of prison sentence/torture.

  • @rickyboyz1006
    @rickyboyz1006 5 years ago +1

    I think Lex may be an advanced robot!! A very good one however!!

    • @gweliver
      @gweliver 5 years ago

      Lex's interview with Ian Goodfellow had me thinking the same about Ian!
      Ian's expressionless face must have been intentional: his upper lip rarely moved in relation to his lower lip, laughs or chuckles were almost always identical, and he tilted his head to the side?
      I'm really not trying to judge or make fun of anybody ever, but it was awkward to watch after the thought entered my mind.
      If that wasn't a robot then I apologise to Ian; he's such a brilliant person that the AI community is fortunate to have!
      There, I said it (MIT has made an Ian Goodfellow clone robot), even if it was in the second layer of replies!

    • @rickyboyz1006
      @rickyboyz1006 5 years ago

      @@gweliver lol, you never know. Maybe just a soft intro into humanity! I have to check that interview out!

    • @rickyboyz1006
      @rickyboyz1006 5 years ago

      @@gweliver What if Lex tries to get me now with his advanced A.I. knowledge and military-grade components!!! AAhhhhhhhhh!!

  • @bearwolffish
    @bearwolffish 5 years ago

    Dude handles his hiccups like a boss.

  • @kyuhyoungchoi
    @kyuhyoungchoi 5 years ago

    Great talk.
    Lex. How did you remove Gary's burp and cough in the recorded video? Was it done by AI or manually?

  • @RalphDratman
    @RalphDratman 5 years ago +1

    To suggest that human intelligence is a low bar is charmingly absurd when uttered by a human speaking in a human language while sitting in the midst of human artifacts and transmitted to viewers by human communication technology.

  • @X_platform
    @X_platform 5 years ago +7

    This guy is a bit outdated, no?
    There is much research addressing his concerns nowadays.

    • @dot.O
      @dot.O 5 years ago

      This research has been going on for 20+ years. He used to be the guy on the other side. He paused to write books, which made him stagnant. What matters is that the ones coming up, the younger ones, start learning from a point where the questions of the elders are already answered and not revisited. Learn from a more advanced stage or there won't be anything new, just a different iteration of what was done in the past... if that made any sense.

  • @ONDANOTA
    @ONDANOTA 5 years ago

    27:10 "any human could play Go using a rectangular board" . yes, but at what level? pro? amateur? superhuman? we would just guess and make moves that look meaningful