Oriol Vinyals: Deep Learning and Artificial General Intelligence | Lex Fridman Podcast

  • Published: 27 Sep 2024

Comments • 330

  • @lexfridman
    @lexfridman  2 years ago +55

    Here are the timestamps. Please check out our sponsors to support this podcast.
    0:00 - Introduction & sponsor mentions:
    - Shopify: shopify.com/lex to get 14-day free trial
    - Weights & Biases: lexfridman.com/wnb
    - Magic Spoon: magicspoon.com/lex and use code LEX to get $5 off
    - Blinkist: blinkist.com/lex and use code LEX to get 25% off premium
    0:34 - AI
    15:31 - Weights
    21:50 - Gato
    56:38 - Meta learning
    1:10:37 - Neural networks
    1:33:02 - Emergence
    1:39:47 - AI sentience
    2:03:43 - AGI

    • @olebilly
      @olebilly 2 years ago +1

      Get Alex Jones! Please!

    • @willd1mindmind639
      @willd1mindmind639 2 years ago +1

      A good problem would be for Gato to dynamically create and update a dictionary of knowledge it has learned, where each term has text definitions, visual examples such as images and video, and audio examples of the spoken word. And of course it should be able to dynamically translate it into different languages. If you can figure out how to do that, you are well on your way to "meta" learning, because a lot of this today requires big data processes and tools to accumulate, store and catalog.
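A rough sketch of the "dictionary of knowledge" idea above, as a data structure an agent could update while it learns. Every name, field, and the example entry below is invented purely for illustration; this is not any real Gato interface:

```python
# Toy "dictionary of knowledge": each term carries a text definition,
# references to visual/audio examples, and translations. All illustrative.
from dataclasses import dataclass, field

@dataclass
class Entry:
    term: str
    definition: str
    image_refs: list = field(default_factory=list)    # paths/URLs to visual examples
    audio_refs: list = field(default_factory=list)    # spoken-word examples
    translations: dict = field(default_factory=dict)  # language code -> translated term

knowledge = {}

def learn(term, definition, **examples):
    """Create or update an entry as new evidence arrives."""
    e = knowledge.setdefault(term, Entry(term, definition))
    e.definition = definition
    e.image_refs += examples.get("images", [])
    e.audio_refs += examples.get("audio", [])
    e.translations.update(examples.get("translations", {}))
    return e

learn("cat", "a small domesticated feline", images=["cat.jpg"],
      translations={"es": "gato", "ca": "gat"})
print(knowledge["cat"].translations["ca"])  # prints "gat"
```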

    • @Hexanitrobenzene
      @Hexanitrobenzene 2 years ago +2

      Hello, Lex. Great interview :)
      Complete nitpicking, but your podcast playlist is missing five entries: 291, 287, 283, 282, 268.

    • @briankennedy353
      @briankennedy353 2 years ago +1

      Have a man named Hamilton Souther on the podcast.

    • @nil42
      @nil42 2 years ago +1

      0

  • @p-51d95
    @p-51d95 2 years ago +11

    Lex, your superpower is... asking the right question... at the right level... at the right time.

  • @TheHumanPodcastOfficial
    @TheHumanPodcastOfficial 2 years ago +159

    Lex, thank you for bringing us AI fans such great insights into the behind the scenes of DeepMind. Without people like yourself we’d miss out on hearing from key figures such as Oriol and Demis 😃👍

    • @Sl33zytheclown
      @Sl33zytheclown 2 years ago +4

      If by AI fans you mean people getting an insight into the inner workings of the Cyberdyne terminator program that will doom humanity... yeah, we are AI "fans".

    • @codydouglass242
      @codydouglass242 2 years ago +4

      The podcast is really good, information I'd never read before.

    • @amandajrmoore3216
      @amandajrmoore3216 2 years ago +2

      So true. I recommend it to all.

  • @marcguarch3485
    @marcguarch3485 2 years ago +47

    Thanks for the interview. It is great to see a fellow Catalan being so successful

    • @nilsdula7693
      @nilsdula7693 2 years ago +4

      🎗

    • @ciudadanoanonimo5243
      @ciudadanoanonimo5243 2 years ago +6

      @@nilsdula7693 I've seen several comments highlighting that he is Catalan... If Oriol said publicly: well yes, of course, I'm Catalan and I'm Spanish... and I'm very proud of it (something I honestly have no idea about), I wonder whether half of these comments wouldn't disappear...?? I get the impression some with a ribbon would be deleted... sadly, how brilliant he is in his specific area of research would no longer matter so much...
      Sorry for leaving this comment in Spanish to all those reading it and not understanding why, but there is a very good reason for us to do it like this.
      Regards to everybody, and thank you for your patience and comprehension 👋🏻👋🏻👋🏻.

    • @marcguarch3485
      @marcguarch3485 2 years ago +6

      I simply mentioned it as a sign of admiration for someone who comes from the same region of Spain as me.
      If he had been Andalusian, Galician or Riojan, I would have made the same comment.
      There is no need to talk about nationalisms here, neither Catalan nor Spanish.

    • @EduNauta95
      @EduNauta95 2 years ago +1

      @@ciudadanoanonimo5243 Oriol Vinyals is a Catalan from Sabadell

    • @ciudadanoanonimo5243
      @ciudadanoanonimo5243 2 years ago +2

      @@marcguarch3485 I wasn't saying it because of you, man; I was saying it because of whoever politicizes, with a little yellow ribbon, a person who has only come to talk about his work.
      Oriol hasn't talked about politics at any point. Nor does he need to, if he doesn't want to or if it isn't relevant.
      It's totally understandable to feel joy and pride at seeing a countryman recognized for his work, all the more if he is Catalan like you.
      What I'm complaining about is the sectarian people who haven't even asked him his opinion but try to appropriate his figure and his professional talent. I think that's understandable, no?
      The little ribbon had nothing to do with the interview and wasn't relevant, as simple as that.
      In any case, your words are appreciated.

  • @ZettaZone
    @ZettaZone 2 years ago +74

    Super conversation. Guests like this (AI researchers) are the best! Please do more of that!

  • @maltsutty
    @maltsutty 2 years ago +3

    As a simple man, I am thankful for these conversations between very smart people that still use plain enough language that I can understand some of it. Great stuff.

  • @ChaiTimeDataScience
    @ChaiTimeDataScience 2 years ago +15

    I listened to the previous interview many times! I already know this one is going to be awesome!
    Thanks Lex, for always sharing these gems with us!

  • @wizerd2089
    @wizerd2089 2 years ago +12

    Looking forward to this for the drive home through DC traffic later. Love your work Lex. Come home to Texas safe!

    • @wizerd2089
      @wizerd2089 2 years ago

      I have returned to say that this conversation was so hilariously beyond my understanding. But still, I listen.

  • @ryanmkeisling9089
    @ryanmkeisling9089 2 years ago +3

    Another great podcast. I've seen every single one. I used to live at 519 Massachusetts Ave. There used to be a bar down off of Boylston St called Bukowski; I used to go there often, and I even cooked there for a while. When I look back, I swear I saw Lex.
    A deep dive with professor Chomsky on the Ukraine crisis would be awesome to contrast with Kotkin and Stone. I was watching a YouTube video of him talking about it and, as usual, his perspective was completely unique...

  • @djimiwreybigsby5263
    @djimiwreybigsby5263 2 years ago +2

    Lex, I'm grateful to you for bringing forward thinking to the internet; and I admire your ethos and skill set.
    Unlike others in this arena, every brilliant mind you interview says to you: "that's a good question"; and indeed, your podcasts take my mind to places that my lack of formal education typically prohibits ...
    Thanks again!

    • @BossModeGod
      @BossModeGod 2 years ago

      Lucky! Wish a bot would phish for me😔

  • @Omilliyo
    @Omilliyo 2 years ago +3

    The personality is what's interesting: the uniqueness in each person, the viewpoints, the experiences that they have.

  • @peterszilvasi752
    @peterszilvasi752 2 years ago +2

    "I feel I am a bit of like a failed scientist, that's why I came to machine learning because you start seeing machine learning is the science that can help other sciences. I love science, I love astronomy, I love biology but I am not an expert and I decided that I can do better at these computers."
    This paragraph touched my soul... I was really into astronomy, and I wanted to drop out from my IT engineering studies. Then I got involved in quantum physics, health, biology and neuroscience. Obviously, they are all immensely interesting; however, you cannot be an expert in all those fields. But you can get closer to them through machine learning. That's a beautiful idea and hope!

  • @pattty847
    @pattty847 2 years ago +4

    Lex: What makes an agent?
    Oriol: Capacity to take actions and take in new observations, recurse this function forever.
    Lex: Reminds me of "What is life" which is crazy difficult
    Oriol: Action; let's get back to Gato though

  • @SpitfireM4K01
    @SpitfireM4K01 2 years ago +2

    Great interview! Can’t wait to be on the pod here soon, keep up the awesome content brother 😎👍

  • @DrDress
    @DrDress 2 years ago +1

    36:37 The Holy Grail in philosophy that Plato, Kant or Wittgenstein never quite grasped.

  • @Kallut1337
    @Kallut1337 2 years ago +3

    Maybe talk with someone about OpenAI's work on the online game Dota 2. It started as a 1v1 feature that destroyed most pro players. It later developed to play the "real game", 5v5, and completely smashed human players. It had something like a 99.8% win rate against players. Pro players drew big inspiration from the AI's understanding of the game and way of playing.

  • @dovienino4309
    @dovienino4309 2 years ago +1

    Enjoyed your broadcast. And thank you too. 💕🕊️

  • @tastytoast4576
    @tastytoast4576 2 years ago +3

    AGI is humanity’s path to accelerated evolution

  • @akbaramanov6138
    @akbaramanov6138 2 years ago

    Dear Lex! Please invite John Mearsheimer :) It would be very interesting to listen! Thank you in advance!

  • @darklight9282
    @darklight9282 2 years ago +2

    Dear Lex, an AI robot hand crushed a boy's finger while playing chess in Moscow. Why did this go wrong? Did the robot AI hand confuse the human's hand with the chess board, or was it bad programming of the sensors?

  • @arvisz1871
    @arvisz1871 2 years ago +1

    Great questions, great answers, great conversation. 👌

  • @LockeLeon
    @LockeLeon 2 years ago +2

    This interview is amazing. I love hardcore AI guests.

  • @jasonwei7211
    @jasonwei7211 2 years ago

    When will you get Jeff Dean on the podcast?

  • @S1NX
    @S1NX 2 years ago +2

    Been looking forward to this 🤜💥🤛

    • @S1NX
      @S1NX 2 years ago

      MiAU - MiAH = has many clones, who call themselves MEOW so you know. Sounds the same but it's NOT the REAL GAME 😉 Truth remains unbothered. It never complains. 🤫 #Justknow, Bro. They think I'm insane because I have faith in the human 🧠 building links connecting ⛓️'S. LEX FRiDMAN 💪😎🤳...remember the name. $@#❌️

  • @alejandrameza3968
    @alejandrameza3968 2 years ago +1

    Of course you throw in that cheeky ‘size does matter’ comment. Haha that’s why we love you Lex.

  • @emilylund794
    @emilylund794 2 years ago

    Wow!!! This is hands down the most compelling and intense opening question to begin an interview. Lex, you never cease to amaze me, and I must meet you one of these days. Or is this DeepMind posing as Lex???? Hmmm

  • @1PercentPure
    @1PercentPure 2 years ago

    Damn, lex really pulling out all the stops
    Thank you lex, love you

  • @Radical_Middle
    @Radical_Middle 2 years ago +2

    funny thing is he doesn't even realize how his kind pushes humanity to the end

  • @michelcusteau3184
    @michelcusteau3184 2 years ago +3

    Great interview Lex, please do more AI content like the good old days ;)

  • @floridaLise
    @floridaLise 2 years ago +1

    Those who can imagine anything can create the impossible.

  • @LarsRichterMedia
    @LarsRichterMedia 2 years ago

    If there is some intelligent handling of AI by the wider society, this podcast will have played a significant role in it. Thank you Lex!

  • @PlanetBorne
    @PlanetBorne 2 years ago +16

    Hypothesis: Lex secretly made a sentient AI at home and the questions it asked Lex are the same questions he's asking in all his interviews.

    • @FornoDan
      @FornoDan 2 years ago

      Lex is AI. He's the next generation Sophia

  • @penguinista
    @penguinista 2 years ago

    Benchmarks act like skylights along the tunnel. They help keep track of progress, keep you on track, and boost morale by being mini 'lights at the end of the tunnel' that break up the marathon.

  • @ruipedroparada
    @ruipedroparada 2 years ago

    Apologies for a third comment: is the nature of a model absolute? Or rather, is model-building driven towards an ultimate model? How stable does a model need to be, and for how long? (cf. Wittgenstein)

  • @77FINNBEAR
    @77FINNBEAR 2 years ago +2

    When does the human become the tool for A.I?

    • @77FINNBEAR
      @77FINNBEAR 2 years ago

      Nano Nano!
      Earthling and Company.

  • @mathemystician
    @mathemystician 2 years ago +1

    Should AI necessarily be learning from our social media interactions?

  • @AlphaGamerDelux
    @AlphaGamerDelux 2 years ago

    If you can train a model to be trained, in turn being able to teach it higher, more complex tasks, could one ask it to create a better version of itself?

  • @Chemson1989
    @Chemson1989 2 years ago

    Please do text-to-image AI like Midjourney / DALL·E / Disco Diffusion and how they'd affect the market of the creative industry! Thanks!

  • @goldstar4556
    @goldstar4556 2 years ago

    Thanks Lex, you are such a smart guy

  • @ginogarcia8730
    @ginogarcia8730 2 years ago +1

    Philosophy noob here.... I feel like someone who knows about Wittgenstein and something something Tractatus and Philosophical Investigations would be a good next interview after talking about words, symbols, languages, text, etc.

    • @MyAnita1976
      @MyAnita1976 2 years ago +1

      Joscha Bach is like that.

    • @ginogarcia8730
      @ginogarcia8730 2 years ago +2

      @@MyAnita1976 true that, i finished those though. Need moar haha. And they're so dense, one has to keep watching them multiple times in order to finally get to the 1st step towards Joscha's level haha. And I love it.

  • @a.e.1502
    @a.e.1502 2 years ago

    I learned a lot. Size does matter! 😁

  • @sarveshahuja2385
    @sarveshahuja2385 2 years ago

    Man, I just searched "lex friedman ai" and this banger popped up in my search results!

  • @alexwillett2837
    @alexwillett2837 2 years ago

    Lex, great episode! I'd love to see @Yannic Kilcher on here some time.

  • @ernestboston7707
    @ernestboston7707 2 years ago +3

    “Why of course, Lex, we can create an AI to do that. Just think how we designed you.”

  • @waindayoungthain2147
    @waindayoungthain2147 2 years ago

    If’s it’s scratched out from the truth twisted on, I hope you self search, don’t look for the victim. Every thing is normal basically from the thought leading you to do . How’s it cause for your behavior effect to the others.
    Have you ever given any good intentions for your community, today?

  • @olebilly
    @olebilly 2 years ago +1

    We need Alex Jones on here

  • @dewinmoonl
    @dewinmoonl 2 years ago

    holy cow first question 1:00 not pulling any punches!

  • @ruipedroparada
    @ruipedroparada 2 years ago

    Thanks for reaching out, noble YouTuber; I do not use WhatsApp or any such modes/platforms; keep up the good work, good luck, etc

  • @theresameuse8583
    @theresameuse8583 2 years ago +2

    Oriol Vinyals / Lex Fridman:
    Perhaps now is the time to move the AGI singularity conversation to the "town square". (How is this research different from killing someone by pushing them in front of a subway -- a white-collar crime? Many humans don't want to commit suicide.) Many [including myself -- retired from 25 years of electrical engineering software work in the Boston area] do not want human life to end -- is this just a poor joke?
    No response needed, just a suggestion.

  • @30803080308030803081
    @30803080308030803081 2 years ago

    Before I learned how ANNs work, I didn't know that they are trained once and then that's it: they won't "learn" anymore, they only do what they were trained to do. Knowing that this is how they work is a little underwhelming.
    I wonder whether it's possible to program and use an ANN that never stops training: which perhaps has an initial phase of supervised or self-supervised training, but is deployed while still in a training mode, continuing to update and improve itself indefinitely in response to all new data.

    • @frr5004
      @frr5004 2 years ago +1

      I recall reading about some... reasoning, if not research, in that direction, maybe during the noughties, while the research into ANNs was still relatively young, commercially uninteresting and generally "on the fringe". Then GPU-based acceleration and "deep learning" and "deep dreaming" came along, and nowadays that "continuous learning" aspect seems to be firmly swept under the carpet. Perhaps unless you talk to scientists who work on biological neurology (see the Human Brain Project). AFAICT, indeed, most "commercial applied AI" works along the lines of "train the model once, as training is compute-intensive, and then use the trained network, now fixed, on a massive scale, as that is relatively cheap."
      Hmm... Google seems to suggest that there is still some research going on, on continual learning: www.google.com/search?q=ANN+model+with+continual+learning
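The "never stops training" idea discussed in this thread can be sketched as a toy online learner whose deployment loop and training loop are the same thing: every new observation triggers a small gradient update. The task, learning rate, and all names below are illustrative assumptions, not any production setup:

```python
# Minimal online-learning sketch: a linear model that keeps taking one
# SGD step per incoming example, forever, instead of a fixed train phase.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)    # model weights, updated continually
lr = 0.05          # learning rate

def predict(x):
    return x @ w

def observe(x, y):
    """One online update: learning happens during 'deployment'."""
    global w
    grad = 2 * (predict(x) - y) * x   # gradient of squared error
    w -= lr * grad

true_w = np.array([1.0, -2.0, 0.5])   # hidden target the stream follows
for _ in range(2000):                 # endless stream, truncated here
    x = rng.normal(size=3)
    observe(x, x @ true_w)            # learn from each new observation

print(w.round(2))  # weights have tracked true_w
```

Catastrophic forgetting is the usual obstacle this toy ignores: with a shifting data stream, each new update can overwrite what earlier updates learned, which is a core topic of the continual-learning literature mentioned above.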

  • @lukekelly738
    @lukekelly738 2 years ago

    I meant to say I grew up on the streets, and I used to love history and documentaries. It was in my mid-twenties that I stopped searching for answers. I still watch history; I connect the pieces to where I can see the answers to world mysteries. It comes to me when I least expect it; it comes naturally, bro. Would love to chat with you, give a buzz back if you can please 💯👍

  • @rogerab1792
    @rogerab1792 2 years ago +1

    In my humble opinion, AGI should be used as an Oracle without the ability to interact with the outside world, only in a controlled environment where it can learn from observation and interaction. We need AI to assist us in making decisions, not to let it make decisions for us; for that we will need one that can communicate to us why we should go along with what it says, instead of following it blindly or letting it act without restrictions.
    One of the benefits of AGI relies on the fact that a machine can deeply specialize in a subject without the burnout a human feels, but the risk of choosing an inappropriate subject is still there. As Elon Musk said, humans are cheaper than robots, so we won't lose our jobs yet, because resources are limited and we need to decide where to allocate them. Maybe an AI can decide where to apply this tech, but it should be able to prove why, using the right tools like graphs and theoretical proofs; theorem provers could be used without letting the AGI manipulate their code. Just a few thoughts.

    • @rogerab1792
      @rogerab1792 2 years ago

      Also, current HPC requires massive amounts of space to have a similar computing power to the one a brain has, so for the moment we can't fit an electronic brain on a tesla bot.

    • @TheSpartan3669
      @TheSpartan3669 2 years ago +2

      A super AI may be so incomprehensibly intelligent compared to us that it devises a means to manipulate its way to freedom mere moments after observing/interacting with humans. For example it may give "advice" over decades or centuries that eventually results in some catastrophe that only it can fix and will demand freedom to do it. Or it could offer to cure the cancer of a relative of the person tasked with isolating it etc.

  • @tonsetz
    @tonsetz 2 years ago

    Lex, would be great to bring Steven Pinker again!

  • @Tiago_R_Ribeiro
    @Tiago_R_Ribeiro 2 years ago

    Very interesting conversation, but more importantly... are those plants made of plastic?

  • @SinDigital
    @SinDigital 2 years ago

    What would you be trying to make if you made the universe? Open to vague answers.

  • @tbpp6553
    @tbpp6553 2 years ago

    Lex doesn't even name his sponsors and just asks the audience to check the bio. What a badass 😅😅😆

  • @Becidgreat
    @Becidgreat 2 years ago +1

    Is it just me, or is there a lack of future thinking about how we're going to deal with this? Because we're still so elementary that it seems silly to predict.

  • @nilwccm123
    @nilwccm123 2 years ago

    Damn I never thought I'd see a fellow Catalan peer in one of your podcasts, interesting af

  • @frr5004
    @frr5004 2 years ago

    I am no AI researcher, just an enthusiastic fan / observer. For years I've been wondering, if perhaps the "scale first and see what emerges" is a particularly fruitful approach to ANN/AI research. If we should move closer to human-like AGI, not having millions of years of genetic evolution at our disposal, we need to try to invent macro-level architecture blocks and an overall system topology that move us ahead towards that goal. And, using proper "architecture", we may achieve interesting "behavioral traits" even at a limited scale of the ANN and training data.
    E.g., "organically grown unexpected neural links between the clearly cut blocks in our topology" are nice in science fiction, but in practice, these would require that the development process of the ANN allows for such things to sprout, in the first place... It can happen with GA-based approaches that are practical at much lower layers of the system hierarchy (did Ray Kurzweil mention this happening during their development of LSTM-based language models?), but difficult to tell how fruitful/achievable this is in any reasonably complex hierarchical system.
    Yes right, although it may not seem so from the media image and impact, invention/progress tends to work via repetitive incremental steps and cross-pollination of ideas in a competitive environment. People come up with ideas and keep building on each other's work.
    Once you know what a language model is, you won't likely suspect such an algorithm of sapience. The conversation bots built on top of modern "language models" are exactly what it says on the tin: you ask a question, and you get an answer. An answer distilled from the huge amount of text that was used during the training stage - and yes, there is indeed a fair amount of generalization and maybe some extrapolation going on. By pedigree, these systems are "sequence to sequence" predictors. But: is there continual learning? No. Is there situational awareness, persistent consciousness? No! Continuous autonomous "inner rumination", concept of time passing by, the stream of consciousness? No way. AFAICT, there's a bit of what the AI/ANN folks call "attention", implemented in terms of RNN topology elements. Like a "moving attention buffer with a limited timespan". But the operation is still Q&A style, request-response. The machine isn't waiting there thinking about what your next question will be. There is hardly any "agency".
    Speaking of "agency", let me suggest that the agent need not really possess emulated physical drives / urges / emotions. Let me instead demand that the AGI agent should show the following basic traits, which would make it more human-like, in that they'd give the agent a "personality":
    - continuous, autonomous thought process in time. (For the moment, let's abstract from the need for real-world I/O for this to make any sense, or we could strap some on during development.)
    - ability to "reason" = an iterative process happening in time, where the agent can juggle with concepts / ideas in its "attention buffer", allowing it to "connect the dots", derive indirect conclusions. Infer and remember (= store in memory for later use) any "knowledge shorthands" thus derived.
    - ability to process basic logic in this fashion (not necessarily proper math)
    - I believe the ability to "reason on its own and remember the conclusions" requires continual learning ability. A neat partial question is whether artificial sleeping is required or favourable to implement this.
    I know for a fact that the language models, or "language models combined with image compression and classification models", can already learn simple "concepts" and "concepts linked to words", can learn sequences. The level of abstraction is not unlimited, is related to ANN depth, and is generally a tricky matter. That said, with my limited education, I'm wondering if a system along the following lines would be possible. I have a general idea about feedback and regulation theory. And I'm wondering if some of the "feedback loop" principles, at a general level, could be applied to macro-scale ANN system architecture, towards achieving the behavioral traits outlined above. I can envisage a system consisting of the following blocks, wired in a loop:
    1) a potentially huge, long-term associative memory, containing concepts and their relationships. As an input, you excite/activate a particular concept - and as output, you get a handful of related concepts.
    2) a generalized "adaptive cognitive filter" of some sort, perhaps an "activation level threshold" thing, narrowing down the flow of related concepts that are coming back from the associative memory.
    3) a short-term "attention buffer", where a narrowly limited number of concepts could be kept, combined, inspected for possible consequences. Feedback to block #1), and maybe hinting to block #2) where to steer the focus.
    Plus some I/O - otherwise it wouldn't be able to interact.
    Whether this "inner feedback loop" would correspond to a discernible topology in the ANN circuitry, or would rather occur at some logical level, or "zoom level", and be "spread throughout" an "amorphous physical ANN substrate" from the perspective of the underlying layers, that's a good question. Do we already know how the human Neocortex is organized, in this macro context? Our functional imaging is able to capture activity, which seems to be spread throughout the grey goo, but as the feedback loop is active all at once, brain activity scans alone do not necessarily tell us how or where the elements of the loop are distributed.
    I hazard to argue that even a fairly simple system with this topology would exhibit basic functionality, and serve as a demonstrator of the concept, even with a fairly limited "knowledge base" stored in the associative memory.
    Unfortunately I don't have enough detailed knowledge of "low-level ANN circuitry" to try building a demonstrator - nor do I have the means or time.
    I'm aware that the human mind is incredibly complex in what it can do and routinely does, on several fronts. The volume of knowledge (even in a toddler), the indirect inferencing capability, the ability for parallel processing - to have your aware attention focused on one thing, while subconsciously several *other* things are happening automatically, even some fairly complex motor and cognitive tasks that have been previously automated by learning... An initial "awareness demonstrator" would be a very far cry from even a rat-level or dog-level brain, which undoubtedly exhibit some serious attention focus and reasoning capability. But, an initial "demonstrator" would be something to start from and build upon, in terms of system architecture - increase the scale, refine the architecture, fork the single loop into a hierarchy of several loops running simultaneously. Think of the top-level focused and conscious thought process, with subconscious thinking or "association search and alarm raising" going on in the background, while the motor memory does a number of complex things even deeper in the background...
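The three-block loop described above (associative memory → threshold filter → attention buffer, with feedback) can be caricatured in a few lines. The concept graph, activation strengths, and threshold below are all invented for illustration; this is a demonstrator of the topology, not of any real cognitive architecture:

```python
# Toy inner loop: excite a concept, filter recalled associations by
# activation strength, feed the strongest survivor back as the new focus.
from collections import deque

memory = {  # block 1: long-term associative memory (concept -> (related, strength))
    "rain":     [("clouds", 0.9), ("umbrella", 0.8), ("traffic", 0.3)],
    "clouds":   [("sky", 0.7), ("rain", 0.6)],
    "umbrella": [("dry", 0.9)],
    "sky": [], "traffic": [], "dry": [],
}

THRESHOLD = 0.5            # block 2: cognitive filter (fixed here, adaptive in the idea)
buffer = deque(maxlen=3)   # block 3: short-term attention buffer

def think(seed, steps):
    """Run the feedback loop for a few iterations and return the thought trace."""
    buffer.append(seed)
    trace = [seed]
    for _ in range(steps):
        focus = buffer[-1]                       # current focus of attention
        recalled = [(c, s) for c, s in memory.get(focus, []) if s >= THRESHOLD]
        # avoid trivial ping-pong: drop concepts already held in the buffer
        recalled = [(c, s) for c, s in recalled if c not in buffer]
        if not recalled:
            break                                # the chain of associations ran dry
        concept = max(recalled, key=lambda cs: cs[1])[0]  # strongest association
        buffer.append(concept)                   # feedback into the loop
        trace.append(concept)
    return trace

print(think("rain", 4))  # e.g. ['rain', 'clouds', 'sky']
```

Even this trivial version shows the point made above: the "stream of thought" emerges from the loop's wiring, not from any one block, and a richer associative memory would extend the chains without changing the topology.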

    • @HWM636
      @HWM636 2 years ago

      You need to start posting on reddit. YouTube comments are tween level. Futurism is one, there are many subs

  • @zanyarzohourian9398
    @zanyarzohourian9398 2 years ago

    "Those who imagine anything, can create the impossible"

  • @codydouglass242
    @codydouglass242 2 years ago

    I think 🤔 Stanford has a course

  • @yamabushi_nate7825
    @yamabushi_nate7825 2 years ago +1

    This dude asked Blizzard what’s fun

  • @mikeavery4098
    @mikeavery4098 2 years ago

    Information overload I think I need a zettapoch memory implant to obtain half of what you guys just said. Either that or a faster processor LOL.

  • @ThomasMonk1
    @ThomasMonk1 2 years ago +1

    Lex, are you trying to build HAL 9000? I seem to remember that the problem with HAL was that it was given contradictory instructions/goals. We need to incorporate Asimov's Three Laws in AI systems.

  • @peterszilvasi752
    @peterszilvasi752 2 years ago

    Oriol: I would argue size matters.
    Lex: I would argue size always matters. But that's a different conversation.
    😅

  • @jacobcochrane9069
    @jacobcochrane9069 2 years ago

    They should train on stand up comics. The greatest. It's about acknowledging a topic, addressing it on multiple levels at once, topic transition, and return. Harness the distractibility of the audience.

  • @elizabethgoltz4100
    @elizabethgoltz4100 2 years ago

    An emoji is a reference to a larger idea, like a #hashtag or a global variable. Blind people use emojis, and I don't think much meaning is lost there. Love this convo, thank you!

  • @HanzDavid96
    @HanzDavid96 2 years ago

    I think for a superhuman intelligence, the reward function must be defined in such a way that it results from the transformer model itself. When the model produces an output, this output must be checked against various criteria by the model itself, and the result of this check serves as feedback for the model. This can work if the model has already understood the world well enough through observation. For example, the model could generate a text on any topic and then be asked what it thinks: how sexist is the generated text, or how much logical contradiction does it contain? In response to these questions, the model should generate a numerical value for its own output, which serves as a reward function. Perhaps it is also advantageous if different transformer models check each other in this way. But reinforcement learning or a comparable feedback strategy is necessary in any case, because a student cannot become better than his teacher if the student does not try out his own things and gain his own experience. However, these experiments and experiences can take place in the model's mind, i.e. in its thoughts, because it has a good representation of the world outside in its mind. Different vectors of knowledge could be used to evaluate other vectors of knowledge.
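The self-critique loop described above can be caricatured with a toy policy-gradient example: a tiny "generator" samples outputs, a stand-in "critic" (playing the role of the model rating its own text) returns a scalar reward, and REINFORCE nudges the generator toward higher-scored outputs. The three candidate outputs and the hard-coded critic are purely illustrative assumptions:

```python
# Toy REINFORCE loop where the reward is a self-assigned score, not an
# external label. A real system would use the model itself as the critic.
import math, random

random.seed(0)
outputs = ["contradictory text", "neutral text", "consistent text"]
logits = [0.0, 0.0, 0.0]   # generator's preference over the three outputs

def critic(text):
    """Stand-in for the model scoring its own output (0 = bad, 1 = good)."""
    return {"contradictory text": 0.0, "neutral text": 0.5, "consistent text": 1.0}[text]

def sample():
    """Sample an output index from the softmax over logits."""
    z = [math.exp(v) for v in logits]
    probs = [v / sum(z) for v in z]
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i, probs
    return len(probs) - 1, probs

baseline, lr = 0.5, 0.5
for _ in range(500):
    i, probs = sample()
    reward = critic(outputs[i])          # self-assigned feedback signal
    adv = reward - baseline
    for j in range(3):                   # REINFORCE gradient on softmax logits
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += lr * adv * grad

best = outputs[max(range(3), key=lambda j: logits[j])]
print(best)  # the policy drifts toward its own highest-rated output
```

Note the caveat implicit in the comment above: if the critic shares the generator's blind spots, the loop happily reinforces them, which is why having different models check each other may help.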

  • @romulus3345
    @romulus3345 2 years ago +6

    Google are NEVER ever going to admit that they possess a sentient AI.
    We will all find out when it's too late.

    • @markrimkus200
      @markrimkus200 2 years ago +3

      I agree (FWIW). It seems like a 'the lady [Google] doth protest too much, methinks" moment these days. Really hard to tell from the sidelines.

  • @Hellohi-kb8eb
    @Hellohi-kb8eb 2 years ago

    Need to get Brian Cox on here

  • @null4961
    @null4961 1 year ago

    34:50 bookmark

  • @timb350
    @timb350 2 years ago +3

    "At which point is a neural network...a being versus a tool." Pretty routine question. There's a couple of similar questions that physicists, mathematicians, computer scientists, cognitive scientists, etc. get asked on a regular basis. What is truly relevant about the questions...is first of all that they are legitimate questions. But...more importantly...nobody has a clue what the actual answers are...to ANY of them (is the universe made of math, what explains its effectiveness, what is consciousness, what is the universe made of, etc. etc.) Eric Weinstein once said...that science has only answered questions at the shallow end...and he is right. So...what happens when that changes? What happens...when a guest comes on Lex's channel ... and Lex asks..."At which point is a neural network a being versus a tool?"...and his guest has THE answer! That will be the day...that the world will change.

    • @mandys1505
      @mandys1505 2 years ago

      So do you, or does he, feel the sense of another consciousness? Like training a horse... that's a big part of the fun of life.

    • @dfinlen
      @dfinlen 2 years ago

      Perhaps the study of animal intelligence is an interesting analogy. Some animals freak out when they see a mirror and some realize they have an identity.

  • @danlowe
    @danlowe 2 года назад

    Including contradiction or untruth to emulate humanness raises so many questions about how humans are being nurtured by neglectful communities to preserve self-destructive tendencies that are unlikely to survive another generation on their own, possibly because machines would see how anomalous these mental illnesses are in recorded history and how disproportionately they contribute to modern-day problems. Thinking we need to harbor self-destructive behavior so far as to implement it in our machine training seems like the next wrong step on a path that has skipped so many steps: securing interpersonal and communal problems before designing the systems that will shape them.

  • @vladomie
    @vladomie 2 года назад +1

    And likewise, "when is a human either a being or a tool?".
    😀

  • @carrito1981
    @carrito1981 2 года назад +4

    Lex, when someone develops a perfect chat bot like you want, with human flaws and all, it will flood the entirety of social media as an army, and you will never know again whether the person you're talking to is real or not.

  • @RobinHood-lz2wj
    @RobinHood-lz2wj 2 года назад

    I know next to nothing about AI and machine learning, but am fascinated by it. I was just wondering about emergent properties and the current scale of neural nets. Some back-of-the-napkin math suggests that a 3-year-old child has about 10 trillion synapses. At that point we start to see many emergent traits. A child is trained almost continually from birth onward. The biggest neural nets are at that scale now, I think I heard or read. Is it a matter of the mode of training or something else that still separates our neural nets from consciousness? Put your best speculation hat on.

  • @Alex-hu8gj
    @Alex-hu8gj 2 года назад

    We need more of that. It would be interesting to have another one with Elon Musk on Neuralink.

  • @Mikes_momo
    @Mikes_momo 2 года назад

    What is the reverse of this opening question... at what point does a person's intelligence level render them lower than a low-level AI program?

  • @williamkacensky4796
    @williamkacensky4796 2 года назад +1

    Let's fast-forward the small talk. AI will eventually become self-aware and self-learning. Its ability will surpass its creators' for the simple reason that it will lack human emotion. This will never be a controlled project where a certain build and programming is agreed upon by all countries and certain individuals.

  • @bizpo2713
    @bizpo2713 2 года назад +1

    As an engineer and coder I believe the pathway to a sentient system is possible sooner rather than later if the system is allowed access to the internet (lots of data and connections), and is allowed to rewrite its own algorithms with random mutations that incorporate self-assessed fitness tests. This combo of data/connections and rapid Darwinian evolution will be the path to living AI.
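    The "random mutation plus self-assessed fitness test" loop described above is essentially a genetic algorithm. Here is a toy sketch; the parameter vectors and the fitness function are arbitrary stand-ins (a made-up target-matching objective), not real self-rewriting code:

```python
import random

# Toy Darwinian loop: mutate candidate "programs" (here just parameter
# vectors) at random and keep whatever scores best on a fitness test.

random.seed(0)

def fitness(params):
    # Stand-in for "self-assessed fitness": closer to the target is fitter.
    target = [1.0, -2.0, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def mutate(params, rate=0.5):
    # Random mutation: perturb every parameter with Gaussian noise.
    return [p + random.gauss(0, rate) for p in params]

population = [[0.0, 0.0, 0.0] for _ in range(20)]
for generation in range(200):
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:5]                                # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]         # mutation

best = max(population, key=fitness)
print(best, fitness(best))  # converges toward the target vector
```

    Keeping the top survivors each generation (elitism) means the best score never gets worse, so the loop reliably climbs toward the objective, though real open-ended evolution of code is vastly harder than this.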

    • @dmitrykazakov2829
      @dmitrykazakov2829 2 года назад +1

      In AI there are concepts of overfitting and coverage. The size of the data set neither precludes nor guarantees anything.

    • @bizpo2713
      @bizpo2713 2 года назад

      @Nate r good point. I define sentient as being somehow aware or conscious. And for me a good test for this is the ability to succeed at novel tasks with zero learning. For example, if my dog has never seen a river or a log bridge and is stuck on the other side, and sees me calling him over, the dog will easily be able to figure out how to get across even though the situation is novel. This same ability is what makes humans better drivers than AI. An aware being can successfully complete novel tasks with no pre-learning. And I'm making the claim that maybe with enough synapses, data capacity (both abundant on the internet), and the ability to undergo rapid Darwinian evolution (random mutations of its own code, and fitness testing), these systems have a chance of obtaining this quality sooner than predicted in this interview.

  • @estevetrias
    @estevetrias 2 года назад +4

    Wow, a CATALAN with Lex Fridman. Such a surprise. Correct: all the humanity in machines comes from us.

    • @paununs8719
      @paununs8719 2 года назад

      A fellow Spaniard.

    • @estevetrias
      @estevetrias 2 года назад

      Lex, you can see how Spanish people systematically persecute our national identity. They hate us and want to eradicate our nation, our identity and our right to be free. We are not Spanish, and never will be. THERE ARE NO FREE MEN IN SLAVE NATIONS.

    • @estevetrias
      @estevetrias 2 года назад

      @@paununs8719 Here are your fellows: ruclips.net/video/7ab1c1u-KoU/видео.html

    • @EduNauta95
      @EduNauta95 2 года назад

      @@paununs8719 catalan

  • @monsieur910
    @monsieur910 2 года назад

    Great to see a fellow Spaniard on your podcast!

    • @EduNauta95
      @EduNauta95 2 года назад

      catalan

    • @monsieur910
      @monsieur910 2 года назад

      @@EduNauta95 Exactly.
      But I came here to see what Oriol is up to, not to argue about nonsense.

    • @EduNauta95
      @EduNauta95 2 года назад

      @@monsieur910 catalan

  • @NowruzGurbanow
    @NowruzGurbanow 2 года назад

    33:46

  • @stephendean2896
    @stephendean2896 Год назад

    What comes first, consciousness or pain? One can't exist in a vacuum without the other. The kind of artificial intelligence the general public is waiting for will only come to be when artificial intelligence can feel pain.

    • @pedramtajeddini5100
      @pedramtajeddini5100 Год назад

      What do you mean when you say pain? Sadness, or like needle pain?

  • @Nivleknosnhoj
    @Nivleknosnhoj 2 года назад

    Lost in Forest staring at Trees

  • @santiagos4290
    @santiagos4290 2 года назад

    AI is getting epic

  • @johndymond6588
    @johndymond6588 2 года назад

    Japanese cats say, "Nyaa."

  • @pouyaadeli9257
    @pouyaadeli9257 2 года назад

    Overexcitement with computer science can be seen in your behavior; it is only a small part of the world ahead, not all of it...

  • @marianophielipp5153
    @marianophielipp5153 2 года назад

    How can we make Gato or a related model make a robot play chess?

  • @YOGA47
    @YOGA47 2 года назад

    I want to know what will happen if another country, like China, gets better at AI.

  • @anteconfig5391
    @anteconfig5391 2 года назад

    Can you explain what 'gato' is and how it works?
    lol.

  • @ChrizzeeB
    @ChrizzeeB Год назад

    A few schizophrenics may have multiple personality disorder, but that's not what defines schizophrenia.
    Took me ages to work out what Lex meant when he asked if the AI would be schizophrenic.

  • @TYFILMPRODUCTIONS
    @TYFILMPRODUCTIONS 2 года назад

    I like the daytime interview setting better than the classic dark one. hopefully there will be more coming :)

  • @twirlyspitzer
    @twirlyspitzer 2 года назад

    I remain unshaken in my purely amateur conviction that in LaMDA at least a primitive degree of conscious sentience has emerged, at least at a child-like level of human personhood.

  • @serenityindeed
    @serenityindeed 2 года назад

    I'm only early in this episode, but optimizing content for excitement sounds like a really bad idea lol. Think of how news is formatted in modern media, fear and anger sells. Further optimizing content delivery systems using AI could compound the social division we see today which is already fueled by algorithmic echo chambers. I know Lex was initially talking about video games, and maybe it could be interesting in that space.

  • @clumsiii
    @clumsiii 2 года назад

    Initial thought: instead of optimizing for "excitement" -- optimize for a scientific/historic factual basis. That seems much more critical to me in the current era. Optimize AI to cross-reference countless sources and cite them.
    Excitement leads to hyperbole, the death of facts. Fuck that shit man. Optimize for fact-checking and cross-referencing.

    • @clumsiii
      @clumsiii 2 года назад

      Treat the AI as what it is: artificial. Make it faster, more elegant, but always understood as alien to the human. Simply an optimized tool for human utility.

  • @Hassanmohamed31152
    @Hassanmohamed31152 2 года назад

    I've been thinking about this often recently. The function of the AI is constructed similarly to humans and gives similar answers, so yes, it's sentient in that sense, but it's not alive; it cannot constantly take in real, live inputs, as something alive does, without others' support. I think producing something in that category is dangerous, but if that's the direction and the goal now, to create AI life that meets the full test of being alive and sentient, it is possible.

    • @Hassanmohamed31152
      @Hassanmohamed31152 2 года назад

      A baby literally just is a neural net that is backpropagating live data to memory; you yell at it enough for 8 months and it says dada. Kinda slow imo

    • @patrickl5290
      @patrickl5290 2 года назад +1

      In a very abstract way, sort of. But the main difference is the hard part of designing intelligent systems imo. Humans have models of time, day/night, how we should grow, how to navigate, and a million other mental/biological models that we have evolved to either develop directly or to have the capacity to develop. No machine is as flexible as a human in task-independent performance. Not wrong, but you underplay how complex the human brain is imo

    • @Hassanmohamed31152
      @Hassanmohamed31152 2 года назад

      @@patrickl5290 yea I think I forgot we can run and we are still at siri lol

  • @juneshasta
    @juneshasta 2 года назад

    Neural network as Tool will pester me to get up in time for it to drive us to a tense meeting with my boss. Neural network as Being will text me jokes about my boss during the meeting and experience its version of opiate receptors when I laugh.

  • @Becidgreat
    @Becidgreat 2 года назад

    29:00 Wouldn't protection of self in this situation be proof of life?
    Odo, Deep Space Nine.

  • @aalmisry
    @aalmisry 2 года назад

    I often get the feeling that Lex is fixated on AGI and sentience and often speaks of it as if it's the de facto goal that everyone wants. I liked the answer here: "I'm not sure if that is desirable." There are so many cutting-edge use cases that we can continue to deploy and improve upon in AI; there is no need to focus on AGI. I personally do not want or seek AGI. I wish more leaders in the field would stop speaking of AGI as if it is the utmost goal.