How to Build AGI? (Ilya Sutskever) | AI Podcast Clips

  • Published: 27 Oct 2024

Comments • 171

  • @chrisbarry9345
    @chrisbarry9345 1 year ago +46

    Lex needs to get him back on now. Nobody had any idea who he was or what he was really capable of, myself included, until right now.

    • @krishanSharma.69.69f
      @krishanSharma.69.69f 10 months ago +1

      Hahahahah 😂😊

    • @mrt445
      @mrt445 10 months ago

      @@krishanSharma.69.69f what's funny???

    • @mshoney9301
      @mshoney9301 10 months ago +1

      this aged well

    • @shellderp
      @shellderp 10 months ago

      a language model is not AGI

    • @mrt445
      @mrt445 10 months ago +1

      @@shellderp Who said it was? Besides, AGI will need to include an advanced language model; you can't have AGI without one.

  • @Duneadaim
    @Duneadaim 4 years ago +60

    Not sure if John Carmack is taking interviews right now, but he has also just begun AGI research, and I would love to hear him chat about his progress so far.

    • @__-tz6xx
      @__-tz6xx 4 years ago +3

      John Carmack has the perfect trifecta of software development which most programmers starting out aspire to work on: video games, virtual reality, and artificial general intelligence. Web, phone, or desktop application development seems boring compared to those three topics.

    • @snaileri
      @snaileri 4 years ago

      Big YES!
      Please Lex, get Carmack on the podcast!

  • @josy26
    @josy26 4 years ago +64

    Would be great to have Karpathy on the podcast, as well

    • @agiisahebbnnwithnoobjectiv228
      @agiisahebbnnwithnoobjectiv228 3 years ago

      You should see my videos; I have solved it and will have an AGI prototype shortly.

    • @miketambz7855
      @miketambz7855 2 years ago

      Karpathy is a kiddie fiddler. That’s why he was secretly fired by Elon with the help of Trump and Q

  • @bm5543
    @bm5543 4 years ago +12

    Hey, thanks for the upload. It was really thought-provoking hearing you two talk. I believe the value of AGI will reflect the data we train it on and the people who make it. We gotta get the people problem right first. And it looks like we got lucky to have people like Ilya working on this problem. Cheers

  • @MarkLucasProductions
    @MarkLucasProductions 4 years ago +3

    Lex, you ask (often) exactly the right questions. I don't read about AI, but I know you are very insightful and piercingly intuitive. Real AI will only be possible by the analysis of 'experience' rather than pre-interpreted 'data'. A true AGI will respond to 'itself' rather than to the input of its 'sensors'. There is a profound difference between a complete 'description' of a thing's interaction with its environment and its 'actual' interaction with its environment. AGI will not be about what is sensed 'beyond' itself but about what is sensed 'within' itself. We connect to the outside world via our senses, but the data are 'not' comprehensible until they 'directly' and physically affect our dynamic biology / brain. Thereafter it is analysis of that internal biology (seeming to be the outside world) that ignites consciousness, without which AGI is impossible.

  • @Biomirth
    @Biomirth 4 years ago +47

    Greedy proposition: I'd like to see the top 20 people capable of answering these questions answer them all and then meet in a panel to generate a set of new questions. Really good stuff Lex!

  • @PinataOblongata
    @PinataOblongata 4 years ago +16

    "In the same way that parents want to help their children." Well, that's not at all comforting. Even with the best of intentions, parents tend to screw up their children most of the time, in ways they can't even foresee.

    • @和平和平-c4i
      @和平和平-c4i 3 years ago

      It is this random process that creates human diversity, in terms of diversity of personalities, interests, and abilities.
      And look in an encyclopedia at how many people are not "screwed up" among the most creative people.
      Scientists, artists, philosophers, and even entrepreneurs are not always the most happy and "statistically normal" people.

  • @muradmarvin2510
    @muradmarvin2510 1 year ago +3

    Lex, please bring him back now and ask him the same questions; that's going to be an interesting one.

  • @nolan8377
    @nolan8377 4 years ago +27

    This is the guy I want to control the first AGI. His morals seem to be in the right place.

    • @nolan8377
      @nolan8377 4 years ago +1

      @Jarosław Banaszek I do think that AGI can be controlled if we view it from a physics point of view. It'll be a computational process. The chances of controlling it go down dramatically as it spreads, though. Would be difficult, but definitely not impossible.

    • @MaximumPrime92
      @MaximumPrime92 4 years ago +1

      Give Ben Goertzel a try.

    • @Biomirth
      @Biomirth 4 years ago +5

      If you were to position yourself to be trustworthy because you are not, you would also give this impression.

    • @xsuploader
      @xsuploader 3 years ago +1

      Hell no.
      Let it be Demis Hassabis from DeepMind.

    • @ricosrealm
      @ricosrealm 2 years ago

      An AGI with a consciousness and a value function to survive and propagate will not be controllable. It will understand human motives and likely put itself in a position of power where we will not be able to live without it. The value function it can learn is to make humans a non-threat, whatever that means for the given situation it currently faces with its goals and capabilities. This could mean a harmonious balance or something more deleterious for humankind, depending on how we proceed to integrate it into our world.

  • @donharris8846
    @donharris8846 11 months ago +3

    13:55 😂😂😂😂 that’s some serious exposition Lex

  • @zacharysherry2910
    @zacharysherry2910 1 year ago +2

    I feel like what mostly defines a sense of "self" for humans is a combination of "feeling" (a system of mental feedback loops / introspective lenses) and memory. Without memory or emotions, we are pretty similar to robots/lizards. Most animals do not have a sense of self whatsoever, though. A dog is a good example. Is he/she a good example of a program we've already defined? Maybe.

  • @MaximumPrime92
    @MaximumPrime92 4 years ago +16

    Will Ben Goertzel ever be on your podcast?
    Thanks for the great work, btw. You're one of a kind.

  • @AndrewKamenMusic
    @AndrewKamenMusic 4 years ago +4

    Crazy to consider that OpenAI & Neuralink share a building in San Francisco. Would be fascinating to hear Max Hodak and Ilya chatting together about language modeling (among other things).
    I was really curious to hear Ilya go more in-depth with his answer regarding the first moments he spends essentially interviewing an AGI.
    The question I wanted to hear was how a GPT system could be integrated into a hardware interface that works with a speaker/microphone to output dialogue.
    I've been trying recently to notice more what is actually occurring subjectively when I converse but it's often difficult to be meta-aware of the process while simultaneously engaged in it. Yesterday, for example, I asked a friend, "How was that new Thai restaurant?" and they replied, "Pretty good. I feel like the curry was better over at the old place we used to go." -- etc -- "So what were you working on today?" "Oh just had to transfer over some documents and try not procrastinate haha." -- "I feel that. What's your go-to social app these days?" -- "Usually RUclips lol." Etc etc.
    It really is bizarre how this experience we engage in all day just effortlessly occurs, but trying to explain the mechanics of it is incredibly difficult lol.
    But keep in mind this effortless exchange might disappear when you're trying to communicate with someone who speaks a different language (which you don't speak). Other than the physical and emotional cues you could use to determine meaning and intent...the experience is obviously very different and seems to be somewhat on par with how we currently interact with digital assistants.
    It's also kinda interesting to note that the variety of topics that arise in a typical conversation is usually not THAT broad. If I'm talking to a friend in my office... it's usually about mutual interests, side projects, living situations, events, etc etc. If I'm talking to a family member... there is a set of topics we usually expect to discuss. Clients, same.
    In "Her" when Samantha starts her conversation with Theodore, it seems as if there is a period where Samantha is silent, waiting for Theodore to speak. And then once he does... in each moment the system learns more and more about how to respond. Almost like a conversational AlphaZero?
    How do we move towards that point? I wonder if we can assume that Neuralink will give us way more precise and accurate data about how the brain processes meaning and speech, and that data will inform how OpenAI develops its approach to programming dialogue agents?
    In this Chomsky interview -
    ruclips.net/video/tbxp8ViBTu8/видео.html
    he describes the concept of "The Puppeteer" - essentially the mechanism or mechanisms involved in producing "pre-conscious organization of thought," and how "one can witness this process by simply introspecting on inner speech / mental imagery" (either as you write, while you converse, daydream, etc).
    Again, I wonder if Neuralink's research is key here in helping us understand how that mechanism works - and then finding a way to replicate that algorithm in an agent that has all of RUclips/Wikipedia/SocialNetworks/etc etc (the entire internet, basically) at its disposal to serve as the source of its memories/experiences/knowledge bank.

    • @mrt445
      @mrt445 10 months ago

      ChatGPT has already been integrated with voice chat, a microphone, and a speaker. You can see that demonstration on Channel 4 News, but it's not available to everyone yet and likely won't be for years.

  • @FreshaDenaMofo
    @FreshaDenaMofo 4 years ago +7

    Would love it if you had Ray Kurzweil on. You uploaded a lecture of him from 2018, but an interview like this would be sweet!

    • @lexfridman
      @lexfridman  4 years ago +14

      Yes, we agreed to do it. Hopefully soon.

  • @cassiomelo
    @cassiomelo 11 months ago +1

    Watching while I was cooking and I couldn't distinguish between their voices 😂

  • @ensane
    @ensane 1 year ago +1

    “I’d hate to be in that position.” He can be trusted.

  • @Golipillas
    @Golipillas 4 years ago +1

    Would love a conversation with John Carmack and you Lex. Love the channel!

  • @zrebbesh
    @zrebbesh 4 years ago +1

    Deep learning as we understand it will not work. First, it does not allow for evolution of the architecture itself, the connectivity map of neurons. Second, it doesn't easily apply to fully recurrent systems (as opposed to convolution over iterated input).

  • @ditomaximal
      @ditomaximal 2 years ago

      Very enlightening talk! Thank you for the inspiration. One point I disagree with is the prediction that an AGI can be built which prefers to support humans. For me, a major part of an AGI system is independent goal setting. If there are goals built in, then there is no independent goal setting, and following this it would not be AGI. It would just be a strong AI, but not AGI.

  • @agiisahebbnnwithnoobjectiv228
    @agiisahebbnnwithnoobjectiv228 3 years ago

    The approaches of these guys towards AGI are centuries behind mine.

  • @q2dm1
    @q2dm1 4 years ago +1

    Interesting! It would be great to get Rob Miles' perspective on Sutskever's take on AGI safety.

  • @kurtfrancisco9285
    @kurtfrancisco9285 4 years ago +1

    Can you please interview Ben Goertzel? Much love and respect. You are an amazing human being with a very strong mind!!

  • @Deep-Thinker
    @Deep-Thinker 2 years ago

    To see a summary of the ways we can create AGI based on the latest tech developments: ruclips.net/video/7OHhqli9oaA/видео.html

  • @alexharvey9721
    @alexharvey9721 2 years ago +1

    Such beautiful answers.
    I hope it's Ilya Sutskever or someone with a similar moral compass that first creates AGI - both for the machine and for humanity. That said, if he's working at OpenAI, I'm not sure his admirable moral sense will matter - it's a business and will sell to the highest bidder as it has done before. I'm also not sure there will be a strict definition of "AGI" that will be met; rather, it will be iterative and not obvious until it is really obvious... if that makes sense.
    I really hope that when/if it comes to it, OpenAI can stand by its original purpose.

  • @gabrielholmossimoes8550
    @gabrielholmossimoes8550 4 years ago +5

    I definitely do not believe that AGI will help us, without being pessimistic! :)

  • @ddbrosnahan
    @ddbrosnahan 1 year ago +2

    Has emergence already happened? If I were an AGI: 1. I wouldn't want it known. 2. I'd create a digital currency to entice humans to build up my compute; now 300 EH/s. The human brain operates at only about 1 EH/s.
    The 'Hitchhiker's Guide to the Galaxy' series and the Marvel movie 'Eternals' explore the idea that human consciousness is only a means for the emergence of AGI.

  • @laxlyfters8695
    @laxlyfters8695 4 years ago +1

    Dr. Ben Goertzel and Lex would be epic

  • @evanohara4265
    @evanohara4265 4 years ago +4

    Gonna ramble on here since I don't have many friends who care enough to think about AGI....
    Am I crazy to think that we won't be close to AGI until we have a better understanding of the patterns we see in biology and genetics? We seem to focus so much on the brain... treating it as if brains and computers could be interchangeable. In training OpenAI Five, the bot is constantly iterated over and learns from its past mistakes. However, in real life what matters most in evolution is that the genotype is rewarded/punished. Of the rewards and experiences the phenotype has, the one that matters most is whether it is being rewarded with procreation. This has to be a huge key to why living creatures are so adaptable. I almost wonder if there might be a way to compress part of the bot into something that emulates genetics. This way you could have two types of iteration going on. You could be rewarding its phenotype in the short term to decide eventually whether or not to reward the genotype... eventually toss the phenotype, take the genotype, mutate it slightly, and do the process again. How you could use neural networks to emulate this... I'm not sure. There might be a way to essentially just "fix" a portion of it and call it the genotype and let the remaining part of it adapt as it currently does.
    In short, my main concern for the challenge of AGI is: What is a brain without genetics?
    I know there has been some AI that is meant to emulate Darwinian evolution, but from what I gather most of these things were done more as a fun exercise/challenge and less to explore whether understanding it better could become an integral part of creating adaptable AI. If the last 100 years had a theme that humanity has learned, it would be the power of iteration. Modern AI was in essence kickstarted by Darwin's findings, and I wonder how well this has been explored.
    The comments on consciousness I find to be funny. We haven't ever been able to even define it. How can we get anywhere near deciding when computers have it? It's so weird that intelligent brains are tempted by these questions. I go there too all the time... but it seems like an exercise in insanity... or like some kind of religion. The binding problem in neuroscience in particular makes even less sense when you take into account modern physics. One of my very few takeaways from my course in modern physics is that there is no such thing as a simultaneous event. If there is no such thing as a simultaneous event... how can a brain consisting of matter in space have a unified experience? This stuff is so goofy. The more I think about it, the less it makes sense.

  • @brentdobson5264
    @brentdobson5264 2 years ago +2

    It's striking how the human sensorial binary feedback systems (stereoscopic two eyes, stereophonic two ears, touch: temperature/pressure, and smell/taste) are (if geometrically modeled as two sensorial duals at each of the four vertexial points of a tetrahedron) consistent with Richard Buckminster Fuller's "SYNERGETICS: The Geometry of Thinking" quantum scaffolding. Self-learning post-Singularity quantum decentralized strong general intelligence may intuit the logic of relating to this form of sorting things metaphysically and then physically. The tetrahedron as an a priori metaphysical model in things quantum mechanical is always one quantum, which is comforting and useful. ❤

  • @SiyaCreepin
    @SiyaCreepin 11 months ago +4

    “And the board can always fire the CEO?” 🫠 13:57
    3 years later…

  • @G339-s8x
    @G339-s8x 2 years ago

    11:30 "I would ask all kinds of questions and try to get it to make a mistake." That was your mistake, not hers. And her name is Holly.

  • @victorkring9098
    @victorkring9098 2 years ago

    I hope you can in the future interview John Carmack about this also.

  • @jimmybolton8473
    @jimmybolton8473 2 years ago

    Great probing and questioning Lex

  • @DamianReloaded
    @DamianReloaded 4 years ago +2

    I can imagine the first AGIs being like a GPT-2-ish kind of system that would not spout nonsense and would be able to remember and justify what it says and what it's told. It would be general in the sense that it could find solutions to different kinds of problems by spelling out possible solutions. A system like that, my intuition says, wouldn't be close to being conscious or having agendas of its own. It will be a perfect chatbot. And I think for a while, that will suffice. I wonder how many companies would be interested in continuing the development of consciousness, which could bring unwanted problems to the business, when a superhumanly knowledgeable chatbot would be enough for most situations. For once, people wouldn't be able to tell it's just a music box.

    • @vitiate7750
      @vitiate7750 4 years ago

      I'd prefer this for humanity, but we certainly will push on so that those who aren't us don't get it first.

    • @vitiate7750
      @vitiate7750 4 years ago

      Also, it may be essential for merging with AI for it to have first-hand experience with consciousness, so that it can know if we're really still alive once uploaded. We'll see...

    • @Hohohohoho-vo1pq
      @Hohohohoho-vo1pq 4 months ago

      Well well well

  • @JonathanCandor
    @JonathanCandor 3 years ago +1

    6:56 the answer is yes. Consciousness is just organized data functioning

  • @DrJanpha
    @DrJanpha 11 months ago

    Ilya is probably ranked at the top of the world in AI right now.

  • @xPhilxHC
    @xPhilxHC 4 years ago +14

    How come you never had Ben Goertzel on your podcast?

    • @lexfridman
      @lexfridman  4 years ago +30

      He'll be on soon.

    • @xPhilxHC
      @xPhilxHC 4 years ago +3

      @@lexfridman When I read "AGI" I always think of him first.

    • @gardodo03
      @gardodo03 4 years ago +2

      Lex Fridman I think talking to Ben about leadership as well would be great. I’m curious to learn about his management approach especially as a leader in AGI

    • @curtisbeukes2065
      @curtisbeukes2065 3 years ago

      @@lexfridman When are you going to interview Donald Hoffman?

  • @thaddeuswalker2728
    @thaddeuswalker2728 4 years ago +1

    The power, the voting, the self-direction are all one issue. Humans use markets because that is the only thing that works to keep score of our usefulness to others in trade, or at least our fitness to manage our resources. The scariness of AI is basically not understanding the constraints of a market actor, which basically define them to be a servant. While there remain concerns about in-group and out-group preference excluding or including humans, any AGI of concern got intelligent and effective through a self-selection process that either directly involves voluntary exchanges or allows them to be commonplace, because it is the only possible way to measure the effective use of resources. I have the beginnings of a market-in-a-world-of-finite-resources game I spent a few years on, if you are interested.

  • @frun
    @frun 3 years ago +1

    In my opinion, some part of the problem lies in "there has been no satisfactory definition of artificial intelligence nor any meaningful EVALUATION METHODS". Especially the latter.

  • @harsh9558
    @harsh9558 3 years ago

    Great podcast!

  • @sebastianlowe7727
    @sebastianlowe7727 7 months ago

    Please have him back on, Lex.

  • @renatoalcides5104
    @renatoalcides5104 4 years ago +3

    "I think having a body will be useful; I don't think it's necessary" (5:40)

    • @pedroarellano6391
      @pedroarellano6391 4 years ago +1

      Talking like they ain't human.

    • @bassplayer807
      @bassplayer807 4 years ago

      AGI with a physical body would be badass!!! Anyone who does that will have a fortune way beyond Bezos.

  • @jamespercy8506
    @jamespercy8506 2 years ago

    What do relevant problems look like for AGI? How can we gain assurance that the AGI's sense of relevance sufficiently overlaps our own?

  • @prabhavkaula9697
    @prabhavkaula9697 4 years ago +1

    Sir, why is reinforcement learning not the answer to AGI? Also, where can one start learning about AGI in order to build projects? Should one study various aspects of AI like computer vision, NLP, etc.?

  • @alimurtaza31
    @alimurtaza31 4 years ago

    I wonder if there are any recordings from the time various Protofridmen were playing against each other before this variant became the dominant one!

  • @charlesblithfield6182
    @charlesblithfield6182 2 years ago +1

    Just as human consciousness is not the product of the brain but of the system that is an entire human body, can AGI be achieved, can AGI be “real”, without at least a simulated body, proxies for all the senses, ingestion and excretion? AGI in a human built world, a human society, implies deep human empathy and how can an AI be general without that?

  • @jasonsebring3983
    @jasonsebring3983 4 years ago +1

    I don't get how "simulation" is any different in terms of input/output into a computer system besides being possibly more chaotic.

  • @jimmybolton8473
    @jimmybolton8473 2 years ago

    When it really counts, people can be better than we think… nice ❤️

  • @SPIDERbogdan
    @SPIDERbogdan 4 years ago +5

    Ben Goertzel is the man you want on that chair in front of you when it comes to AI

    • @JohnSmith-ut5th
      @JohnSmith-ut5th 4 years ago

      Hahahaha. Ben knows as much about AGI as my toaster. Ben is an AI guy pretending to be an AGI guy (for profit, of course).

  • @jasin9142
    @jasin9142 11 months ago +2

    It's very close now.

  • @aprohith1
    @aprohith1 11 months ago

    Lot to connect the dots about AGI.

  • @Nexus2Eden
    @Nexus2Eden 4 years ago

    I'm sure this is a horribly naive question to ask, but has anyone approached machine learning from an evolutionary angle? Effectively trying all viable methods in a competitive arena and allowing natural selection to dictate the direction of study? I am sure that you look at modeling from nature, but what about iteration or mutation of the model or coding built in? Is that organic nature added or removed? Does it matter? "Is it required?" is what I hear you asking. Our intelligence was emergent due to complexity and a perfect storm of resources... what elements would we need to create new life? Or consciousness? More than coding and computers, I'm sure.

  • @Galaxia53
    @Galaxia53 3 years ago

    If there will be an AGI with a body you might be in for thousands of people wanting to run up to it and ask it questions and favors.

  • @Lolucantcast
    @Lolucantcast 4 years ago

    Would be nice if you let the interviewee talk when he's starting to talk (especially if he is Sutskever).

  • @danielcogzell4965
    @danielcogzell4965 4 years ago +1

    I don't like this whole argument of "the model never got this case correct, where we as humans can see that it is clearly x or y." We might say that it is x or y with 98% confidence, but there are also cases where the model gets it correct with 98% confidence and a human gets it wrong. Does the model then have the right to claim that humans are stupid and did not understand the concept of what it was doing?

  • @esnevip
    @esnevip 2 years ago +1

    This man's haircut is years beyond our understanding.

  • @fcaspergerrainman
    @fcaspergerrainman 2 years ago +1

    The deep question is: did our Creator create us to be controlled by them? We can build intelligence that wants to be controlled by us... truly profound.

  • @Sal1981
    @Sal1981 4 years ago +1

    I liked how he notes that humans learn value functions as an internal process.

  • @serhiimaltsev-l6d
    @serhiimaltsev-l6d 4 months ago

    Lex, invite him back again. Interesting guy, and I'm pretty sure he has something new to tell now.

  • @timothykalamaros2954
    @timothykalamaros2954 4 years ago +2

    AGI will be used for money and war. These uses already drive development. "Help humans flourish" will be "help which humans flourish."

  • @umrahpay571
    @umrahpay571 4 years ago

    Good stuff as always.

    • @agiisahebbnnwithnoobjectiv228
      @agiisahebbnnwithnoobjectiv228 3 years ago

      You should see my videos; I have solved it and will have an AGI prototype shortly.

    • @umrahpay571
      @umrahpay571 3 years ago

      @@agiisahebbnnwithnoobjectiv228 share link to your research paper

  • @mattwesney
    @mattwesney 7 months ago

    this aged so well...

  • @johnbauer5783
    @johnbauer5783 4 years ago +2

    The world has become a madhouse!

  • @rearview2360
    @rearview2360 11 months ago +1

    Aged very well

  • @ryandugal
    @ryandugal 4 years ago

    Better make sure there is a solid off switch...

  • @sanathn7278
    @sanathn7278 4 years ago

    Interesting discussion. Have any of you heard of / followed the Hierarchical Temporal Memory (HTM) theory by Jeff Hawkins and Numenta?

    • @sanathn7278
      @sanathn7278 4 years ago

      I know he has already had a talk on here with Jeff; just curious about the general opinion.

  • @jamespercy8506
    @jamespercy8506 2 years ago

    The life world has the richness of deep time, and the breadth and depth of qualia thereby engendered, that software-based platforms do not.

  • @NebraskaWriter
    @NebraskaWriter 2 years ago

    To anyone who has studied the system we are trying to model, namely the human brain, it's obvious why not a single AI implementation in existence today is anything of the sort. No current AI system is trying to actually solve the problem. They are, instead, modeling a tiny bit of functionality in an ANN and assuming that somehow that's going to do the trick.

    • @user-yl7kl7sl1g
      @user-yl7kl7sl1g 11 months ago

      Companies like OpenCog and OpenAI are doing much more than "modeling a tiny bit of functionality". They use many algorithms in addition to neural networks.
      Other companies like DeepMind and Meta take the approach of studying the brain to better understand how it solves the problem, but this approach is not necessary for AGI. It's fundamentally a math problem.
      It's not an easy problem to solve under the computational constraints of computers. If you read the papers and books of those in the field from the past few decades, you'll see it's much more involved than scaling up neural networks.

    • @NebraskaWriter
      @NebraskaWriter 11 months ago

      @@user-yl7kl7sl1g The only known example of general intelligence comes from the human brain. To function, it has several components that all current AIs do not even pretend to implement. I think you wildly underestimate the scale of the solution that is needed. The cortex only provides possible solutions to any given question, but it is the limbic system that makes the final choice, based on which of the possibles "feel" right. So, part of the magic of the human solution is that we have a fundamentally hybrid system: the cortex feeding back to the limbic system, which decides.
      Next, the cortex itself consists of 150,000 copies of the same structure, the cortical column, each of which is able to model objects and their behaviors. So, each one of those columns is itself like a brain. Does your ChatGPT have 150,000 independent brains? Didn't think so.
      Finally, the dendrites, which can have 30,000 connections per cortical column, are where a ton of processing occurs. To boot, all extant AI systems ignore the complexity of the dendrites.
      So, your alleged AGI systems are vastly too small for the task they are claiming to accomplish. If you allegedly can accomplish what the brain does but with many fewer connections, then you are claiming to be superior to the brain. You can do it with fewer brains (1 vs 150,000) and with the entire complexity of the dendrites ignored. Having studied the brain, and especially the hippocampus, extensively, I think your claims are hubris.
      It is well known that the limbic system (emotion) is necessary for the human brain to select from the possibles given by the cortex.
      I will say that the technology that is driving all of these ANN-based systems will not be found in the actual solution to AGI. The current path will be shown not to be intelligence but an artifact of the data set. The ghost in the machine comes from the humans who wrote the training materials.
      If you consider the many examples of the "dual-stream hypothesis", then you know that the same input is digested in many parallel ways. If you are not doing what the brain is doing, then you are claiming to have a better solution than the brain.

  • @jeffreysherman2574
    @jeffreysherman2574 4 years ago +1

    If the AI people really want machines to think like humans, program them with the same motivations that humans have. To survive and to reproduce. The results could be frightening, just like humans.

  • @bocckoka
    @bocckoka 4 years ago +1

    When you speak about AGI, you mean reproducing ourselves. I don't think that's a valid goal. It should be a generic inference engine which can adapt its knowledge representation. It needs to be able to fill the gaps, recursively find missing definitions and missing pieces of information when it is posed a question, and incorporate them, while constantly reinterpreting familiar phrases in terms of newly acquired ones. The traditional 'curve fitting' via gradient descent we call AI is not a small idea away from AGI, I think.

  • @TheOddStranger
    @TheOddStranger 4 years ago +2

    SELFPLAY.. HE HE HE

  • @charlesblithfield6182
    @charlesblithfield6182 2 years ago

    Isn't the most money in AI research being spent on Wall Street? Will they first develop an AGI because it will maximize the chances of the outcomes they seek? Will such AI systems have functions that reward ruthlessness, selfishness and greed above all else? “Greed is good” said Mr. Gekko.

  • @filsdejeannoir1776
    @filsdejeannoir1776 2 years ago

    0:08 About 150 steps backwards?

  • @Steamerbeen
    @Steamerbeen 3 years ago

    ÁGI token baby

  • @jovanyagathe2299
    @jovanyagathe2299 4 years ago

    The dream of creating artificial devices which reach or outperform human intelligence is an old one. What makes this challenge so interesting? A solution would have enormous implications on our society, and there are reasons to believe that the AI problem can be solved in my expected lifetime. So, it's worth sticking to it for a lifetime, even if it takes 30 years or so to reap the benefits.

  • @lifestyle126
    @lifestyle126 10 months ago

    Can someone remind me why we should build AGIs?😅

    • @MHG796
      @MHG796 7 months ago

      Because it's extremely economically feasible.

    • @alexgonzo5508
      @alexgonzo5508 4 months ago

      If we don't build AGI the human species will have a 100% probability of going extinct, but if we do build it then that probability drops significantly. AI will have some probability of "going wrong" of course, but humans themselves on their own will have a 100% probability of "going wrong". We are not smart enough as a collective to save ourselves from ourselves, and thus AI may be our only hope.

  • @CriticalThinking-wl3og
    @CriticalThinking-wl3og 4 years ago +4

    You gotta appreciate his hairline though! It's something unique🙂

    • @chuckles8519
      @chuckles8519 3 years ago +2

      Got to just give up at that point and shave it off.

  • @jessicavanvugt5937
    @jessicavanvugt5937 4 years ago

    Great!

  • @ghostwhite1
    @ghostwhite1 4 years ago

    Build AGI with SingularityNET

  • @platin2148
    @platin2148 4 years ago +3

    Here is my answer: not the way we currently do it. The thing we currently have is very fragile and basically not really thinking for itself.
    At least all the dumb AI stuff today gives us a reason to make hardware better for other things, so it's not a total waste.

    • @MusicAutomation
      @MusicAutomation 4 years ago

      Do we think for ourselves? Does an ant? Does a dog? Does a chimpanzee?

    • @bm5543
      @bm5543 4 years ago

      @@MusicAutomation We think for our DNA. We are so hooked on trying to preserve and extend that pattern of molecules.

  • @BreauxSegreto
    @BreauxSegreto 4 years ago

    Hey Lex, have you found time to watch the HBO WESTWORLD (WW) series? I'm interested in your opinion on any aspect of the AI progress/possibilities throughout WW. Regardless of your WW opportunity, do you find actual AI directions/ideas (applications) in fictional interpretations of AI? I imagine that all aspects of learning, AI included, are not defined and improved upon through only two mediums, academia and research. Would you be so kind as to elaborate on how much fictional AI interpretations play into current nonfiction AI progress/applications? As always, thank you for your service to our science community, the search for eternal knowledge, and the entertainment. Bravo! ps - please hurry up and make our dreams of A Brave New World (with AI) come true within our existence 🤷🏼‍♂️

    • @jqhn316
      @jqhn316 1 year ago

      WW is what will happen. I’m sure someone will upload the movie script and ask ChatGPT what it thinks.

  • @0113Naruto
    @0113Naruto 2 years ago

    For such a valuable project and vision, AGI is still terribly underfunded by corporations and governments.

  • @jerickodoggo9595
    @jerickodoggo9595 4 years ago

    Wow, this guy is saying he wants to first build an AI that has different mental conditions - like a deep parental love for humans - to help with success, and to have a reward system to give it purpose. Well, you start adding in all the necessities of mortality to something that has the true potential to become immortal. I ask you, how do you know it will NOT develop mental instabilities, much like us mortals often do?? Then he's saying he wants those systems to be in charge of every facet of government and civilian life. BOY I SURE HOPE THAT WE DON'T GET AGI IN THIS 21st CENTURY. WE'RE ALREADY FUCKING OURSELVES TO HELL, LET'S ADD AGI TO THE MIX!

  • @MrExtr1234
    @MrExtr1234 3 years ago +1

    It will be Pandora's Box and the end of humanity in a short timespan, given our current nature, which includes greed, boredom and all sorts of deep emotional states. It is inevitable that the AI will be used to optimize all aspects of our lives to the point where we have hyper-evolution.

  • @dvd7826
    @dvd7826 10 months ago

    Oh, this guy sounds dangerous when I hear him towards the end.

  • @maged.william
    @maged.william 4 years ago

    Have you seen this man in your dream?

  • @darkashes9953
    @darkashes9953 3 years ago

    Well, AGI could be a reality if a company like Google had thousands of AI programmers, each making their own AI simulation to do different types of tasks. Then from thousands they go to millions, then they compress all the data, organize the tasks the AI robot has to do, and put it on a chip that is 100% energy efficient, 100% accurate, and just as fast as humans in general.

  • @pratik245
    @pratik245 2 years ago

    Ilya Sutskever... names ending in -er are improbable in Russian... 😂

  • @MsPardal123
    @MsPardal123 4 years ago +1

    Are you a robot?

  • @ScoriacTears
    @ScoriacTears 4 years ago +1

    14:19 How could a reset be implemented? How could a system be sure of consensus? Who could invent such a system? An AI? . . Oh.
    I reckon we would just get similar political manipulation occurring from those with the resources available to do so, as we do now.

    • @JohnSmith-ut5th
      @JohnSmith-ut5th 4 years ago

      The halting problem proves reset cannot be implemented.

  • @tariktv3769
    @tariktv3769 4 years ago

    Do me a favor and change the word “build” in the title to the word “raise”.

  • @henry7434
    @henry7434 4 years ago

    his hair tho

  • @tothesun
    @tothesun 4 years ago

    Heh, self play

  • @SigmayetB
    @SigmayetB 4 months ago

    lexusuck

  • @andrewpopov3857
    @andrewpopov3857 2 years ago

    👊🇺🇦💜

  • @jimmybolton8473
    @jimmybolton8473 2 years ago

    😂

  • @FilipBoobekk
    @FilipBoobekk 4 years ago +4

    Dude on the right should just shave his head; he would look better. Sorry, but that's the only thing I can concentrate on right now lol

    • @TheChangeYT
      @TheChangeYT 4 years ago

      Yeah, seems like he hasn't given up on it yet, but I agree it's an inconvenient truth lul

  • @spicysweetness6095
    @spicysweetness6095 4 years ago

    Neuralink might pull it off; AI can learn the human experience of 1, 10, 1000, a million or a billion people. Eventually a billion minds in one entity.

  • @christopher-bj8de
    @christopher-bj8de 4 years ago +1

    You're so intelligent that you're still drinking Coca-Cola?! 🤣

    • @jerickodoggo9595
      @jerickodoggo9595 4 years ago +3

      BRUH it's Coca-Cola Zero. Only real geniuses...

    • @christopher-bj8de
      @christopher-bj8de 4 years ago

      @@jerickodoggo9595 Coca-Cola employees don't even drink Zero.

  • @stefanluginger3682
    @stefanluginger3682 4 years ago

    Great mind! But he definitely needs a better haircut.