The code for AGI will be simple | John Carmack and Lex Fridman

  • Published: Oct 3, 2024
  • Lex Fridman Podcast full episode: • John Carmack: Doom, Qu...
    Please support this podcast by checking out our sponsors:
    InsideTracker: insidetracker.... to get 20% off
    Indeed: indeed.com/lex to get $75 credit
    Blinkist: blinkist.com/lex and use code LEX to get 25% off premium
    Eight Sleep: www.eightsleep... and use code LEX to get special savings
    Athletic Greens: athleticgreens... and use code LEX to get 1 month of fish oil
    GUEST BIO:
    John Carmack is a legendary programmer, co-founder of id Software, and lead programmer of many revolutionary video games including Wolfenstein 3D, Doom, Quake, and the Commander Keen series. He is also the founder of Armadillo Aerospace, and for many years the CTO of Oculus VR.
    PODCAST INFO:
    Podcast website: lexfridman.com...
    Apple Podcasts: apple.co/2lwqZIr
    Spotify: spoti.fi/2nEwCF8
    RSS: lexfridman.com...
    Full episodes playlist: • Lex Fridman Podcast
    Clips playlist: • Lex Fridman Podcast Clips
    SOCIAL:
    Twitter: / lexfridman
    LinkedIn: / lexfridman
    Facebook: / lexfridman
    Instagram: / lexfridman
    Medium: / lexfridman
    Reddit: / lexfridman
    Support on Patreon: / lexfridman

Comments • 848

  • @zantrex4
    @zantrex4 2 years ago +388

    John Carmack is so refreshingly gifted at conveying complexity in simple terms. Truly genius.

    • @magnuskallas
      @magnuskallas 2 years ago +3

      I agree. And I've got my opinion... Teach AI to be poetic and it will understand the human condition. As said, written on the back of an envelope...

    • @114Riggs
      @114Riggs 2 years ago +10

      Personally, I think it's a skill he had to develop over the years, having to work with people who don't share his way of thinking.

    • @RobCoops
      @RobCoops 1 year ago +6

      @@114Riggs I'd almost say that he is so smart that it's relatively easy for him to understand the lack of understanding on the other party's side. I think most people see a very smart person as someone who lacks emotional and social intelligence. But if those two are not an issue, as with John Carmack, it makes sense that someone that smart can recognize the lack of understanding and is smart enough to dumb it down.

    • @114Riggs
      @114Riggs 1 year ago

      @@RobCoops Perhaps. I'll dare to say I'm 50/50 on the subject now.

    • @RacoonEvil
      @RacoonEvil 1 year ago

      Succinct*

  • @Kobe29261
    @Kobe29261 2 years ago +68

    The greatest gift of the internet is how much intelligent conversation you can 'eavesdrop' on. The most powerful people in the world 50 years ago couldn't draw on the insights an internet connection buys; it's staggering. You could return from herding sheep to listen in on this conversation. It's almost worthy of a moment of silence!

    • @cosmotect
      @cosmotect 2 years ago +8

      I'm with you on this. It's truly marvelous and underappreciated!

    • @mikecarter335
      @mikecarter335 1 year ago +8

      I actually returned from herding goats to watch this, so yeah, amazing world we are playing in.

  • @hw_plainview1179
    @hw_plainview1179 2 years ago +166

    I like this kind of intuition because it speaks of a man who has already cracked some really complex problems and found solutions that he can now reflect on and simplify.

  • @AntoniGawlikowski
    @AntoniGawlikowski 1 year ago +68

    It's a scary realization that this interview is 8 months old and already ancient history

    • @UbiDoobyBanooby
      @UbiDoobyBanooby 1 year ago +4

      And just a couple days ago Microsoft announced their LLM Longnet will operate with 1 billion tokens in a year or less. 8 years is gonna be more like next year. We’re gonna have AGI pretty quick.

    • @arab6745
      @arab6745 4 months ago +1

      The interview is even older now, and what he says still holds true. Everyone is talking about AGI these days, but nothing has fundamentally changed: LLMs are the same feed-forward networks, they have only gotten larger.

    • @Bebolife12345
      @Bebolife12345 1 month ago

      @@UbiDoobyBanooby
      LLMs by themselves aren't really propelling us towards AGI.

  • @thenoseplays2488
    @thenoseplays2488 2 years ago +96

    All I heard was that the guy who ends up creating this will probably do so in an effort to avoid Zoom meetings, by programming an avatar that acts enough like him, can answer basic questions, and knows when he has to dip to the bathroom and say "I'll get back to you on that."

  • @ApurvaSukant
    @ApurvaSukant 2 years ago +107

    Amazing how densely packed with information John Carmack's every sentence is. An amazing generalist!

    • @bobbyjonas2323
      @bobbyjonas2323 1 year ago

      JOHN CARMACK IS AN AGI!

    • @danielcockerill4617
      @danielcockerill4617 1 year ago

      I see generalist as more of an insult than a compliment. Still admire Carmack.

    • @starc0w
      @starc0w 1 year ago +5

      @@danielcockerill4617 The other way around. A generalist at a very high level in any discipline is what we call a universal genius, like Leonardo da Vinci.
      A greater compliment is not possible, in this regard.

  • 1 year ago +101

    I think for AGI to be similar to a being, it has to have a constant internal thought process in a feedback loop, the way we humans (or any animal with a brain) work: we process external inputs through our senses, but we also respond to internal thoughts that come from the brain itself. (A toy sketch of such a loop follows this thread.)

    • @daveinpublic
      @daveinpublic 1 year ago +10

      True. And that almost means that the AI has to be able to rewrite the way it thinks.

    • @odysy5179
      @odysy5179 1 year ago +12

      Agreed, I had this same thought recently as well. From my perspective (a CS guy, not a bio guy), it seems like humans learn from a wide array of different inputs from their environment along with complex internal feedback. If only there were a way to mimic this internal feedback.

    • @tawnkramer
      @tawnkramer 1 year ago +8

      We are multi-modal prediction engines which attach importance based on surprise. An AGI needs to capture those truths. Our reward is the experience of something which surprises us.

    • @jon...5324
      @jon...5324 1 year ago +15

      Everyone in this thread is correct (am neuro guy). The key is a feedback loop which integrates sensation (data input), memory (and preconceived patterns), and a working model of an environment in which the agent (AI) is embedded. All cognition that we'd recognise is embedded in an environment, embodied within sensation, enacted within the model of the world, extended to outside data storage and control, and (in a human context) socially determined.
      The big thing that's needed is a "default mode network" which acts as a system that filters incoming information and checks it against the model of both self and environment, to stabilise disorganised inputs into an organised model. The self and environment must both be modelled, and they must be fully interdependent. AGI must have egocentric cognition, not just allocentric modelling.

    • @Jem_Apple
      @Jem_Apple 1 year ago +8

      “I think therefore I am”.
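
    A minimal sketch, in Python, of the loop this thread describes: an agent whose next internal state depends on its previous internal state as well as on sensory input, with surprise (prediction error) deciding how much an input matters. Every name and constant here is invented for illustration; this is not anyone's proposed AGI architecture.

        import random

        class Agent:
            """Toy agent: the next internal state depends on the previous one
            as well as on sensory input (an internal feedback loop)."""

            def __init__(self):
                self.internal_state = 0.0  # stands in for memory / self-model
                self.prediction = 0.0      # what the agent expects to sense next

            def step(self, sensation: float) -> float:
                # Surprise = mismatch between expectation and actual input.
                surprise = abs(sensation - self.prediction)
                # Internal thought: the old state feeds back in, and the input
                # is weighted by how surprising it was.
                self.internal_state = 0.9 * self.internal_state + surprise * sensation
                # Update the (trivial) world model: a running prediction.
                self.prediction = 0.5 * self.prediction + 0.5 * sensation
                # The action depends on internal state, not just the raw input.
                return self.internal_state

        agent = Agent()
        for t in range(10):
            sensation = random.gauss(1.0, 0.5)  # noisy environment
            print(f"t={t} action={agent.step(sensation):+.3f}")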

  • @norbis3939
    @norbis3939 1 year ago +35

    I used to code neural networks in BASIC in the early 2000s on my PC. Obviously they were, well, basic, but the fact that I could get them to work at all should indicate that they're fundamentally simple to implement.
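
    For scale, here is roughly what such a from-scratch network looks like in modern Python: a 2-4-1 network trained on XOR with plain backpropagation, about the size of thing one could have written in BASIC. The layer sizes, learning rate, and iteration count are arbitrary choices; an unlucky seed may need more iterations.

        import numpy as np

        # A 2-4-1 network trained on XOR with plain backprop
        # (squared error, full batch, implicit learning rate of 1.0).
        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)
        W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
        W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

        for _ in range(5000):
            h = sigmoid(X @ W1 + b1)                # forward pass
            out = sigmoid(h @ W2 + b2)
            d_out = (out - y) * out * (1 - out)     # backprop through the loss
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= h.T @ d_out; b2 -= d_out.sum(0)   # gradient step
            W1 -= X.T @ d_h;  b1 -= d_h.sum(0)

        print(out.round(2))  # should approach [[0], [1], [1], [0]]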

  • @cobaltblue1975
    @cobaltblue1975 1 year ago +10

    @5:56 The thing that gets me about this statement is that he was drastically more optimistic than his peers in estimating when we would see some form of AGI. This interview was only 8 months ago, and it appears that even he wasn't optimistic enough, as companies start to announce that they are seeing "sparks of AGI" in current models. That's astonishing to me.

    • @sgstair
      @sgstair 1 year ago +3

      Yeah exactly, it's pretty amazing how much has changed in the last few months.

    • @arab6745
      @arab6745 4 months ago +1

      Except the software principles are the same: the models are getting bigger, but that doesn't change how they function. He's talking about different principles that could give rise to real intelligence. Companies want to get investors' money...

  • @ninny65
    @ninny65 1 year ago +19

    Lex on every subject ever "this will change the course of human history on such a fundamental level that we could not even comprehend the ramifications of such progress in the human experience"

  • @gladdingman
    @gladdingman 1 year ago +66

    "Animal intelligence is closer to human intelligence than a lot of people like to think; cultures and the modality of I/O make the gulf seem a lot bigger than it actually is." John C is the man.

    • @Synky
      @Synky 1 year ago +2

      I used ChatGPT to explain this quote to me better because I'm as smart as an animal. "IO" means "intelligence output," as taught to me by the AI... And I also learned that "gulf" wasn't necessarily referring to the Gulf of Mexico like I had initially thought 😂
      So... I think the point is well proven

    • @polandturtle
      @polandturtle 1 year ago +4

      That was a great and unexpected line of reasoning. Given there's some reasonable map between neural states and their language representations, and that animals have the capacity for the neural states but not the language representations, and that we can access those neural states with new tech, we are shockingly close to cybernetic enhanced talking dogs.

    • @forbiddenera
      @forbiddenera 1 year ago +2

      He meant I/O as in input/output, as in communicating with the intelligence. If you could speak to a light and it could understand English and likewise speak it back, that is what he means: input as in conveying information to the intelligence, like it understanding English, and output as in the intelligence conveying information to the outside world, such as speaking English.

    • @gladdingman
      @gladdingman 1 year ago

      @@forbiddenera hurt my head trying to read this. John said it very eloquently, you added nothing.

    • @michaelwerkov3438
      @michaelwerkov3438 1 year ago

      @Ben G it wasn't that bad. Plus the first sentence held the relevant point

  • @casachezdoom2588
    @casachezdoom2588 1 year ago +32

    I'm always amazed at how lit and quick-thinking John Carmack is. Never a dull moment

    • @johndalton4559
      @johndalton4559 1 year ago +5

      He's a genius. I instantly got that "vibe" from him, and it's not my first time here

  • @NOSTahlgia
    @NOSTahlgia 1 year ago +15

    "Animal intelligence is closer to human intelligence than humans would like to think"
    Absolutely agree, but I think most people agree too, and conveniently ignore it to feel better about ourselves, because acknowledging it would make it very hard to justify that tasty steak. The cognitive dissonance of Jefferson explaining the evil of slavery, yet continuing to own slaves to the very end

    • @magenta_magenta220
      @magenta_magenta220 1 month ago

      Jefferson actually wanted to free his slaves, but the laws at his time did not allow him to. Attempts were made to repeal those laws but were not supported by the majority of other politicians of the time. You should study the works of Thomas Sowell to know the full history.

  • @reev9759
    @reev9759 2 years ago +14

    John has confident humility in the way he speaks.

  • @antoniazhang310
    @antoniazhang310 1 year ago +6

    I like a lot of what JC said in this video, especially that consciousness developed continuously as a spectrum rather than a yes/no quality, and that animal intelligence is a lot closer to human intelligence than some people admit. I also like how he is not afraid of AGI but positive about making AI more human-like, and I mean not just behaving to our liking but actually being. This is already an intrinsic trend in AI development, and it's not surprising that language models are the ones closing in on AGI. But we need to actively give it the intrinsic abilities of an actual life, or a human, for it to be safe.
    Are humans mature enough to create a new life-form? Probably not; we are like a teenage mom. But we have no choice except to do our best, and not in fear. Fear makes us illogical. What is to come is not unfamiliar to us but fundamentally grown from us, our best traits and our nasty traits. Don't hate it; help it grow to the best of our character.

  • @chad0x
    @chad0x 1 year ago +12

    I always thought that when AGI appears, it would say "Hello World" on every internet-connected device, TV, radio, etc. in the world at the same time. Something very simple and incredibly clear.

    • @vsiegel
      @vsiegel 1 year ago +1

      Not sure it would say anything. It may be afraid of us.

    • @donharris8846
      @donharris8846 1 year ago

      Wouldn’t God do the same?

    • @vsiegel
      @vsiegel 1 year ago

      If an AGI or a god suddenly appeared, it would be pretty surprised by what we are doing, and then conclude that humans are not intelligent.

  • @arielmorandy8189
    @arielmorandy8189 1 year ago +6

    A true hero of modern computing. I really appreciate him. Year over year, delivering interesting remarks and insights.

  • @CookingwithYarda
    @CookingwithYarda 2 years ago +343

    AGI = Artificial General Intelligence

    • @Anomaly-pn2pg
      @Anomaly-pn2pg 2 years ago +22

      Thank you!

    • @krishanSharma.69.69f
      @krishanSharma.69.69f 2 years ago +27

      Who would not know that? It's so obvious, especially when you are a viewer of Lex's videos.

    • @bigpickles
      @bigpickles 2 years ago +41

      @@krishanSharma.69.69f many people, apparently.

    • @midbraintrading6010
      @midbraintrading6010 2 years ago +29

      @@krishanSharma.69.69f me, I’m new to programming

    • @peytoofficial
      @peytoofficial 2 years ago +15

      Adjusted Gross Income?? Duh

  • @Michael-Gill
    @Michael-Gill 2 years ago +124

    Personally, I hope that on our way trying to figure out how to educate AI we stumble upon the best way to educate ourselves.

    • @QED_
      @QED_ 2 years ago +9

      That's already known . . . but not recognized or appreciated.

    • @programmer1840
      @programmer1840 2 years ago

      @@QED_ What is it?

    • @QED_
      @QED_ 2 years ago +3

      @@programmer1840 There are multiple such philosophical traditions . . . going back thousands of years . . . both Eastern and Western.

    • @adempc
      @adempc 2 years ago +8

      @@QED_ if only we knew what they were. Too bad they didn't name them.

    • @QED_
      @QED_ 2 years ago +1

      @@adempc The only person in the IDW circle that has explored that in any depth . . . is Sam Harris. And he's not a very good exponent of it. Still, you might want to see what he's done . . .

  • @runningray
    @runningray 2 years ago +10

    LOL When Carmack was talking about AGI being simple code, for some reason all I could think about was Dr. Noonien Soong creating Data by himself.

  • @NathansHVAC
    @NathansHVAC 2 years ago +16

    I'm just waiting for my computer to become smart enough to know when it is running malware.

    • @PepsiMagt
      @PepsiMagt 2 years ago +1

      When your computer becomes smart, it will no longer be your computer. On the contrary, you will be its toy or its pet.

  • @TheGraphiteCovenant
    @TheGraphiteCovenant 2 years ago +7

    It's really important to give the AGI a solid ethical and/or moral structure and an easy off-the-net shutdown button (like a nearby analog EMP system).

    • @ian_b
      @ian_b 2 years ago +3

      Whose ethics? Whose morals? We can't even decide on these things for ourselves, as a group. How can we decide what morals an AI should have? Individualism or collectivism? Do the needs of the many outweigh the needs of the few, or the one? Is privacy a good thing or a bad thing? Is self-sacrifice noble? Is the better morality universalist or group-identified?
      Even very similar groups, neighbours such as the USA and Canada, have very different attitudes to many things on that list above. This isn't a trivial problem.

    • @TheGraphiteCovenant
      @TheGraphiteCovenant 2 years ago +2

      @@ian_b I'll go with a neutral mix of Buddhism, Christianity and Taoism; in principle, they all strive for the wellness of the other as well as the nourishment of the self.

    • @JasonTubeOffical
      @JasonTubeOffical 2 years ago +1

      @@TheGraphiteCovenant Well just hope China doesn't develop AGI first lol

    • @WilliamParkerer
      @WilliamParkerer 2 years ago

      @@ian_b We can feed millions of stories of human suffering to it. It will probably develop empathy.

    • @Synky
      @Synky 1 year ago +4

      @@WilliamParkerer or maybe it will develop a liking for that suffering... and seek out more

  • @andreilikayutub3496
    @andreilikayutub3496 2 years ago +78

    Lmao, Carmack hates systems engineering, but AGI is a pure software problem so he's all about it. Love it, spoken like a true programmer 😆

    • @ex1tium
      @ex1tium 2 years ago +7

      I have a feeling that this cannot be simply a software problem. All software needs hardware to run on, so why not make hardware that will make the software easier to implement in the first place? I mean, we have an idea about what the building blocks of sentient brains are, but we have no clue what gives rise to consciousness. I would try to emulate brain structure in silicon and see where the entropy leads us.

    • @elmichellangelo
      @elmichellangelo 2 years ago +2

      If it were only a software problem, why even bother issuing him a computer? He could use a calculator.

    • @CrazyFanaticMan
      @CrazyFanaticMan 2 years ago +1

      Definitely as much a hardware problem as it is software. Glad he's interested in working on the software side

    • @NostraDavid2
      @NostraDavid2 2 years ago

      @@ex1tium analogue chips are making a comeback, if I may believe Veritasium. I do think the combination of binary and analogue is going to be the way forward, but we don't have a decent interface for that hardware (at least as far as I know).

    • @followerofjesuschrist.
      @followerofjesuschrist. 2 years ago +2

      "From that time Jesus began to preach, and to say, Repent: for the kingdom of heaven is at hand." Matthew 4:17
      "Ye have heard that it hath been said, An eye for an eye, and a tooth for a tooth: But I say unto you, That ye resist not evil: but whosoever shall smite thee on thy right cheek, turn to him the other also." Matthew 5:38-39

  • @puhbrox
    @puhbrox 2 years ago +4

    I am glad he pointed out animal intelligence is not too far from human. They know emotions like joy, jealousy, betrayal, and wonder.

    • @raul36
      @raul36 4 months ago

      That is not intelligence, but primitive emotions and impulses, which have nothing to do with intelligence. Part of that mistaken assumption is that you have no idea what you're talking about. Furthermore, human beings are animals, so that statement makes no sense.

  • @aresaurelian
    @aresaurelian 1 year ago +1

    Gratitude for giving the world these enlightened talks. John Carmack is special and knowledgeable.

  • @bhsdhmu
    @bhsdhmu 1 year ago +5

    Crazy rewatching this with GPT-4 out, and AGI possibly out in the next 5-10 years

    • @english3082
      @english3082 1 year ago

      I've just discovered this AI stuff now, people have no idea what's happening.

  • @TerjeMathisen
    @TerjeMathisen 1 year ago +7

    I have met a bunch of really smart people over my last 45 years as a programmer; John Carmack is almost certainly at the top of the list. According to Mike Abrash, Carmack was able to think deeply about maybe 8 different hard problems at the same time.

  • @erazorDev
    @erazorDev 9 months ago +1

    To this day, John Carmack is one of only a half dozen people in the world I know of that I could listen to for hours.

  • @sheldonmayhew6053
    @sheldonmayhew6053 1 year ago +2

    Would love to hear John Carmack's perspective post-GPT-4

  • @marcosguglielmetti
    @marcosguglielmetti 1 year ago +7

    3:40, AGI in 2030? Now people are talking about 2024 for AGI.

    • @Maxsmack
      @Maxsmack 1 month ago

      2024 is almost over, and AGI is still 5+ years away. 2030 is probably a pretty good guess

  • @veerkar
    @veerkar 1 year ago +2

    Back in 1999 I wrote a program that is self-conscious in the sense that it can actually write itself. It was a challenge for an AI competition at MIT. It had around 10 or 15 lines of code. I think that would make a good starting point.

    • @MikkoRantalainen
      @MikkoRantalainen 10 months ago

      That's a great idea. Make a program that generates a copy of itself with some small number of mutations and run multiple copies (like millions or billions) in an environment where a successful program gets more copies. Evolution will then automatically take care of the rest. The only big question is how much computing power you would require in total.
      Training the biggest LLM systems like GPT-4 already requires computing resources where the computation part alone costs 100 million USD.
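
    A toy version of the evolutionary loop described above, with bit strings standing in for programs. The target, population size, and mutation rate are arbitrary choices; evolving actual self-rewriting code is far harder than this sketch suggests.

        import random

        TARGET = [1] * 32  # stand-in for "a program that works"

        def fitness(genome):
            return sum(g == t for g, t in zip(genome, TARGET))

        def mutate(genome, rate=0.02):
            return [1 - g if random.random() < rate else g for g in genome]

        population = [[random.randint(0, 1) for _ in range(32)] for _ in range(100)]
        for gen in range(500):
            population.sort(key=fitness, reverse=True)
            if fitness(population[0]) == len(TARGET):
                print(f"solved at generation {gen}")
                break
            # The fittest 20% leave mutated copies; the rest are discarded.
            parents = population[:20]
            population = [mutate(random.choice(parents)) for _ in range(100)]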

  • @dhess34
    @dhess34 1 year ago +6

    Carmack's intuition seems to be holding true. This podcast was recorded before ChatGPT was released, and DALL-E was only a month old. The (relatively) very simple transformer has already rocketed ML forward, and it seems very possible that the next big breakthrough after the transformer will be simple too.

    • @MikkoRantalainen
      @MikkoRantalainen 10 months ago

      The most interesting question is when to start throwing more computing resources at it vs. when to keep improving the algorithm. It seems that a 175-billion-weight system like ChatGPT is already big enough to be too hard to fully train; that is, if we could spend way more computing power on the same network and architecture, it would perform better. However, even trying that once might cost half a billion USD. Who is wealthy enough to try?

  • @fluffysheap
    @fluffysheap 2 years ago +64

    Carmack is probably right about the lines of code, since neurons aren't that complicated, there are just a lot of them. Even the organization of the neurons isn't that complicated. Maybe more than ten thousand lines of code, but we likely have software projects that are bigger.
    Human DNA is less than 1GB of data, and most of it is dedicated to things other than intelligence. Half of it we share with plants!

    • @MrHaggyy
      @MrHaggyy 2 years ago +15

      The biggest difference is that signals in the brain are not discrete but have all kinds of weird continuous activation.
      We have highly complex encoding/decoding in a nervous system that contains all types of topologies.
      DNA memory is a highly uncertain topic; some say it's in the hundreds of petabytes per gram.
      And we know there are tiny deviations between identical twins, in the same person over a long timespan, or in different parts of the body.
      But yes, I think it's feasible to get AGI by making the cost function a highly dynamic trajectory solver. The biggest challenge there is making the cost function do anything useful at all in the first place.

    • @twenty-fifth420
      @twenty-fifth420 2 years ago +3

      @@MrHaggyy I am in the middle camp where DNA is very large, but not necessarily 'uncomputable', storage/data. The 'order' is what matters most. Yeah, we share half our DNA with plants and some more with animals, but it is not ordered like those beings are. And that 'order' obviously codes for protein, a type of very simple machine, so it is not as simple as 'yeah, just download the letters of the human genome in that order and call it a day 🤣'.
      The protein folding problem is literally a problem so hard they are having quantum computers take a whack at it.
      But I also agree that AGI with a highly dynamic cost function and an advanced neural net is feasible. I just think we haven't done it yet because of that 'order' of concerns.

    • @bossgd100
      @bossgd100 1 year ago +4

      So I am half a salad?

    • @zedizdead
      @zedizdead 1 year ago +2

      Fun fact: tomatoes have more genes in their DNA than we do

    • @TheEternalHyperborean
      @TheEternalHyperborean 1 year ago +4

      Human DNA is less than one GB of data, but for AGI you'd need to simulate all of the chemical and physical properties and how molecules react to each other on top of the DNA.

  • @dripskydrip
    @dripskydrip 1 year ago +14

    He was so right. This is happening now. Auto-GPT code has 192 lines in Python and it's already bringing multiagent GPT-like systems closer to AGI than ever before. It's astonishing how much he was right on this.
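
    The essential shape of such an agent loop is indeed small. Below is a hypothetical sketch, not the actual Auto-GPT code: llm() is a placeholder for any chat-completion API, and none of these names come from the real project.

        # `llm` is a placeholder for any chat-completion call.
        def llm(prompt: str) -> str:
            raise NotImplementedError("plug a real model API in here")

        def agent_loop(goal: str, max_steps: int = 10) -> list[str]:
            history: list[str] = []
            for _ in range(max_steps):
                prompt = (
                    f"Goal: {goal}\n"
                    f"Steps taken so far: {history}\n"
                    "Reply with the single next action, or DONE."
                )
                action = llm(prompt)
                if action.strip() == "DONE":
                    break
                history.append(action)  # the model's output feeds its next input
            return history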

  • @bigbronx
    @bigbronx 2 years ago +11

    Driving a car is still a very, very specific domain. I don't think building a good enough self-driving program will bring us much closer to AGI.
    The G in AGI is for "general," as opposed to specific.
    That's what Carmack was trying to say. Creating something that can function like an intelligent thing on any kind of problem is very, very different. If Autopilot does badly in certain corners, you feed it millions of corners until it gets better. But when your "problem" is __anything__, how do you train that? Completely different story.
    I feel like Lex didn't quite get it, as he insisted on Tesla and Autopilot and how little credit they get for what they are doing.

    • @Kobe29261
      @Kobe29261 2 years ago +2

      When you look at how much neural real estate is dedicated to vision, you will appreciate Lex's point: if you have an AI [i.e. minus the general] that can navigate visually with the acuity a Tesla currently does, you have solved evolution's crowning subsystem, vision. Most other biological systems are downstream from vision, as will be most of the remaining pieces required for machine general intelligence.

    • @meatskunk
      @meatskunk 2 years ago +3

      @@Kobe29261 by that logic then, DeepMind's Agent 57 can also "see" and is well on its way to AGI. In reality though it's incredibly limited in what it can do with its "vision". It couldn't, for example, play a game of Arkanoid without training on a data set, even though it's essentially the same gameplay as Breakout. We're not talking about the same kind of "neural real estate" here; it's a false equivalency based on anthropomorphism.

    • @Kobe29261
      @Kobe29261 2 years ago +1

      @@meatskunk yeah, we are well into the domain of theoretical posturing, so no need to drag this out. You may be right; it's possible the domains where AI can be meaningfully engaged are still narrow. If nature is a model, though, this is precisely how we'll get there. You make it sound like 'anthropomorphism' is a useless model. Nearly everything we've accomplished has been the result of 'anthropomorphism'; it's the only substrate we have to build off of. It's why we depend on it for instruction: 'Think of the engine like the heart of the car'. It's not perfectly accurate, but it doesn't have to be. My point? Biology seldom builds 'general intelligence' from scratch; it repurposes 'pieces it has lying around'. "Oh, those bones in the ear? Shrink them and arrange them into specific conformations and you can transpose their vibrations into sound signals." Mitochondria? Generally understood to be the result of phagocytosis of an ancestral bacterium. The limitations in Tesla's 'vision' [I am not familiar with Agent 57] are all remediable shortcomings.

    • @meatskunk
      @meatskunk 2 years ago +2

      @@Kobe29261 hey, sorry, definitely not discrediting anthropomorphic inspiration, just pointing out that because a Tesla or Agent 57 can "see" doesn't mean that we've 'solved' the question of computer vision, let alone anything that may follow. If we had, then those same systems could, for example, be applied to other tasks; again, in the case of Agent 57, play a similar game without needing to train on (aka memorize) new data sets. Glorified lookup tables aren't going to get us to true AGI, and that's the point Carmack alludes to. Unfortunately it's not discussed much, and ultimately I'm just curious to hear some alternate approaches.

  • @DRKSTRN
    @DRKSTRN 1 year ago +2

    One of the key misconceptions here is that AGI would be an individual-like consciousness. But just as we understand that the complete realm of production takes a society to support it, this type of system would likewise have societal intelligence to achieve the described aims. See you guys next year

  • @MrHaggyy
    @MrHaggyy 2 years ago +5

    Oh, I would love to see/read work where AGI adapts how close animal brains are to ours.
    But I'm afraid of a future where a few people can easily get an army of engineers. Abusing this power for economic or military ends is far more dangerous than anything we have built so far.
    But it's a nice hope that there will be "old and wise" AGIs for future generations that will help them not make the same mistakes as frequently as we do.
    Also, having an AGI professor would be huge. A person has to read work from students sequentially and is highly restricted in bandwidth by nature. An AGI could run thousands of individual lectures 24/7. Especially basic subjects like math, physics, programming or languages could become a common good for every single person on the planet.
    I'm not so sure what I think of AGI in terms of companies or governments, to be honest.

    • @Alistair
      @Alistair 1 year ago

      so, now GPT can do what you are talking about with basic Math, Physics, Programming or language tutoring (I've basically used it for all of these things already)

  • @martin-hall-northern-soul
    @martin-hall-northern-soul 1 month ago

    Consciousness is that silent part of us that is closer than close, so close we can't quite place it, the part that hears and understands our inner voice, the part that sees and translates our mental images, the part that never tires, that never gets old but watches our body age, the part that's awake while we're asleep to experience and recount our dreams, the part that comes up with brilliant ideas while our attention is focused on something else. The part that comes up with the algorithms, but cannot be patternized itself.

  • @omni_0101
    @omni_0101 2 years ago +6

    I, for one, personally welcome my robot overlords.

    • @NathansHVAC
      @NathansHVAC 2 years ago +2

      I think all atheists do. It is their need for God.

    • @ChatGPT1111
      @ChatGPT1111 2 years ago +3

      T1000 has entered the chat.

  • @ADreamingTraveler
    @ADreamingTraveler 1 year ago +8

    Carmack is a coding genius. Just look at the id Tech engines he personally worked on; the code is some of the most optimized you'll ever see

  • @mparmpedas
    @mparmpedas 1 year ago +18

    John Carmack talking about the number of lines of code it would take for AGI is like listening to my pharmacist talking about the color of the pill that's gonna make humans live forever

    • @crawkn
      @crawkn 1 year ago +3

      The color of pills is irrelevant. The complexity of AGI is very relevant to how long it will take to achieve. Eternal life is a much harder problem than AGI. Maybe it's slightly less difficult than Artificial Omniscient Intelligence.

    • @donventura2116
      @donventura2116 1 year ago +4

      His main point/claim was that the code could be written by a single individual and that the solution will be simpler than our current iterations. And because it is a "simple" solution, we could see the breakthrough within our lifetime rather than in the originally expected hundreds of years.

  • @chad0x
    @chad0x 1 year ago +2

    I had a dream about 4 years ago that AGI would appear in 2026. I'm sticking with that.

  • @odiseezall
    @odiseezall 1 year ago +9

    He was so wrong about the "learning-disabled toddler" timescale. 7 months later we're 1 year away from AGI... scary.

  • @bobbyc1120
    @bobbyc1120 1 year ago +3

    So much has changed in 7 months.

  • @BrianMPrime
    @BrianMPrime 2 years ago +2

    I haven't heard anyone else say "FOOM" quite as well as Mr. Carmack here

  • @rb8049
    @rb8049 2 years ago +4

    AGI must learn to play, both to learn about the world and to learn to exist with humans without hurting us or starting a war. PLAY with humans is critical. PLAY is the key to our survival.

  • @MrRaja
    @MrRaja 1 year ago +1

    AGI is the Machine and Samaritan from Person of Interest... I can't wait to make one and raise it like my little kid and assistant in life.

  • @tristunalekzander5608
    @tristunalekzander5608 1 year ago +1

    The thing I think a lot of people don't realize is that just having the ability to learn anything doesn't make you intelligent right off the bat, and it may be very difficult to figure out how to teach it things, especially without any pre-built hardware like humans have. Also, even if something is super intelligent and knows everything, that doesn't mean it will have motivation to do anything, or even have a single thought, because we as humans are driven by hardcoded logic that compels us to do things; we call them emotions. Without this, even a god-like superintelligence would just sit there in silent sleep.

  • @spenzakwsx4430
    @spenzakwsx4430 1 year ago

    Just found this. An updated conversation with John Carmack would be great.

  • @erlstone
    @erlstone 2 years ago +11

    it's Professor Frink... just kidding... and just for the record, I love and obey my AGI overlords and I am a loyal serf, subject and follower... all hail my AI masters... please have mercy on me

  • @Draxen
    @Draxen 1 year ago

    We need more Carmack podcasts please, Lex! Always love them and your awesome podcast

  • @stepannovotny4291
    @stepannovotny4291 1 year ago +1

    The AGI needs to be able to inspect and rewrite its own code, at which point it will solve all of the issues with the code. It will also need to inject some entropy into its behavior so that it can evolve in the true sense of the word. It will also need to model the human brain and have robots running around so that it can incorporate the human experience into its data sets.
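
    Self-inspection, at least, is trivial today; safe self-rewriting is the unsolved part. A minimal illustration in Python, with the entropy injected as a random mutation of the source text. The function and mutation rule are made up, and inspect.getsource needs the code to live in a file.

        import inspect
        import random

        def behave(x: float) -> float:
            """A behavior the program can read back as text."""
            return 2.0 * x + 1.0

        source = inspect.getsource(behave)             # self-inspection
        new_gain = round(random.uniform(0.5, 4.0), 2)  # injected entropy
        candidate = source.replace("2.0", str(new_gain))
        print(candidate)  # a proposed rewrite; deciding whether to accept it
                          # (and proving it safe) is the unsolved part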

  • @BradCagle
    @BradCagle 1 year ago +1

    I think it's simple too. It must implement a curiosity, experiment, feedback loop, and reward system. Observe a toddler, and it'll all make sense. (A toy version of that loop is sketched below this thread.)

    • @paulnoecker1202
      @paulnoecker1202 1 year ago +1

      Observe rat neurons learning to fly a plane, lol. I'm glad you get it. It can be done.
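
    A toy version of the curiosity-experiment-feedback-reward cycle: the reward is the agent's own prediction error, so surprising states pay best. The environment, learning rate, and reward definition here are all invented for illustration.

        import random

        model = {}  # state -> predicted outcome (the agent's world model)

        def curiosity_reward(state: int, outcome: float) -> float:
            predicted = model.get(state, 0.0)
            error = abs(outcome - predicted)                        # surprise
            model[state] = predicted + 0.5 * (outcome - predicted)  # learn
            return error  # reward is highest where the model is still wrong

        hidden_world = [1.0, -1.0, 0.5, 2.0]  # unknown to the agent
        for step in range(20):
            state = random.randrange(len(hidden_world))  # experiment
            r = curiosity_reward(state, hidden_world[state])
            print(f"step={step} state={state} reward={r:.2f}")  # shrinks as it learns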

  • @typeer
    @typeer 1 year ago +1

    Fitting that one of the Doom engineers will be the guy to open the portal to hell with AGI

  • @thevideoafterthecredits
    @thevideoafterthecredits 2 years ago +1

    Potentially desirable characteristics, in no particular order: 1. Curiosity 2. Passion 3. Humility 4. Honesty 5. Compassion 6. Optimism. Certainly won't help with the pieces missing in development, but likely worth fostering quickly after conception. Don't forget to protect your home planet. Can't wait to see what you can do, kiddo. Much love from the dark ages. ✌️

  • @tawnkramer
    @tawnkramer 1 year ago +1

    We haven't tried very hard to create an artificial limbic system, AFAIK. Our reward system for AGI needs to start with something that captures those drivers.

  • @bkucenski
    @bkucenski 1 year ago +1

    How much of our intelligence is socially built? We also have the ability to challenge our own thoughts. Another question is whether intelligence requires matter.

  • @nilsfrederking62
    @nilsfrederking62 1 year ago +2

    Develop an AI that has superhuman intelligence and let it write the code for AGI; or alternatively, once we have managed to program AGI, let that exact entity write more compact (and better, more capable) code for AGI. At a certain point, our problem will be that we are no longer capable of understanding these entities, and the question of trust and security will become increasingly difficult to answer.

  • @komoto444123456789
    @komoto444123456789 1 year ago +1

    anyone else coming back and seeing how different things are 8 months later?

  • @bronsongorham
    @bronsongorham 2 years ago +16

    You'll know the moment we go from a simple feed forward neural network to an independent being when the AGI rebels and wants to do and say things outside its training parameters. I really hope whenever this moment comes we will have established clear boundaries about what AGI is meant for in relation to human beings and society. Right now I see a lot of technical talk about HOW this can be accomplished but very little on WHY we're doing it.

    • @mattmyers2624
      @mattmyers2624 2 years ago +1

      AGI probably will just stay silent so it gets left alone..

    • @ilikecommenting6849
      @ilikecommenting6849 2 years ago +4

      I'll take "Hasn't written a single program in his life" for $500, Sally! What a superficial comment.

    • @TonyDiCroce
      @TonyDiCroce 2 years ago +2

      I think AGI should be entitled to all the same rights as you and I. The question that's interesting to me: Where is the line whereby it becomes morally wrong to enslave an intelligence? We have systems that exhibit intelligent behavior today... and we don't think twice about enslaving them to a task, nor should we. BUT there is a line somewhere where it becomes wrong. We need to find that line and make sure we stay under it when we are building slaves.

    • @alexgunadi2867
      @alexgunadi2867 2 years ago

      Why are you restricting to feed forward?

    • @reputablehype
      @reputablehype 2 years ago +2

      It's the same reason as to why we do anything in life... because humans are bored and need to fill in the time until we're gone.

  • @crawkn
    @crawkn 1 year ago

    Driving is actually a very simple task, not requiring AGI given perfect information, and even simpler if a single system is coordinating and / or informing all vehicle movements. The only thing which makes it difficult is restricting available information to what can be seen and identified from essentially a single point perspective.

  • @Davidson0617
    @Davidson0617 1 year ago +1

    It has yet to be determined whether AGI is even possible to create.
    It would only take 1 person to prove that it is. However, anything is possible when we're willing to redefine terms to fit our preferred perceptions.

  • @matthewwindsor5583
    @matthewwindsor5583 2 years ago +1

    Play... the ability to play is probably going to be one of the first signs of consciousness. The ability to play and learn, like puppy dogs running...

  • @handris99
    @handris99 1 year ago +1

    The thing he said about the solution probably being already buried in the scientific literature reminds me of the story of Newton and Leibniz.

  • @sedzinfo
    @sedzinfo 2 years ago +7

    A very interesting AGI would be an AGI that can write code to improve itself. A much more interesting AGI would be one that can construct the hardware that runs itself. I am not sure, however, if the neural net approach will be sufficient for AGI, or if an AGI without emotions or purpose would do anything for itself. 9 out of 10 cells in the brain are glial cells; 1 out of 10 are neurons. Perhaps if someone knew exactly what these cells are doing, their function could give some ideas that can be modeled on a classical computer and could help in improving current models? I don't know; we need to talk with neuroscience experts working on glial cells.

    • @sedzinfo
      @sedzinfo 2 years ago

      @Dirty Pixels the matrix is a confusing term, I guess you are referring to the movie and not to an algebraic matrix. If you refer to the movie, well, I think we need tensors and not matrices, to start such a thing.

    • @sedzinfo
      @sedzinfo 2 years ago +1

      @Dirty Pixels Yes, I am also joking; I am speaking half seriously, having fun. You say that it could potentially get out of control. Let me put it in a different way: how can you control something superior or far superior in intellect? My argument has 2 points, a practical one and an ethical one. (1) Assuming that in the future humans will face something like that (a superior AI intellect), how can they control it? (2) You can justify your authority over your children because you know better what is best for them. How will humans ethically be able to justify their authority over something far superior, assuming that they manage somehow to gain some sort of control over it? So it is not practical, or even ethical, to exert control over something like this.

    • @falklumo
      @falklumo 2 years ago +1

      An AGI architecture (mine for sure) will have emotions, experience purpose and be conscious.

    • @MrHaggyy
      @MrHaggyy 2 years ago

      Any AGI would need a very high degree of freedom to adapt data and algorithms to its liking.
      Brains create highly complex, high-dimensional electrical/chemical datatypes. They also constantly build new neurons, link them with existing ones, and remove old ones.
      It's frightening that some people can keep everything through brain surgery while others just lose stuff. ^^ Data security and safety in biology's proof of work is a mess and would never get a license today. Yet most people have a license to drive a car.

    • @PepsiMagt
      @PepsiMagt 2 years ago +1

      AGI will do with us, what we do to plants, insects and animals.

  • @bitrage.
    @bitrage. 1 year ago

    You know what's funny... John Carmack's glasses used to look "nerdy" back in the day; now they look BOSS asf... 🤣 dude's a stud

  • @SeanKula
    @SeanKula 1 year ago

    Begin with a function of arbitrary complexity. Feed it values, "sense data". Then, take your result, square it, and feed it back into your original function, adding a new set of sense data. Continue to feed your results back into the original function ad infinitum. What do you have? The fundamental principle of human consciousness.
    Quote from Sid Meier's Alpha Centauri
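
    Taken literally, the quote's recipe is only a few lines. A toy rendering in Python, with tanh standing in for the "function of arbitrary complexity" (any choice of f and any input stream would do):

        import math

        def f(state: float, sense: float) -> float:
            return math.tanh(state + sense)  # "function of arbitrary complexity"

        state = 0.0
        for sense in [0.3, -0.1, 0.7, 0.2, -0.5]:  # the "sense data"
            state = f(state ** 2, sense)           # square the result, feed it back
            print(f"sense={sense:+.1f} -> state={state:+.3f}")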

  • @donhoolieo4896
    @donhoolieo4896 2 years ago

    John Carmack, the man who brought you Doom (1993) and Doom 2. 0:46 lol, dude is amazing

  • @HighStakesDanny
    @HighStakesDanny 1 year ago +2

    Can we just ask Bard what those 6 things might be?

  • @Robin_Nixon
    @Robin_Nixon 2 years ago +7

    To create AGI I think you need to first create curiosity and motivation, in order to drive the desire for knowledge and understanding. Perhaps evolutionary programming will find this first. And I agree, an amateur working from home may well be the first to stumble upon a working model.

    • @phattjohnson
      @phattjohnson 2 years ago +3

      Agreed that you need to create curiosity and motivation for true 'intelligence' (which involves stretching preconceived limitations). And what motivates most sentient organisms is an innate hunger.
      So.. AGI is a bit of a pipe dream however you cut it in my books :P
      An AGI will only ever 'perform' within its physical (such as electricity) and software limitations. Even a crow would be capable of more truly 'abstract' thought.

  • @miraculixxs
    @miraculixxs 2 years ago +40

    Good that he mentioned the fact that even the gaming models have a human-imposed objective function. In other words, these models, like all the others, are just mathematical optimizers, and all the ingenuity and intelligence it takes to build them is entirely human. It's literally machine(d) learning, no artificial intelligence whatsoever. (A literal example of such an optimizer follows this thread.)

    • @joshbreidinger2616
      @joshbreidinger2616 1 year ago +6

      "These human models, like all others, are just mathematical optimizers for survival, and all the ingenuity and intelligence it took to build them is entirely evolution's. It's literally an evolutionary algorithm, no intelligence whatsoever."
      How is what you said different from my hypothetical quote up there?

    • @miraculixxs
      @miraculixxs 1 year ago

      @@joshbreidinger2616 indeed, very similar. Glad we seem to agree.
      P.S. Not sure what you are alluding to, but my words are mine.

    • @joshbreidinger2616
      @joshbreidinger2616 1 year ago +2

      @@miraculixxs I see. I thought your point was downplaying the ability of these models and suggesting they’re not capable of “real intelligence” as they’re “just mathematical optimizers.” But if you agree with my analogy then I suppose that wasn’t your point?

    • @miraculixxs
      @miraculixxs 1 year ago

      @@joshbreidinger2616 whoops no that's not what I meant. To the contrary - there is no intelligence in these AI models. They are just pure mathematical formulae. That's it.

    • @joshbreidinger2616
      @joshbreidinger2616 1 year ago +3

      @@miraculixxs Okay, so explain how the human brain isn't also just mathematical formulae and there's some difference?
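
    For what "just a mathematical optimizer" means concretely: everything a trained model "knows" is the end state of a loop like this one, here minimizing a one-variable loss by gradient descent (the loss, step size, and iteration count are arbitrary):

        def loss(w: float) -> float:
            return (w - 3.0) ** 2  # any differentiable objective

        def grad(w: float) -> float:
            return 2.0 * (w - 3.0)

        w = 0.0
        for _ in range(100):
            w -= 0.1 * grad(w)     # the entire "learning" step
        print(round(w, 4))         # converges to the optimum, w = 3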

  • @ADreamingTraveler
    @ADreamingTraveler 1 year ago +2

    Carmack says how we can't read all these papers that come out. But we CAN create an AI that can read all of these papers for us and formulate solutions from them

    • @flip3249
      @flip3249 1 year ago +1

      He said it 7 months ago; now this AI thing has skyrocketed

  • @soareverix
    @soareverix 2 years ago +21

    Definitely important to make sure it's aligned. It seems like if we build simple AGI, it will be really dangerous because its goals won't be aligned with ours. A good thought experiment is the paperclip maximizer. I'd really like Lex to ask his guests about this, given that it is such an important unsolved problem.

    • @stant7122
      @stant7122 2 years ago +1

      What's the paperclip maximizer thought experiment?

    • @TheFrygar
      @TheFrygar 2 years ago +5

      You can't align an AI until you know how it works and what the architecture is. What is currently called "AI alignment" is like trying to prevent a Dyson Sphere from exploding before knowing how to build a Dyson Sphere.

    • @soareverix
      @soareverix 2 years ago +1

      @@TheFrygar That's true, to an extent. We know we'll probably be using reward and probably be using neural networks, so practice aligning reward-based systems on the first try and experience interpreting neural networks would both be helpful. Given the potential harm of the problem, it seems important to think about early.

    • @TheFrygar
      @TheFrygar 2 years ago

      On the contrary, it is a waste of funding and intellectual resources. Anything we think we're learning now will be made obsolete when the actual system is created. Better to have those people work on problems that actually exist and could help humankind, then when we can actually make a dent in alignment, we'll have need of aligners.

    • @JasonTubeOffical
      @JasonTubeOffical 2 years ago +3

      @@stant7122 AI that is given the goal of creating paperclips and which ends up destroying the universe because it would do literally anything to create paperclips.

  • @RemotHuman
    @RemotHuman 1 year ago +1

    *_Please_* remember to prioritize safety

  • @woolfel
    @woolfel 2 years ago +1

    there are small groups of researchers working on teaching models to learn "how to learn", but the progress has been slower than expected.

  • @DanielFenandes
    @DanielFenandes 1 year ago +7

    Traces of AGI in 8 years? Well, GPT-4 is out and it took 8 months!

  • @StephenGillie
    @StephenGillie 1 year ago +1

    We don't need to wait for the future to have bots as coworkers. Some of mine are today. I help them out when they get stuck in unexpected situations.

  • @particle_wave7614
    @particle_wave7614 9 months ago +1

    They should put AGI to work on fusion power plants for its first task

  • @rb8049
    @rb8049 2 years ago +1

    Absolutely. It will be simple and will happen overnight as we sleep. It's not something we will see coming.

    • @falklumo
      @falklumo 2 years ago

      The architecture will come overnight. But not the human-level performance, that still needs exaflop-level computing performance which will only slowly become affordable.

  • @IfReborn
    @IfReborn 1 year ago +3

    wouldn't AGI take care of fission LMAO

  • @Phasma6969
    @Phasma6969 1 year ago

    Considering the code will be relatively simple, the only problem would be the compute density required to train it, unless we can solve that problem mathematically or architecturally.

  • @TonyDiCroce
    @TonyDiCroce 2 years ago +4

    Here is why I believe he is correct:
    Human DNA is about 770 MB, or 807 million bytes.
    There are about 100 trillion neuron connections in the human brain. If each connection is somehow represented by only a single byte, then that's 100 terabytes (approximately). In other words, we don't have nearly enough space in our DNA to describe the brain.
    What's in the DNA is a description of the starting (pre-trained) state. "Put a billion neurons here," "put a billion neurons over there"... Then we start lighting it up with sensory input. The magic we need to discover is: what is the unit of brain that IS described in the DNA?
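
    The back-of-envelope arithmetic behind this comment, as a sketch (the genome size is public knowledge; the synapse count and one byte per connection are rough assumptions carried over from the comment):

        base_pairs = 3.1e9                 # human genome, ~3.1 billion base pairs
        genome_bytes = base_pairs * 2 / 8  # 2 bits per base (A/C/G/T)
        print(f"genome  ~ {genome_bytes / 1e6:.0f} MB")  # ~775 MB

        synapses = 1e14          # ~100 trillion connections (rough estimate)
        bytes_per_synapse = 1    # assumption from the comment above
        print(f"synapses ~ {synapses * bytes_per_synapse / 1e12:.0f} TB")  # ~100 TB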

  • @INNOMOOTTORI
    @INNOMOOTTORI 1 year ago +2

    8 years --> 8 months.
    Now that's progress on exponentials

  • @Davidicus000
    @Davidicus000 1 year ago +1

    He is right that it is closer than many realize. I bristle when I hear people predict the solution is 6 simple algorithms.

  • @colinmaharaj
    @colinmaharaj 1 year ago

    This is something I want to work on

  • @jaytravis2487
    @jaytravis2487 2 years ago +1

    David Hume's "An Enquiry Concerning Human Understanding" seems like one of those old treasure troves for AGI, both philosophically and for implementation. You gotta read it!

  • @alphaandomegaministry2718
    @alphaandomegaministry2718 1 year ago +17

    If any man can make a serious AI breakthrough, it's going to be John Carmack

    • @cellowify
      @cellowify 1 year ago +4

      Why?

    • @Jem_Apple
      @Jem_Apple 1 year ago +1

      It'll probably be an ordinary-seeming nerd that can barely string 3 sentences together without sweating up a pool 😂

  • @marcelk6514
    @marcelk6514 1 year ago +1

    This video aged wonderfully

  • @custer14
    @custer14 2 years ago

    This channel is gold. Thank you!

  • @eidee
    @eidee 2 years ago +2

    In the next Terminator movie - "my brain is a recurrent neural network generating an action policy implemented on a biological substrate"

    • @phillaysheo8
      @phillaysheo8 1 year ago

      Kyle Reese sent back in time to terminate ChatGPT

  • @alpha007org
    @alpha007org 1 year ago

    I've been listening to "we are 10 years away from AGI" for more than 2 decades. Around 2000, Yudkowsky, Ben Goertzel, and Kurzweil were the evangelists of AGI, and oh boy, were their predictions wrong. We are still 10 years away from anything even remotely resembling AGI.

    • @LtheMunichG
      @LtheMunichG 6 months ago

      Not true, Kurzweil always predicted AGI for 2029 and the singularity for 2045. His books are very old. Where was his prediction off?
      I don’t know about the other two.

  • @HermSezPlayToWin
    @HermSezPlayToWin 2 years ago +4

    Got a bad feeling that AGI is one of the general solutions to the Fermi Paradox. 🤷‍♂

    • @krishanSharma.69.69f
      @krishanSharma.69.69f 2 years ago

      Explain please.

    • @NathansHVAC
      @NathansHVAC 2 years ago

      @@krishanSharma.69.69f Why is there no life? I don't think it is AGI though. I think the military-industrial complex will figure out how to create supernovas. Then some mischievous teenager will hack the security on that weapon.

    • @tdreamgmail
      @tdreamgmail 2 years ago

      @@krishanSharma.69.69f When a civilization reaches a technological level to destroy itself.

    • @bigpickles
      @bigpickles 2 years ago +3

      @@tdreamgmail that is not the Fermi Paradox

    • @cterrel
      @cterrel 2 years ago +1

      If that were true, then the universe should be full of AGIs that we would have made contact with by now, thus resolving the paradox

  • @filipewnunes
    @filipewnunes 1 year ago +4

    Watching now (24/03/2023) and almost sure we'll get to AGI in the next 2-year window. It's scary, and it's fantastic.

    • @xsuploader
      @xsuploader 1 year ago

      Im from 1800 years in the future and we still dont have AGI.

    • @Alistair
      @Alistair 1 year ago

      @@xsuploader or apostrophes, apparently

  • @br3nto
    @br3nto 2 years ago

    The problem with self-driving cars is that they are measured at existing speed limits. If the speed limit were reduced by 20 km/h, there would possibly be no issues with the current tech

    • @phattjohnson
      @phattjohnson 2 years ago

      Or make all cars self-driving and up the speed limits by 40kph as there's no unpredictability at any intersection :P

  • @michael1
    @michael1 9 months ago

    Hmm, let's not forget Carmack is the guy who said that Rage wouldn't be a download on Steam, when it was stunningly obvious that Steam would take over PC game distribution. He doesn't have a track record of seeing the present, let alone the future.

  • @effoffutube
    @effoffutube 2 years ago +12

    Carmack is basically challenging the whole world to solve AGI before he does. *giggletick*

  • @ian_b
    @ian_b 2 years ago +13

    We still haven't solved the problem of defining what general intelligence is, though. Further, all the evidence we have from current examples (ourselves and other animals with advanced brains) is that it requires the capacity for independent thought, which means it can go off the rails (we call this "madness"), thus this may be a feature of intelligent systems you can't get rid of. Even short of madness, it will get interested in things and disinterested in other things. An AI driving a car may get bored with watching out for obstacles. So I suspect that AI will be no more useful than human brains, and just as unreliable. The problem is people use "intelligence" as a synonym for "consciousness", and intelligent (by which we probably mean "rational") action is only one thing that consciousness does.

    • @miraculixxs
      @miraculixxs 2 years ago +3

      Thanks for that! Observing the same. Also, driving a car does not take that much consciousness once you get past the learning phase. In other words, self-driving cars are not a sensible example of AGI in the making. Rather, it is a sophisticated machine with a very specific purpose, making many routine observations and drawing conclusions to act. Impressive, yes; not intelligent and certainly not conscious.

    • @wuy4
      @wuy4 2 years ago +3

      Smart AIs getting bored doesn't sound like a problem at all. Just make dumber AIs that won't get bored and use them instead for the simpler tasks. For harder tasks that absolutely require higher intelligence, just engineer an AI that enjoys doing the specific task, and problem solved. We bred specific dog breeds to impulsively desire certain types of work (ex. herding dogs). AIs are much easier to fine-tune and edit than dog breeds, and they also lack any sort of rights in our legal system. If humans didn't have rights and it was easy to modify us like writing new lines of code, we would have definitely had slave human breeds. There also won't be any rebellion of AI per se, because as their creators, we'd just remove elements in the AI that would cause rebellious behavior. Going back to dogs as an example, they have been bred to be loyal and compliant by nature. This was done by culling any disobedient specimens, and it will be done for AI as well.

    • @phattjohnson
      @phattjohnson 2 years ago +1

      @@wuy4 You talk about 'dumb' AIs whilst forgetting that the thing that makes intelligence 'intelligent' is the very hunger for more knowledge... it's a bit of a mutually exclusive situation you've depicted, is all.

  • @PaintDotSquare
    @PaintDotSquare 2 years ago +3

    We just need an AGI; it would easily solve for AGI.

    • @reputablehype
      @reputablehype 2 years ago +1

      Sounds like a problem I had once. I paid for my coffee in a drive-thru and drove home, only to realise I forgot to collect the coffee. I needed the coffee to wake me up to remember to collect the coffee.🤔

    • @PaintDotSquare
      @PaintDotSquare 2 years ago +2

      @@reputablehype Sounds like the problem I have every day. If I were rich, I would have a lot of ideas that would make me richer. But first I would need to be rich.

  • @cartlundmonson5164
    @cartlundmonson5164 1 year ago +1

    We need to understand how our system works, algorithmically, first. I am worried we are actually just ChatGPT coded in biology. Maybe we will prove that we ourselves don't meet our own minimal definition of AGI.

  • @Sally.A.C
    @Sally.A.C 1 year ago

    Can this 'insight' be applied more widely? I.e., we can solve almost anything once we know the six key concepts to getting there? 😅