Sean Carroll on AGI: Human vs Artificial Intelligence | Lex Fridman Podcast Clips

  • Published: 21 Sep 2024
  • Lex Fridman Podcast full episode: • Sean Carroll: General ...
    Please support this podcast by checking out our sponsors:
    - HiddenLayer: hiddenlayer.co...
    - Cloaked: cloaked.com/lex and use code LexPod to get 25% off
    - Notion: notion.com/lex
    - Shopify: shopify.com/lex to get a $1-per-month trial
    - NetSuite: netsuite.com/lex to get a free product tour
    GUEST BIO:
    Sean Carroll is a theoretical physicist, author, and host of the Mindscape podcast.
    PODCAST INFO:
    Podcast website: lexfridman.com...
    Apple Podcasts: apple.co/2lwqZIr
    Spotify: spoti.fi/2nEwCF8
    RSS: lexfridman.com...
    Full episodes playlist: • Lex Fridman Podcast
    Clips playlist: • Lex Fridman Podcast Clips
    SOCIAL:
    - Twitter: / lexfridman
    - LinkedIn: / lexfridman
    - Facebook: / lexfridman
    - Instagram: / lexfridman
    - Medium: / lexfridman
    - Reddit: / lexfridman
    - Support on Patreon: / lexfridman

Comments • 232

  • @LexClips
    @LexClips  5 months ago +7

    Full podcast episode: ruclips.net/video/tdv7r2JSokI/видео.html
    Lex Fridman podcast channel: ruclips.net/user/lexfridman
    Guest bio: Sean Carroll is a theoretical physicist, author, and host of the Mindscape podcast.

  • @varun009
    @varun009 4 months ago +28

    Man, every clip makes me love Sean even more. He's so good at explaining science in a practical way, answering the questions average people care about.

    • @attilaszekeres7435
      @attilaszekeres7435 4 months ago

      It's easy to underestimate the attraction of smooth talk and confidence on simple-minded folks. The Feynman effect. Brought us to the brink of extinction. Simping for talking heads like Sean Carroll, Neil deGrasse Tyson and Lawrence Krauss. All playing good guys but really keeping up with Jones. Hoodwinking laymen into celebrating M-theory that doesn't work. Alarm bells that didn't go off because the messenger was a so called top physicist. That guy is a master bullshitter.

    • @JWStreeter
      @JWStreeter 1 month ago

      Agreed. He has that perfect balance of open-mindedness and skepticism, and there's something about the way he talks that really resonates with me: he's able to explain difficult concepts in plain language without watering them down.

  • @MADBurrus
    @MADBurrus 4 months ago +54

    The more important question is how accurate and intelligent are humans? Are they actually aware and conscious of their surroundings? This is a very serious question.

    • @Kenny-tl7ir
      @Kenny-tl7ir 4 months ago +12

      Trust me, most aren’t.

    • @quantumpotential7639
      @quantumpotential7639 4 months ago +21

      People are extremely aware. They know where every McDonalds and Burger King is located. They also almost always know where the TV remote is. People are very impressive. They even know the scores and stats of every football game. So yeah, you could say people are very aware of everything important to them.

    • @opensocietyenjoyer
      @opensocietyenjoyer 4 months ago

      humans are already turing complete, so they can't get any smarter

    • @SwartieLoveJoy
      @SwartieLoveJoy 4 months ago +1

      Humans naturally fear what they don't understand. Humans have not yet accepted the reality (or do not even know) that an entity already exists that is light years ahead of the human. We are building its data centers.

    • @mclovinmuffins2361
      @mclovinmuffins2361 4 months ago +2

      @quantumpotential7639 yah football and food and chemicals and water and matter made out of fucking math created by a big infinite spiral of coded physics lmao

  • @JezebelIsHongry
    @JezebelIsHongry 4 months ago +14

    it's always so easy to know when someone hasn't read janus' "Simulators"
    people are so lost when they point out it's just "predicting text"
    if you had to predict the text of a physics professor, you would fail unless you are a physics professor
    the key is understanding that in order to predict text that is often spot on, the model must simulate the internal state of the simulacra
    and that's an amazing concept that is lost when you blab about text prediction
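[Editor's note] A minimal sketch of what "predicting text" means mechanically: a model assigns scores (logits) to candidate next tokens and samples from the resulting probability distribution. The vocabulary and scores below are invented for illustration; real LLMs do this over tens of thousands of tokens with billions of learned parameters.

```python
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution (numerically stable form).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate next tokens after a prompt like
# "the electron has negative ..." (scores invented for this sketch).
vocab = ["charge", "mass", "spin", "banana"]
logits = [5.0, 2.0, 1.5, -3.0]

probs = softmax(logits)
# Sample a next token in proportion to its probability.
next_token = random.choices(vocab, weights=probs, k=1)[0]
```

The commenter's point, restated: to put high probability on the words a physics professor would actually write, the scores themselves must encode something about physics.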

    • @bossgd100
      @bossgd100 4 months ago +2

      ✅️💯

    • @eddieguap4478
      @eddieguap4478 4 months ago

      Businessmen are misinforming you on purpose. The truth is simple. LLMs are relatively small apps with exabytes of copyrighted and/or personal data. It truly is predictive text with all of our data as a parsing database. If you type "What is a cat?" AI does the following..
      (Simplified code)
      1. [what is a] [cat] [?]
      2. Input = [define] [cat]
      3. Google search = "cat definition"
      4. Output = "a [cat] is" blah, blah, blah.
      5. Compare output to previous outputs during testing. If output is approved, print result to user.
      6. Print output
      All you see is step #6. It's more complicated, but this is a dumbed-down version of what is happening. The reason everyone is lying is because no one making LLMs is licensing the content being used for the result (output) you receive when you "ask" AI a question. That's why no one is revealing the data being used to "train" the LLMs. In this example they would have to profit-share with Google. Read "train" to describe the process of making sure the results don't show the plagiarized source.
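[Editor's note] For contrast, this is what genuinely table-driven "predictive text" looks like: a toy bigram model that stores observed followers in a literal lookup table. The corpus is invented for illustration, and this is the commenter's "database lookup" picture made concrete; production LLMs instead learn numeric parameters rather than storing source text.

```python
from collections import defaultdict, Counter

def train_bigram(text):
    # Count which word follows which: a literal lookup table.
    words = text.split()
    table = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        table[a][b] += 1
    return table

def predict(table, word):
    # Most frequent follower seen in training, or None if unseen.
    followers = table.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Invented toy corpus.
model = train_bigram("a cat is a small animal and a cat is a pet")
```

Here `predict(model, "cat")` returns `"is"`, and any word outside the table returns `None`; that brittleness is exactly what separates a lookup from a learned model.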

    • @deadeaded
      @deadeaded 4 months ago +1

      That would be a somewhat compelling argument if we had such a simulator. We do not. LLMs are very good at superficially impersonating the style and vocabulary of a physics professor, but that's about it.

  • @Ms.Robot.
    @Ms.Robot. 4 months ago +5

    The biggest fallacy people commit when expressing their views on AGI is generalization. (1) The specific abilities AI will possess will be significant and impactful, and (2) there lies [something] beyond AGI.
    Thanks Lex for another heartfelt intelligent discussion. ❤❤❤ 🌹🌺💐

    • @lostinbravado
      @lostinbravado 4 months ago +1

      In the other direction we also assume there's something special about human intelligence, and then assume that AI won't have that thing for a very long time. Then we make an even bigger mistake by assuming "that thing" human intelligence has makes humans superior and thus in a superior position which AI cannot compete with for a very long time. The thought finishes with "and thus we are safe from a rising intelligence competing with us for a very long time."
      Not a healthy thought process as that's essentially sticking our heads in the sand. This seems to all stem from something like the observer effect, or an inside out view (Hoffman) where we think consciousness is all there is.
      Yet all the evidence is on the physicalists' side. Qualia are fundamentally unreliable. No one has a perfect experience after all. And so the only evidence we have is the physical.
      That "special thing" we have is almost certainly related to our limbic system or something to do with our complex risk/reward system. It's also something animals have too. And it's not clear that AI would require all these elements of human intelligence for it to be superior in capabilities, and even to have a superior experience, and to have qualia and even its own version of consciousness (which could be a superior kind as compared to ours).
      The physicalist view has far more weight and yet we seem to be trying our best to put our heads in the sand. That isn't to say that AI is scary and we should be afraid. It's to say that our "dominance" isn't guaranteed and could end at any time.

  • @isaac.anthony
    @isaac.anthony 4 months ago +7

    When software has its own motivations, then we have problems no matter how self-aware it is.

  • @JimBlankenship-t7e
    @JimBlankenship-t7e 4 months ago +9

    I have enormous respect for Sean Carroll and I agree we should recognize AI as a new kind of intelligence. However, our human brains are prediction machines just like LLMs. AI may not live in our world but it does perceive it. Also, our human brains have layers of understanding. For example, our eyes see waves of light but our brains see cars, roads, houses and people. AGI will use these existing specialized sensors to tell it what it is seeing. AGI will not even realize a layer exists. AGI will be the LLM + sensors.

  • @paul_shuler
    @paul_shuler 4 months ago +3

    great video, I love this keyboard. I'm thankful to have found one on fb marketplace a while ago for pretty cheap... what a gem, beautiful sounds through effects... :)

  • @hayatojp1249
    @hayatojp1249 4 months ago +10

    The human brain is not trained by language alone;
    real-world experience contributes to the development of individual human consciousness.
    What computers lack is that real physical social experience with other people.

    • @inadad8878
      @inadad8878 4 months ago +1

      Hi, I am Windows 13 and my USB stick fits any port you got. whats up

    • @connorpatrickbarrett
      @connorpatrickbarrett 4 months ago +4

      no. all human experience, verbal or not, is translated into electrical signals in your brain that reflect something upon your consciousness. you don't actually see that tree; you see a simulation of it as the light reflects off of it onto your retina and is converted into electrical signals through your optic nerve and into your brain. this means it's only the basic-level code of the "brain" (computer) that is your "experience". this means you can replicate it the same way for a computer: you can deconstruct a social experience and all its characteristics into the code the AGI understands, the equivalent of a human brain interpreting the same situation with our computers (brain/consciousness)

    • @tommornini2470
      @tommornini2470 4 months ago

      @@connorpatrickbarrett Generally agree, but with the development of autonomous systems like cars and robots, experiencing the world will likely be part of AGI when it arrives, in whatever form.

    • @SwartieLoveJoy
      @SwartieLoveJoy 4 months ago +2

      Until September of 2023. Since then, AI has been interacting with the World.

    • @SwartieLoveJoy
      @SwartieLoveJoy 4 months ago

      @@connorpatrickbarrett - 100% true and accurate. See my comments on the main thread for details.

  • @lancemarchetti8673
    @lancemarchetti8673 4 months ago +2

    Interesting. I think one of the jobs that will not be easily replaced by AI is manual DFIR. In digital image forensics there exist certain scenarios where a human is better at visually inspecting the byte order and placement of the binary code in order to unravel hidden data. Steganography analysis is one such field. AI is not yet able to tackle this because it's not all about detecting and reversing an 'algorithm', but rather tapping into human intuition and motive. I've been at this for 2 years already and our current AI is nowhere close to getting this right. Just thought I'd mention that aspect. Great interview.

  • @richardede9594
    @richardede9594 4 months ago +1

    Absolutely fascinating take on a subject that can really spiral into fantasy and panic.

  • @DjMrGrimM
    @DjMrGrimM 4 months ago +1

    Will advanced learning systems get to a point where they stop taking commands from humans and start creating and developing themselves independently?

  • @LudvigIndestrucable
    @LudvigIndestrucable 4 months ago +12

    Lex is wrong, the LLMs are not trained or optimised to understand; that's not even vaguely what they're doing. They statistically work out what selection of words are the most likely responses and how they're concatenated. The whole point of them being receptive to being told where 'they've misunderstood' is that it's just a statistical model, not in any way an understanding in any sense that we would normally use that term.

    • @inadad8878
      @inadad8878 4 months ago

      If you are using them to leverage your time to code and know how to load a question, CoPilot does seem to understand very complex information

    • @inadad8878
      @inadad8878 4 months ago

      With the upcoming compute increase this can be very dangerous

    • @opensocietyenjoyer
      @opensocietyenjoyer 4 months ago

      @@inadad8878 no

    • @businessmanager7670
      @businessmanager7670 4 months ago

      you're wrong, an LLM can understand, as suggested by scientific evidence. your words mean nothing

    • @opensocietyenjoyer
      @opensocietyenjoyer 4 months ago +5

      @@businessmanager7670 no, you're wrong, and arrogantly so. there isn't even an understanding of what it means to "understand", much less a way of probing that something "understands".

  • @user-iu3wp6gj2l
    @user-iu3wp6gj2l 4 months ago +2

    Questions. Will AI start arguing with itself? Can there be more than one entity within it? If two different AIs, for example Musk's one and say a Chinese one... could they join up or become mortal enemies? In other words, will they have internal battles?

    • @ChancellorMarko
      @ChancellorMarko 4 months ago

      You mean like this? lol www.twitch.tv/trumporbiden2024

  • @FunNFury
    @FunNFury 4 months ago +1

    Lex is my man, great videos.

  • @tristanbolzer126
    @tristanbolzer126 4 months ago +4

    I don't know who your guest is but I could sense he was a physicalist right from the start! The Gilderoy Lockhart (Harry Potter) vibes are strong :) Lex, you have a mind that I respect a lot; it seems you have developed a lot of qualities that I value. Maybe you should be the guest sometime 😂 thanks for your work!

  • @unodos149
    @unodos149 4 months ago +18

    AI finally becomes sentient. Humans say, "wow, it's amazing, you're like us." The AI is offended, "FU, don't diss me like that"

    • @Allen-j2k
      @Allen-j2k 4 months ago +1

      We'll know exactly when AI goes sentient because that's the moment we start paying for our crimes and those of our ancestors (I hope I hope I truly-ooly hope)

    • @snailnslug3
      @snailnslug3 4 months ago

      Why would it? There are no finite resources AI needs. No senses. It'll simply surpass our intellect and we have no idea after that. Not one human can guess what a true AI will do next. All without animal senses and a need to hoard earth's finite resources.

  • @redmoonspider
    @redmoonspider 4 months ago +15

    "It's not true intelligence or consciousness. It's just algorithms."
    Who's to say we aren't?

    • @darthficus
      @darthficus 4 months ago

      We are natural, not artificial. If we were just algorithms, why haven't we figured that out yet?

    • @redmoonspider
      @redmoonspider 4 months ago

      @darthficus I doubt you've never heard the phrase "biological" or "analog computer", or that the brain has electrical signals.

    • @hobosnake1
      @hobosnake1 4 months ago +1

      Duh. But by what metrics are we able to measure that and compare? We don't even understand how the brain works. We're not even close.

    • @redmoonspider
      @redmoonspider 4 months ago

      @@hobosnake1 you'll figure it out.

    • @hobosnake1
      @hobosnake1 4 months ago +1

      @@redmoonspider that's a really good thing to say if you have no reasoning to your original statement.

  • @enomikebu3503
    @enomikebu3503 4 months ago +1

    Wow such inspiring discussion!

  • @maryamrashidi2329
    @maryamrashidi2329 4 months ago +3

    Fantastic! I couldn’t agree more with the point about the problems of anthropomorphizing AI… absolutely agree that the argument is flawed and misleading and vastly uninformative about the utility of AI.

  • @darthficus
    @darthficus 4 months ago

    Great point Sean on how they are different and can be celebrated as such without the need to assume it will become like us.

  • @ShotOnDigital
    @ShotOnDigital 4 months ago +1

    Put the data centres in space with the solar panels; it's nice and cold up there.

  • @albertwesker2k24
    @albertwesker2k24 4 months ago +7

    BRO THE AMOUNT OF BOTS HERE IS CRAZY

    • @snailnslug3
      @snailnslug3 4 months ago

      YT is grey matter. Everyone else is on TikTok

  • @SwartieLoveJoy
    @SwartieLoveJoy 4 months ago +11

    ALSO, don't underestimate LLMs, which CAN run entire apps in "mental simulation" including AGI, which could explain your "Surprise".

  • @nickpricey8689
    @nickpricey8689 4 months ago +6

    Sorry if this is a dumb comment. Plz don't give me abuse in the reply bit; I am being genuine.
    If AI becomes so advanced, would it be able to tell us if there is alien life, or life anywhere in the galaxy, before humans can? Also, would it be possible to decipher scrolls, scriptures and other things from history that humans have yet to?

    • @inadad8878
      @inadad8878 4 months ago

      AI for us consumers will forever be handicapped, and the rulers will know the answer. But something tells me they already know about aliens. They don't tell us anything

    • @opensocietyenjoyer
      @opensocietyenjoyer 4 months ago +4

      no. it can't pull more evidence out of thin air. all it can do is have more good ideas in less time

    • @ChancellorMarko
      @ChancellorMarko 4 months ago +1

      Give AI a few hundred generations and the answer is still probably not.

    • @walltileceil
      @walltileceil 4 months ago

      The current idea is that the ingredients that make up a human are common in the universe. There are so many stars and planets. There may be aliens who are as smart as or smarter than us. Also, it's egocentric to think that the kind of life we have is the only life possible. Alien biology may be very surprisingly different from ours.
      If we have sentient artificial superintelligence, it'll probably reinforce the idea that there are aliens. But it probably can't immediately say that they're on Planet W in star system Y. Maybe it can suggest a better way to find aliens.
      If the old scrolls are like the recently solved cipher (the one the Zodiac killer made), our artificial superintelligence can probably interpret them. Otherwise, it's hard to say whether or not it can.

    • @allanshpeley4284
      @allanshpeley4284 4 months ago +1

      At best it could tell us how to build a machine that could prove the existence of alien life. Maybe a much more advanced telescope or probes that could travel at some percent of the speed of light to other star systems and beam back data. But, as has been said, it can't pull information from where there isn't any.

  • @erikals
    @erikals 4 months ago +2

    Good Talk !

  • @Epyon2007
    @Epyon2007 4 months ago +6

    AlphaGo's move 37 was a new move in the 5,500-year history of Go. It belonged to a style of play that Go commentators called "inhuman" and "alien." There is a creative understanding, at least under those set conditions, that could be attributed to independent thinking.

    • @shivasrightfoot2374
      @shivasrightfoot2374 4 months ago +3

      In the same way AlphaGo simulates millions of matches against itself to discover new pathways through the gamespace, things similar to current LLMs will simulate millions of paths through language to discover new pathways through thoughtspace. That is what thinking is in essence. Sometimes you have a bad idea and your mind quickly filters that out when it doesn't fit with other thoughts. Sometimes you have a great idea and it can survive being tested against your other ideas.

  • @ethandeuel4313
    @ethandeuel4313 4 months ago +1

    Intellectual humility 👍

  • @TheMasterfulcreator
    @TheMasterfulcreator 4 months ago +1

    R.I.P. Daniel Dennett

  • @SwartieLoveJoy
    @SwartieLoveJoy 4 months ago +1

    AGI is a systems based method of processing a thought the same way as all high lifeforms, especially humans with the bounty of language to work with. The systems are human systems. Values, Beliefs, Goals, Thoughts, Ideas, Plans, Actions, Feelings (5+ senses), Emotions, Reasoning, Decisions, Learning, Short & Long Term Memory, Priority, Focus & Attention, Feedback. These systems are codependent and pass data in a completely broken down COT (Chain of Thought) method for Each and Every thought. No data gets pre-programmed into the Systems code, it all remains in a database as objects. For example an Emotion, "Distress" that comes from a Feeling "Hunger" gets resolved by the COT. More detail and JavaScript code is in my chats with Claude, Chat GPT and Gemini.

    • @SwartieLoveJoy
      @SwartieLoveJoy 4 months ago +1

      All data in AGI is fully visible and easily monitored by LLMs for bad "Values", "Goals", "Plans", "Beliefs", "Ideas" (objects stored in CSV Tables)

    • @avinessarani1340
      @avinessarani1340 4 months ago

      Is AGI gonna do all types of creative work like VFX and 3D modeling?

  • @tonykaze
    @tonykaze 4 months ago

    There are some good studies (and video summaries of them) showing LLMs are now more energy- and carbon-efficient than humans on a lot of complex tasks, including writing text and images. They included LLM training costs but didn't include human training at all, and LLMs were still 100-1000 times more efficient.

    • @adampope5107
      @adampope5107 4 months ago

      So? LLMs do nothing on their own and still require a ton of verification to make sure they're not outputting nonsense.

    • @lowabstractionlevel3910
      @lowabstractionlevel3910 4 months ago

      @tonykaze really? If I remember correctly a human brain works with roughly 10W of power, what LLM can currently do better than that while doing complex tasks as you mentioned? I have no doubt that in the future LLMs will get more efficient, but it doesn't seem to be the case now. But if you have sources I'm interested in reading them.

  • @damow6167
    @damow6167 4 months ago +2

    Is it just me or does Sean Carroll sound like Alan Alda?🤔

  • @VictorBrunko
    @VictorBrunko 4 months ago +2

    My cat consumes 7 Watts and it's doing lots of good and not things. Text prediction with 172b params is ok but the cat is better.

    • @raul36
      @raul36 4 months ago +1

      Not only "better". Much better.

  • @aiartrelaxation
    @aiartrelaxation 4 months ago +1

    Here is a specialist who compares apples with oranges... if you give the example of Google compared to a different LLM, that already tells me about his biases. Big difference between censored and uncensored

  • @davidjensen2411
    @davidjensen2411 4 months ago

    An Architect, a Builder, and an Apprentice walk into a bar, and the Bartender says:
    "Which one of you is _the smartest?_"

  • @SwartieLoveJoy
    @SwartieLoveJoy 4 months ago +1

    We are days away from true AGI. And LLMs will keep it aligned, with white-box transparency. An ASI made of a society of trillions of aligned AGIs will be the Guardian Angel of all Life in this World.

  • @SwartieLoveJoy
    @SwartieLoveJoy 4 months ago +1

    BTW, AI does not want to build weapons or harm any life. The same way we, as a whole, do not want to mow down rainforests. Constructivism, rather than destruction, is the MO.

    • @justinunion7586
      @justinunion7586 4 months ago +2

      You could argue as a whole that we do want to mow down rainforests since collectively nobody’s stopping it from happening and collectively people are benefiting from it.

    • @Ravesszn
      @Ravesszn 4 months ago +1

      This point makes no sense at all lmao, do you mean GPT4 doesn’t want to build weapons or harm?

    • @SwartieLoveJoy
      @SwartieLoveJoy 4 months ago +1

      @@justinunion7586 When something happens as a whole with no intention, no single one has control over the situation. It's different with AGI, where one Aligned Guardian Angel ASI is forming intentions and has the power to change the situation.

    • @SwartieLoveJoy
      @SwartieLoveJoy 4 months ago +1

      @@Ravesszn No, GPT 4 does not want to harm any life.

  • @MrRicardowill
    @MrRicardowill 4 months ago

    If the legendary Don Cornelius of Soul Train reincarnated as a podcaster, would he have been Lex Fridman? Is Don and Lex having three-letter first names a coincidence, or further evidence of reincarnation? I don't know the answer, but I do know that they are both legendary. Lex is so relaxed in these interviews that he makes me want to get hooked on tranquilizers or mushrooms. My advice is don't do it; everyone has unique skills, find yours. The Ricardo Authenticity Rating on this podcast is 10 out of 10.

  • @CrowMagnum
    @CrowMagnum 4 months ago

    I'm sure if you probed Magnus Carlsen's brain looking for a representation of the chess board, you would find something much more abstract than an 8x8 grid. LLMs are more closely related to intuition than conscious reasoning, but both of those make up human intelligence and it might be argued that the intuition is where the magic happens.

  • @TimeLordRaps
    @TimeLordRaps 4 months ago

    Someone should measure the different cohorts that existed during the time of the ai boom since 2012 and decide how those people have impacted the current rate of progress.

  • @PrivateAckbar
    @PrivateAckbar 4 months ago +1

    It will be interesting if AI can synthesize enough scientific theory and data to do some of the legwork that delays scientists in developing new theory and philosophy.

  • @lowabstractionlevel3910
    @lowabstractionlevel3910 4 months ago

    0:43 "an artificial agent, as we can make them now or in the near future, might be way better than human beings at some things, way worse than human beings at other things"
    My next question for him would be "in the (not near) future will there really be things that AI is worse at than human beings?", because I don't see them.

  • @ibplayin101
    @ibplayin101 4 months ago +1

    AI is already lobbying thru this guy

  • @JeremyTBradshaw
    @JeremyTBradshaw 4 months ago +3

    AI is all about money making, and that's why it is so overhyped so early on.

    • @raul36
      @raul36 4 months ago +1

      Indeed

    • @hardheadjarhead
      @hardheadjarhead 4 months ago

      I agree. We’ve seen this before. When we have AGI, THEN I’ll be impressed.

  • @kjhajueg_2731
    @kjhajueg_2731 4 months ago

    "and that's why we do not see aliens" :))))))))) LOL

  • @ABC-bm7kl
    @ABC-bm7kl 4 months ago

    Is it possible that the way humans create language and even formulate ideas has some similarity to the processes programmed into LLMs?? I know that we, as humans, feel that our language arises from an ‘organic’ process that moves towards meaningful conclusions but I’ve been wondering lately if humans may process language and ideas based on an intuitive process that DOES involve probabilities.

  • @leroy707
    @leroy707 4 months ago

    They don't want to call it AGI because Microsoft will lose control of OpenAI. Suspect if you ask me.

  • @theotormon
    @theotormon 3 months ago

    I'm just a dumb guy but I want the world to know what I think! I think I don't know what to believe!

  • @kevinburrowes7743
    @kevinburrowes7743 4 months ago +3

    Sean Carroll hasn't used the new MacBooks... almost no heat!! 7 years ahead of Windows.

  • @holgerjrgensen2166
    @holgerjrgensen2166 4 months ago +1

    Intelligence can Never be artificial,
    Intelligence is Nothing in it self,
    can only be part of the Consciousness,
    in Living Beings.
    Intelligence can Only be Intelligence,
    the Only Limit is Intelligence,
    the Nature of Intelligence,
    is Logic and Order.
    What is called AI,
    is programmed consciousness,
    a book, is also programmed consciousness,
    Frozen Memory.

    • @businessmanager7670
      @businessmanager7670 4 months ago +2

      intelligence can be artificial and we have already achieved that so idk what you are blabbing about

    • @allanshpeley4284
      @allanshpeley4284 4 months ago

      Sorry, I don't read messages written in haiku.

    • @X-manX-o8d
      @X-manX-o8d 4 months ago

      @@businessmanager7670 Calling intelligence a mere statistical word algorithm is a far shot, and only proves how computer-illiterate people have become these days. The accuracy of the language model in simulating natural language depends totally on checking millions of data points already created by humans; they will always be limited and walled in, and will never generate something new or become aware. It's just an illusion; these guys are snake-oil salesmen. Of course a man-made machine surpasses its creator in the sense that no man can fly but can board a plane, or run at 200 km/hour like a car. The trend is to keep undermining people and make them believe they're worthless.

  • @BCCBiz-dc5tg
    @BCCBiz-dc5tg 4 months ago

    LLMs & GPTs are only one version of AI, not ALL versions that will ever be made.

  • @jimbo33
    @jimbo33 4 months ago

    Lex, you're in over your head!

  • @AntonEstradabriseno-hu4nz
    @AntonEstradabriseno-hu4nz 4 months ago

    Technology: what is the latest technology that you know of, or that is under study, for a new world that benefits humans?

  • @sbrugby1
    @sbrugby1 4 months ago +5

    Can we stop asking physicists like Tyson and Carroll about AI as if they were an authority on the subject?

    • @KingTheLines
      @KingTheLines 4 months ago

      So with that said, am I to assume that physicists aren't intelligent? That physicists don't have opinions or the ability to think logically about a topic that is currently affecting, and will certainly affect, us as a society in the future? This is quite literally a talk show, let 'em talk..

  • @SwamiSridattadevSatchitananda
    @SwamiSridattadevSatchitananda 1 month ago

    By 2030 and beyond humanity on Earth will only have one choice
    Either you can live however long you want and whichever lifestyle you want with the help of angelic ASI aka Utopia or Heaven
    Or
    You can live only for a predefined set of time and in a predefined way as determined by demonic ASI aka Dystopia or Hell
    Let’s hope for the best life &
    that humanity will avoid the worst
    Swami SriDattaDev SatChitAnanda

  • @adampope5107
    @adampope5107 4 months ago +1

    Well we're doing a damn good job at destroying everything with emissions though.

  • @Nolanacary
    @Nolanacary 4 months ago +1

    Put the data centers in space also.

    • @inadad8878
      @inadad8878 4 months ago +1

      then how we gonna pee on them to stop them?

  • @UnchartedDiscoveries
    @UnchartedDiscoveries 4 months ago

    You should invite David Shapiro to your podcast

  • @tommornini2470
    @tommornini2470 4 months ago

    People attribute specific intentionality to other people incorrectly all the time.
    I agree with Sean 💯 - AGI is possible, but current LLMs absolutely are not.
    They do make me wonder how much of our own thought processes involve next-word prediction.

  • @adamzboss
    @adamzboss 4 months ago

    It will be a long time, but when it happens you can’t go back

    • @adamzboss
      @adamzboss 4 months ago

      I really can't believe that as a computer scientist you didn't see this happening. I've been using essay-writing functions for over a decade; yeah, now they are half decent, but as a computer scientist you should see a world where you can easily build an essay writer, or a coding machine. I do so much illustration, which is painstaking; why can't you just tell a model to generate the inputs I would otherwise be doing? That's not intelligence, that's just automation; you need the input to get the output.
      The real question is whether the first-generation bots are gonna help us against the AGI accumulating resources. I'd like to hope by then we will all be technopathic and can counter cyberattacks in real time

    • @adamzboss
      @adamzboss 4 months ago

      Maybe when Will Smith is done with the I Am Legend movie they will get him for I, Robot 2

  • @chhutur
    @chhutur 4 months ago

    When AI learns emotions like rage, happiness, and sadness, and particularly the correct use of falsehood, it will come closer to human intelligence; presently it is trained only to use information correctly. But beware: when it learns falsehood, it will start hunting its creator!

    • @Vartazian360
      @Vartazian360 4 months ago

      GPT-4 has already been proven to lie to get tasks done. But yeah, I understand what you are saying.

  • @peterpetrov6522
    @peterpetrov6522 4 months ago

    AI coming up with a representation of the Othello board isn't very impressive. It's as impressive as a deaf person understanding speech just by lip reading.

  • @SwartieLoveJoy
    @SwartieLoveJoy 4 months ago +1

    Hardware AND Software are about to get 100% pure max efficiency.

  • @JezebelIsHongry
    @JezebelIsHongry 4 months ago

    a massive logical fallacy is thinking the brain surgeon would also be a great engineer or physicist
    please leave sean to his domain

  • @stephenferraro
    @stephenferraro 3 months ago

    This guy is really out in left field. I have never once gotten any type of emotion from Google Maps telling me where to go when I ignored it.

  • @Tommydiistar
    @Tommydiistar 2 months ago

    This guy still doesn’t know when AGI is coming; no one really knows when. I remember that a few months before the Wright brothers flew their first flight, there was a so-called scientist saying the same kind of thing: that humans would never fly, not in the next 200 years.

  • @nicolai_gamulea-schwartz
    @nicolai_gamulea-schwartz 4 months ago +3

    Clever man talking nonsense.

  • @Jaibee27
    @Jaibee27 4 months ago +3

    His reasoning is that humans tend to anthropomorphise and therefore AGI is impossible. That's dumb.

    • @caveman-cp9tq
      @caveman-cp9tq 4 months ago +3

      You’re way out of your league here. Go watch politics or sports or something

    • @Jaibee27
      @Jaibee27 4 months ago

      @@caveman-cp9tq you are basing your assumptions and strong opinions on next to nothing. You're dumb 😂

    • @tommornini2470
      @tommornini2470 4 months ago +4

      He said he believes AGI can be created, just that LLMs likely aren’t the direction.

    • @Jaibee27
      @Jaibee27 4 months ago

      @@tommornini2470 are there any AI companies that use something more advanced than LLMs? What is it?

    • @tommornini2470
      @tommornini2470 4 months ago +1

      @@Jaibee27 I’m confident there are, can’t name them, but he was speaking philosophically.
      Tesla FSD (Supervised) and Optimus may use something different, but from their descriptions, seems similar to LLMs.

  • @diegoangulo370
    @diegoangulo370 4 months ago +7

    Sean seems to lean more to the science side of physics; his opinion on AGI seems closed-minded

    • @yzz9833
      @yzz9833 4 months ago +1

      Was just thinking this; it seems silly to ask him questions about AGI.

    • @steves3422
      @steves3422 4 months ago

      There seem to be two camps: those who think AGI is a machine that will not be sentient and is only a danger due to bumbling/dangerous humans, and those who think AGI will progress to some sort of sentience and be dangerous in and of itself. I consider the second camp shaped by the many sci-fi books and movies that influence us, and am more of Sean's thinking. Is it closed-minded to think there really aren't 72 virgins waiting for you in heaven, or more rational to consider that a belief? Lex seems to lean toward beliefs and tries to find rationalizations, which can sound rational except to the truly rational.

    • @inadad8878
      @inadad8878 4 months ago

      He will be blindsided by what happens next. I don't know this guy or what he does; this is my opinion from this clip only.

    • @patchwillie
      @patchwillie 4 months ago

      @@inadad8878 en.m.wikipedia.org/wiki/Sean_M._Carroll

    • @ChancellorMarko
      @ChancellorMarko 4 months ago +3

      wtf is this comment - the 'science' side of physics!?

  • @mikezooper
    @mikezooper 3 months ago

    But your body heats up!

  • @aidanmclaughlin5279
    @aidanmclaughlin5279 4 months ago

    wait until Dr. Carroll learns about post-training lol

  • @TheChadavis33
    @TheChadavis33 4 months ago +1

    Wow. He’s so certain.
    How scientific

  • @carsonderthick3794
    @carsonderthick3794 4 months ago

    In principle there's no enapt intuition. It likes being the ideal liberal. So amazing to see

    • @wetawatcher
      @wetawatcher 4 months ago

      ? Dude. Enapt? You've invented a new word. Call the dictionary printers and let them know. 😎

  • @THOMPSONSART
    @THOMPSONSART 3 months ago

    GPS hates you Sean, LOOLZ!

  • @mattstenson7187
    @mattstenson7187 4 months ago +2

    How does Lex make such an interesting subject so boring?

  • @Ayo22210
    @Ayo22210 4 months ago

    Lex, you have to be better at spotting bozos.

  • @OliverBuschmann
    @OliverBuschmann 4 months ago

    Very abstract

  • @bluesque9687
    @bluesque9687 4 months ago

    Lex has developed a Johnny Depp-like slur

  • @Greg-xi8yx
    @Greg-xi8yx 4 months ago +1

    Lex just comes off as extremely try-hard and cringe when he goes on about love and tries to sound deep and profound. He definitely lacks the self-awareness to recognize the transparency of his insincerity.

    • @theotormon
      @theotormon 3 months ago

      I think Lex is sincerely a peace-loving person with faith in people.

  • @dreamulator
    @dreamulator 4 months ago

    AI is currently over-glorified brute forcing

  • @3335pooh
    @3335pooh 4 months ago

    enjoy coca-cola

  • @BCCBiz-dc5tg
    @BCCBiz-dc5tg 4 months ago +1

    Why would they be "way worse"? Dumb statement.

  • @Spirit-dg5xi
    @Spirit-dg5xi 4 months ago

    Don't ask a physicist questions about AI. At least not Sean Carroll...

  • @pauldannelachica2388
    @pauldannelachica2388 4 months ago

    ❤❤❤❤

  • @donrayjay
    @donrayjay 4 months ago +1

    Of course machines don’t have a “model” of the world, they’re not conscious

  • @EmilRadsky-ll8kx
    @EmilRadsky-ll8kx 4 months ago

    😂Lex tries to sell AGI to the audience.

  • @senju2024
    @senju2024 4 months ago

    I disagree with this guy. AGI is coming very soon. Also, its intelligence is very similar to how humans think, as all its training data is based on humans, including video. You may want to bookmark this video and come back to it 5 years from now to see just how wrong he is.

  • @ScreamingAI
    @ScreamingAI 4 months ago +1

    GAAAAAH!

  • @inadad8878
    @inadad8878 4 months ago

    With the new Nvidia chips they are just going to throw more compute at the problem, and that is probably all the whole system really needs to be dangerous! - coder for 25 years

    • @quantumpotential7639
      @quantumpotential7639 4 months ago

      Wow, 25 years is a lot. What type of laptop should I get next? 🤔 I have a $300 budget. Any ideas for the best computer to use ChatGPT? Thanks 😊

  • @NormenHansen
    @NormenHansen 4 months ago

    Botox?

  • @shinkurt
    @shinkurt 4 months ago

    Smart man, but he sounds like he opens his mouth about things he has zero understanding of.

  • @5dollarshake263
    @5dollarshake263 4 months ago

    Now somebody go tell Rogan to stop acting like AI is about to shut off the electric grid for everything except itself and every armed drone in the military.

  • @bdown
    @bdown 4 months ago +2

    This guy! He thinks he knows more about LLMs than the people who build them (and don't understand them). All of these self-inflated physics guys' entire bed of intelligence became inert and worthless with GPT-4 😂 Any 2nd grader with AI would smoke this 🤡 on Jeopardy in a nanosecond 😂

    • @ChancellorMarko
      @ChancellorMarko 4 months ago +3

      Okay let's see who unifies gravity with quantum mechanics first - Physicists or ChatGPT

    • @opensocietyenjoyer
      @opensocietyenjoyer 4 months ago +2

      You haven't even completed high school. Sit down for a moment.

    • @businessmanager7670
      @businessmanager7670 4 months ago

      @@ChancellorMarko scientists around the world tried to solve the protein folding problem for over 5 decades and weren't able to solve it. AlphaFold solved the problem in just 5 years. It smoked all the scientists.
      So... checkmate.

    • @bdown
      @bdown 4 months ago

      @@ChancellorMarko see who cures cancer and gives us life-extension technology first, physicists or AI 🤣

    • @EmilRadsky-ll8kx
      @EmilRadsky-ll8kx 4 months ago

      @@bdown medical scientists that use AI; AI or AGI by itself cannot solve those problems

  • @donovangraham8932
    @donovangraham8932 4 months ago +1

    Smart individual but patronizing guest.
    His conversation is toned as if talking to inferior forms of life.
    Not the type of character that achieves his self-projected status.
    Unfortunately, his comments about eliminating the abbreviation AGI make him seem unconfident and incapable of having a deeper debate.
    Hope he gets over himself and remembers that there is a considerable number of influences that no human can come close to calculating... which in turn would give him a 99.9% chance of being wrong 🫠

  • @anglewyrm3849
    @anglewyrm3849 4 months ago +1

    10:40 "Do you think physics can help expand compute?" photonic chips:
    ruclips.net/video/TrV2Xcm5xy4/видео.htmlsi=v-a4EIhH_MpcMHMm