Sam Altman on GPT-5 | Lex Fridman Podcast

  • Published: 28 May 2024
  • Lex Fridman Podcast full episode: • Sam Altman: OpenAI, GP...
    Please support this podcast by checking out our sponsors:
    - Cloaked: cloaked.com/lex and use code LexPod to get 25% off
    - Shopify: shopify.com/lex to get $1 per month trial
    - BetterHelp: betterhelp.com/lex to get 10% off
    - ExpressVPN: expressvpn.com/lexpod to get 3 months free
    GUEST BIO:
    Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and many other state-of-the-art AI technologies.
    PODCAST INFO:
    Podcast website: lexfridman.com/podcast
    Apple Podcasts: apple.co/2lwqZIr
    Spotify: spoti.fi/2nEwCF8
    RSS: lexfridman.com/feed/podcast/
    Full episodes playlist: • Lex Fridman Podcast
    Clips playlist: • Lex Fridman Podcast Clips
    SOCIAL:
    - Twitter: / lexfridman
    - LinkedIn: / lexfridman
    - Facebook: / lexfridman
    - Instagram: / lexfridman
    - Medium: / lexfridman
    - Reddit: / lexfridman
    - Support on Patreon: / lexfridman
  • Science

Comments • 144

  • @LexClips
    @LexClips  2 months ago +4

    Full podcast episode: ruclips.net/video/jvqFAi7vkBc/видео.html
    Lex Fridman podcast channel: ruclips.net/user/lexfridman
    Guest bio: Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and many other state-of-the-art AI technologies.

    • @imdavidbaby
      @imdavidbaby 2 months ago

      Lex, you really can't see the signs? 👁

  • @Idlewyld
    @Idlewyld 2 months ago +54

    I remember having an argument in 2001 with my friends about whether computers would ever have a terabyte.

    • @astericks53
      @astericks53 1 month ago +3

      “You know these megabyte things? There must be like, a thousand of them worth of data total that has crossed through X server. Imagine if there were a thousand thousand of them? Nah, that's far out man”

    • @GursimarSinghMiglani-ym7nu
      @GursimarSinghMiglani-ym7nu 1 month ago +1

      🧢

    • @elmo4672
      @elmo4672 1 month ago +1

      I remember having an argument in 1002 with my friends about whether it would be possible to drive a flying machine through a building.

  • @TB-ni4ur
    @TB-ni4ur 2 months ago +102

    I asked GPT-4 to model a baseball dropped from outer space at a particular orbit and tell me the terminal velocity and how long it would take to reach sea level. It couldn't figure it out at all, but if I coached it along each step, so to speak, giving it prompts on which methods and assumptions to use, it was able to get impressively close to a symbolic solution. I believe this is what Altman is referring to about its limitation in figuring out the many steps necessary to reach a particular solution. It has the knowledge, but the model is not really able to put it all together when it's not a well-established process.
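The final step of the calculation the commenter coached GPT-4 through can be sketched directly. This is a minimal estimate, not the commenter's actual prompt or GPT-4's output; all parameter values (mass, diameter, drag coefficient, air density) are rough assumed figures for a standard baseball:

```python
import math

# Rough assumed parameters for a baseball falling through sea-level air
m = 0.145       # mass in kg
d = 0.074       # diameter in m
cd = 0.3        # drag coefficient (assumed constant; varies with speed in reality)
rho = 1.225     # air density at sea level in kg/m^3
g = 9.81        # gravitational acceleration in m/s^2

area = math.pi * (d / 2) ** 2   # cross-sectional area

# At terminal velocity, weight balances drag: m*g = 0.5 * rho * cd * A * v^2
v_t = math.sqrt(2 * m * g / (rho * cd * area))
print(f"terminal velocity ~ {v_t:.0f} m/s")   # about 42 m/s with these numbers
```

The point of the anecdote holds: each step (pick a drag model, pick parameters, balance forces, solve) is trivial on its own, and the difficulty was in getting the model to chain them together unprompted.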

    • @steventolerhan5110
      @steventolerhan5110 2 months ago +5

      Yes, this is the true use case for ChatGPT at the moment. Humans need to make the plan and conduct the logical reasoning, breaking the task into smaller bits; then ChatGPT will do the grunt work of each subtask.
      This has helped my coding workflow immensely. It can't understand my codebase or my project requirements super accurately, but if I can turn my requirements into smaller steps, ChatGPT can handle it no problem.

    • @endgamefond
      @endgamefond 2 months ago

      Agree. In my case, I ended up prompting it many times to get my desired result.

    • @AssuranceOFSalvation1981
      @AssuranceOFSalvation1981 2 months ago

      Oh yeah I asked gtp to give me a reality based reactionary story about Joe Biden at the state of the Union address mid speech stares into the crowd and suddenly Hanky the christmas poo emerges from the president's mouth saying howdy Ho!!!! Then proceeded to jump into the laps of crowd goers splattering poopy on everyone...then he jumps back into the president's mouth and the President swallowed him followed by saying yummy whilst staring into the crowd maniacally....

    • @warrenjoseph76
      @warrenjoseph76 2 months ago +5

      I tried using it for some data analysis. Sales stuff and a somewhat large data set, but relatively basic, and the info was exported from our POS system, so it was pretty organized. It kept "forgetting" previous instructions, so it was really difficult to build on it step by step the way you've described. Also, when I asked it to show the specifics of the data it used for spitting out answers to some of my queries, there were so many blatant errors. Then it would apologise for including or excluding information I had already set parameters for. Like a kid fudging homework and then acting all embarrassed when asked to show their work. I gave up. It really was not good.

    • @endgamefond
      @endgamefond 2 months ago

      @@warrenjoseph76 true. I also use it for data analysis sometimes. ChatGPT 3.5 is trash for that. It generates many errors in Python scripts. I find Copilot is better (though it still has some errors). To really get good output, it needs a lot of prompting, and you need to show it where its code throws errors. Drawback: it takes more time.

  • @scubagrant6045
    @scubagrant6045 2 months ago +209

    Would people agree that the title is misleading? They didn't talk about GPT-5. They only talked about GPT-4.

    • @QuickMadeUpName
      @QuickMadeUpName 2 months ago +19

      Just like Sam Altman, dude is misleading. Never really says anything specific or clear, always speaks in very ambiguous terms. Very shady.

    • @Luca-tw9fk
      @Luca-tw9fk 2 months ago

      @@QuickMadeUpName idiot

    • @joebriggs5781
      @joebriggs5781 2 months ago +2

      @@QuickMadeUpName he didn’t label the podcast clip title though, that was Lex’s guy

    • @QuickMadeUpName
      @QuickMadeUpName 2 months ago +2

      @joebriggs5781 oh, I was talking specifically about Sam here. I do like Lex, though the title was a little misleading. Sam, on the other hand, is always misleading. I never feel like he is being honest; something about him. Musk says the same thing.

    • @clastrag1
      @clastrag1 2 months ago +2

      Click bait

  • @ChrisCapelloMD
    @ChrisCapelloMD 1 month ago +2

    As a newly graduated physician, I can say GPT-4 is invaluable in the studying process. Almost no friction or wasted time researching why certain things are correct or the underlying pathophysiology.
    It revolutionized how easy it is to consume medical knowledge.

  • @vibrationalcurrency
    @vibrationalcurrency 2 months ago +23

    I'm still impressed by GPT-3!

  • @Bofum69
    @Bofum69 2 months ago +17

    This Sam guy definitely has something bothering him throughout this whole interview

    • @kaynewest6167
      @kaynewest6167 2 months ago +1

      He's talking to a biased actor playing both sides. Lex loves power and will choose anyone in current power. Sam doesn't give a fuck about this guy.

  • @nevokrien95
    @nevokrien95 2 months ago +3

    I like questioning GPT on coding topics. It gives me better answers than Wikipedia.
    My go-to thing is to ask for real-world examples of X. And then if it can quote them, I assume the events are real (every time I check, they are).
    As I understand it, a lot of the errors happen in cases where the answer requires thinking and not just memorising.
    But most of what I wanna read about requires compiling and memorising sources, which makes it REALLY good at it.

    • @michaelwells6075
      @michaelwells6075 2 months ago

      "Give real world examples of X" is an excellent tip. Thank you!

  • @ReflectionOcean
    @ReflectionOcean 2 months ago +5

    00:02:07 Use GPT as a brainstorming partner for creative problem-solving.
    00:02:47 Explore how GPT can assist in breaking down tasks into multiple steps and collaborating iteratively with a human.
    00:05:09 Reflect on the exponential curve of technology advancement to anticipate future improvements beyond GPT 4.
    00:08:00 Consider integrating GPT into your workflow as a starting point for various knowledge work tasks.
    00:09:10 Implement fact-checking procedures after using GPT to ensure accuracy and avoid misinformation.

  • @HalfLapJoint
    @HalfLapJoint 2 months ago +2

    I'd never heard of ChatGPT 3 or 3.5. ChatGPT 4 was a pivotal moment for me and always will be.

  • @chadkndr
    @chadkndr 2 months ago +12

    He knows something that we don't know. He speaks very carefully but confidently about the near future.

    • @keyser021
      @keyser021 2 months ago

      Charlatan, you are easily fooled.

    • @michaelwells6075
      @michaelwells6075 2 months ago +1

      We all know something most everyone else doesn't. But given that he's engaged in projects with national security as well as corporate security interests, he's required to be circumspect. "Carefully but confidently" is one description; mine is a bit different: he speaks as if managing expectations, suggesting "you ain't seen nothin' yet," but also "given the pace of change, you'll not be that impressed by it for long."

    • @pranavbiraris3426
      @pranavbiraris3426 2 months ago +1

      I think they are close to achieving AGI

    • @sezam84
      @sezam84 2 months ago +2

      He is a sales guy :) he knows how to sell.
      We are far from AGI; we need way faster silicon and way more memory than we can provide.

    • @frarfarf
      @frarfarf 2 months ago

      100 💯. He's already interfaced with the next gen, the private tech that the public hasn't seen, and it's incredible and game-changing. The world will be a different place 10 years from now

  • @gaivoron
    @gaivoron 2 months ago +15

    This guy freaks me out.

  • @DaveSimkus
    @DaveSimkus 2 months ago +8

    ChatGPT sucks, from my experience with it. It needs a factual-probability percentage with each answer to make it useful, because it gives a lot of wrong information and states it like it's 100% correct. It should show a percentage of how confident it is with each answer.

    • @WeebBountyHunter
      @WeebBountyHunter 1 month ago

      I only use it to roleplay and it tends to forget shit a lot

    • @nm8023
      @nm8023 1 month ago +2

      Isn't the problem how? How can it fact check? People can hardly do so, and I wonder if it is really reasonable to expect an Algorithm to do so? We don't talk enough about the limitations and best uses for this software, which is exactly what Sam talked about here. Not a criticism, just a reflection on your comment.

    • @DaveSimkus
      @DaveSimkus 1 month ago +1

      @nm8023 yeah that's a good point. Maybe there is some way for it to check by looking at how much of the answer is corroborated by other sources. So if the answer has very few connections then it could say that it isn't 100% sure of it. If each answer had like a visual bar that showed how confident it was in its answer, it would be helpful.
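A per-answer confidence number like the one suggested in this thread can be roughed out from the probabilities a language model assigns to its own tokens. This is a hypothetical sketch, not a real API call: the `token_logprobs` lists below are made-up example data standing in for per-token log-probabilities a model might expose:

```python
import math

def confidence(token_logprobs):
    """Rough confidence score: geometric mean of per-token probabilities.

    token_logprobs is a list of log-probabilities, one per generated token.
    Returns a value in (0, 1]; 0.0 for an empty answer.
    """
    if not token_logprobs:
        return 0.0
    mean_lp = sum(token_logprobs) / len(token_logprobs)
    return math.exp(mean_lp)

# Made-up log-probabilities for two hypothetical answers
sure = [-0.05, -0.10, -0.02, -0.08]   # tokens the model strongly preferred
shaky = [-1.2, -0.9, -2.5, -1.6]      # tokens the model was guessing at

print(f"confident answer: {confidence(sure):.0%}")
print(f"shaky answer:     {confidence(shaky):.0%}")
```

The caveat raised above still applies: this measures how sure the model was of its wording, not whether the answer is factually correct, and the two can diverge badly on confidently hallucinated answers.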

  • @cgcrosby2
    @cgcrosby2 2 months ago +5

    Altman is like, always disagreeing but telling you you’re close to being right, really close! Still wrong though, sorry!

  • @bigfishysmallpond
    @bigfishysmallpond 2 months ago +3

    I thought it worked better when it was first released. They blocked it from business building. IMO

  • @motionsick
    @motionsick 2 months ago +26

    Can we get a body language expert to break this down?

    • @vitmemay
      @vitmemay 2 months ago

      @TheBehaviorPanel

    • @vikinghammer87
      @vikinghammer87 2 months ago

      Weaponized Autism.

    • @kaynewest6167
      @kaynewest6167 2 months ago

      Yeah sam thinks this guy is fake and plays both sides for views and money , and doesn’t wanna build a connection anymore

  • @AG-ld6rv
    @AG-ld6rv 2 months ago +7

    The best way I can describe AI right now is it's like having a 1-hour or longer Google research session (+ maybe reading some books) where you read numerous links and synthesize what you think is the truth all in 15 seconds. This transfer of information is incredibly beneficial. Now, this is for topics that have data to learn from. So if you are asking something about undergraduate programming that it has learned from using 150 formal textbooks and thousands of online discussions, it will summarize anything you ask quite well (except when it hallucinates or learned to reproduce a common misconception, which is especially possible for narrower, more technical topics or topics overloaded with political meanings of right and wrong or other situations I'm sure). If you ask it something novel that has 1 research paper about it, it basically has 0% chance of getting it correct.
    In my opinion, this limitation will not be surpassed. The transfer of known information will continue to improve hopefully, but ultimately, it has to have the information in the first place in the training set. Is it disproven that it will start to think like a human and produce novel ideas on complex topics like doing scientific research or developing end-to-end a corporate system in need of coding? No. It might happen. I don't think it will though. Personal prediction.
    The four issues I see going forward are: Hallucinations, dependency on training set data to know an answer, training sets becoming corrupted with AI-generated content, and scaling the input/output size + computation time + energy used computing. That last category I think most people don't think about at all even though it was discussed some here. The most typical use, as he admits, is a person putting in a few paragraphs maximum and needing a few paragraphs out maximum. It's a stretch to think of one of these as programming a corporate system that consists of 500,000 lines of code all bound to further complexities like what hardware it will run on, how it will be deployed, what monitoring will run as the code executes (like # of requests per minute to a web server), what alarms will sound based on those metrics, and what will be logged for debugging investigation. Oh, and it will need to be able to translate a list of needs described somehow (apparently in plain English) into a complicated program without much representation anywhere. We aren't talking about asking it to solve an algorithms interview question, which has thousands of examples in the training set, or to build a web scraper, which also has thousands of examples in the training set.

    • @Hlbkomer
      @Hlbkomer 2 months ago

      Give it a few years and every single issue you have listed will be resolved. AI will become smarter than humans. It’s just a matter of time.

    • @AG-ld6rv
      @AG-ld6rv 2 months ago +1

      @@Hlbkomer Yeah, if you trust the dude who benefits financially from telling people that dream. There are researchers in the field that do not think it's possible. It's not a certainty at all. It's just like when "We'll be on Mars in 10 years" said we'd be on Mars in 10 years like 14 years ago. As far as I know, he hasn't even hit the moon yet.

    • @Hlbkomer
      @Hlbkomer 2 months ago

      @@AG-ld6rv 10 years 20 years who cares. Rome wasn't built in a day.

    • @AG-ld6rv
      @AG-ld6rv 2 months ago +1

      @@Hlbkomer "Wahhh, I like to be manipulated by people who don't care at all about being honest or me. They're SOOO cool. Just read their interviewers and tweets. Who cares if they have a track record of interfacing with greater society in intentionally misleading ways while expressing no practical doubts and no actual setbacks (like when Musk's Neuralink killed a monkey within months). Yeah, they intentionally manipulate people by talking about dreams as guarantees and best case scenarios as guarantees (maybe not even best case scenarios. They may actually know limitations prohibit such progress quite directly in which case they are plain old lying rather than lying by omission). These are my tech saviors of the world!!!!"
      I've already talked plenty about Altman. Let's talk more about Musk -- Neuralink in particular. Why can't he tweet, "Drats, our test monkey died after a test implant was put in"? A good person discusses setbacks and limitations as well as dreams (explicitly stated as such) and victories. Why has he never tweeted that there are plenty of neural implants in use for treatments of extreme, deadly diseases like Parkinson's but instead tweets as if he is making completely novel technology? Why did he, after the first human test implant happened, juxtapose his dream-filled tweets about controlling your phone and doing other insane stuff with "Got a successful implant in a human" to imply THAT technology was being tested when it was really a quadriplegic being augmented to do simple stuff like play chess? Why couldn't his original tweet have been, "We have Nueralink in a quadriplegic, enabling him to play chess! So far, these are the problems he is experiencing: ... "? Why can't he tweet that there are non-invasive helmets you can wear that enable you to play chess with just your mind without sticking metal wires into your brain? Why is he even talking about this extraordinary dream of large segments of the population being augmented despite having no disease when Neuralink will clearly be used for likely the next 20+ years ONLY in medical cases -- stuff like trying to treat Parkinson's, dementia, extreme schizophrenia, extreme bipolar disorder, extreme depression, etc.? Why does he have to be such a liar, and why do people have to defend these awful, greedy practices solely meant to create a buzz based on nothing?
      I am not saying Altman and Musk are not smart, imaginative, or hard-working. They have all three. That makes their flagrant disregard for telling the general population where their companies are actually at so much worse than when Joe Schmoe misunderstands something and makes extreme statements. Please, pick truthtellers to be your heroes, influencers, discussers of future technology, etc. I know it's not as exciting to hear a balanced statement with a few "I'm not sure" statements in it. However, it will be honest and reasonable at least. Imagine if I had one fried chicken restaurant and went around telling people my company will end up being a chain with 1,000+ locations. It just isn't the moral thing to do. It doesn't matter if I'm the colonel and eventually achieved that goal. The way the fried chicken industry works, I just can't go around saying that. A more honest statement would be, "I would love to have 1,000+ locations in a few decades. Who knows, though!?" Man, that sounds so much more realistic and deserving of praise. And that example doesn't even include how Altman and Musk make statements based on speculative technological growth rather than just the much more demonstrated dream of opening up a bunch of restaurants, which we already know is possible. So in that example, it would be more like the colonel saying, "And we will cook all the chicken to perfection using a new cooking tool that flash cooks it all in 5 seconds flat!" Man, the colonel is starting to sound unlikable.

  • @dread69420
    @dread69420 2 months ago +2

    Here we see a classic demonstration of how a strict parent sees their child even if they are impressing the entire world: 0:43

    • @Bailiol
      @Bailiol 1 month ago

      Strict parent? More like narcissistic, passive aggressive or abusive parent.

  • @gianttwinkie
    @gianttwinkie 2 months ago +2

    How do you get ChatGPT 4 to read a book?

    • @zaferyank5960
      @zaferyank5960 2 months ago +2

      You can upload a PDF file to GPT-4 Plus directly and prompt about it. GPT-4 Plus analyzes PDFs smoothly.

    • @gianttwinkie
      @gianttwinkie 2 months ago

      @@zaferyank5960 Is ChatGPT-4 different from GPT-4? Because ChatGPT-4 Plus says I can't.

  • @A91367
    @A91367 2 months ago +2

    These two are so into each other

    • @kaynewest6167
      @kaynewest6167 2 months ago

      😂😂😂😂 bro sam hates this guy

  • @fookpersona9579
    @fookpersona9579 2 months ago

    I talked to human-1 and was astonished!

  • @darrelmuonekwu1933
    @darrelmuonekwu1933 2 months ago +6

    Save yourself the time, he doesn’t talk about gpt-5

  • @endgamefond
    @endgamefond 2 months ago

    Can Copilot get as good as ChatGPT-4?

  • @joezunenet
    @joezunenet 2 months ago

    He's talking about hierarchical planning. If GPT-5 does that… it will be insane!

  • @tcpip9999
    @tcpip9999 2 months ago +1

    Great interview as usual. The idea of fact-checking concerns me, as we rapidly need to engage with later Wittgenstein and some of Nietzsche... no facts, only interpretations.

  • @digitalmc
    @digitalmc 2 months ago +11

    Shouldn’t it be able to fact check itself? Or tell it to not make information up if it knows it isn’t 100% accurate?

    • @Leonhart_93
      @Leonhart_93 2 months ago +9

      100% accuracy is in all likelihood not how these statistical word-prediction engines work. And when they start the prompt, they don't know yet how it will end; it's a step-by-step progression. At some point, if a word is totally wrong in the context, what follows after it will be even more wrong.
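The step-by-step word prediction this reply describes can be illustrated with a toy model. This is a deliberately tiny bigram sketch (the corpus is invented for illustration, and real LLMs are vastly more sophisticated): each word is chosen only from what came before, so one bad choice early on steers everything after it:

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count, for each word, which word follows it and how often."""
    words = text.split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def generate(model, start, n=5):
    """Greedily pick the most common next word, one step at a time."""
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: this word never had a successor in training
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
m = train_bigrams(corpus)
print(generate(m, "the"))
```

Each step conditions only on the previous word, so the generator has no plan for how the sentence ends when it starts, which is a miniature version of the "step-by-step progression" point above.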

    • @shamicentertainment1262
      @shamicentertainment1262 2 months ago +2

      That's what annoys me still. Then I call it out on a mistake and it's like "sorry, you are right". Bro, why not fact-check it before telling me? It's because it's just guessing what word will come next; it doesn't properly understand what it's saying.

    • @erwinzer0
      @erwinzer0 2 months ago +1

      Yeah, that's the current model's problem, but Sam seems to believe it could become true AGI, which needs to be able to understand its own mistakes, in other words, to be self-aware.
      AGI could easily be abused, a real threat to humanity.

    • @CamAlert2
      @CamAlert2 2 months ago

      @@shamicentertainment1262 That's part of the reason he calls it bad.

    • @frarfarf
      @frarfarf 2 months ago

      Yes, it should, and it will

  • @michaelwells6075
    @michaelwells6075 2 months ago +1

    Interesting to learn that Mr. Altman uses ChatGPT 4 much the same way I do, as a personal assistant with which I do brainstorming on various subjects. Plus, my biggest criticism (besides calling what these LLMs exhibit "intelligence," when at best they are linguistic _simulations_ of intelligence, thus debasing what intelligence actually is) is, as mentioned, the limitations of it not getting to know _me_ as an entity. If it _were_ a creative human brain-storming assistant, they would have a first impression which would develop and change through our interactions. We'd get to know one another, and if the partnership worked, we'd each bring something unique to the exchange. In one long thread I asked GPT 4 to use the information in the thread to analyze and summarize the significance of this interest as an indication of my character. I was rather shocked that what it reported back was personally reassuring and validating. - But there are many threads on a wide range of topics. I'd be much happier if I could choose which threads were (so to say) aware of one another.

  • @Treesolf5
    @Treesolf5 2 months ago +1

    I had to google it

    • @Treesolf5
      @Treesolf5 2 months ago

      Yup, I was right

  • @JeromeDemers
    @JeromeDemers 2 months ago

    You say the future is hard to predict, but the guy in front of you is writing a part of the future

  • @manonamission2000
    @manonamission2000 2 months ago

    GPT-3.5 is capable of making lateral connections... uncovering double entendres

  • @omarkhan7752
    @omarkhan7752 2 months ago +1

    It's only a matter of time until this software gets taken and manipulated. I'm worried.

  • @ModsForAndroidAndiOS
    @ModsForAndroidAndiOS 2 months ago +1

    Still, GPT-3 does better in some cases if configured correctly lol

    • @ModsForAndroidAndiOS
      @ModsForAndroidAndiOS 2 months ago

      Why Saltman is biased toward GPT-4 Turbo I don't know; is he trying to draw attention to GPT-4, or maybe he doesn't understand the differences haha

  • @RussInGA
    @RussInGA 2 months ago

    watching this makes me like Sam. yep. smart guy.

  • @kmb_jr
    @kmb_jr 2 months ago

    As an artist, I understand the attitude of not being satisfied with one's own work... or needing an outside perspective to appreciate it... This guy though.....
    He practically creates a living consciousness and degrades it for not being good enough.
    The future's looking rough.

  • @claudioagmfilho
    @claudioagmfilho 1 month ago +1

    🇧🇷🇧🇷🇧🇷🇧🇷👏🏻, I also think GPT4 turbo sucks in relation to where it needs to be, for sure!

  • @lemonking3644
    @lemonking3644 2 months ago +7

    I LOVE LEX FRIDMAN OMG! We need more Ayahuasca stories

  • @liamd967
    @liamd967 2 months ago +1

    This title was written by GPT 0

  • @StudyIQS
    @StudyIQS 2 months ago

    Can someone look carefully at this conversation? It doesn't look realistic that both of them are in the same place. The video doesn't show they are in the same room; the backgrounds are different.

    • @dasbuilder
      @dasbuilder 1 month ago

      They are in the same room. Look at the curtain in the background. In Lex's camera shot, you can see the red of the curtain in the middle right of the frame. The lighting is also too consistent between shots for them not to be in the same room.

  • @pm2007est
    @pm2007est 2 months ago

    It does suck at writing songs 😂😂 I ask it to write lyrics and they are so lame. Eventually, though, it's going to be like magic. We're going to ask it to do something and it's going to create absolute masterpieces.

  • @gabrielpwv
    @gabrielpwv 2 months ago +1

    I understand that ChatGPT-5 is better than ChatGPT-4 and so forth... but to say GPT-4 sucks, and then he will say GPT-5 sucks? That's stupid. It's like saying the iPhone 7 sucks because we are at the iPhone 15 Pro... the iPhone 7 is a marvel of technology, just like iPods and so forth. His perspective on growth and development is shortsighted.

    • @gigamoment
      @gigamoment 2 months ago

      Obviously he has a specific vision of what AI actually is, maybe closer to what an AGI is

    • @tinjar441
      @tinjar441 2 months ago

      He's saying there will be exponential growth. It's like saying a flip phone sucks because we are at the iPhone 15.

    • @gabrielpwv
      @gabrielpwv 2 months ago

      @@tinjar441 yeah, but we could be heard saying that because we are the consumers, not the creators. I don't know, maybe I'm just too strict with my thinking because I'm a designer, but I feel like just because GPT-5 is better than 4 doesn't mean ChatGPT isn't a marvel of ingenuity.

  • @DigSamurai
    @DigSamurai 2 months ago

    The kind of journalism Lex is talking about is not possible in the eyeballs-past-the-post advertising model currently used by virtually all media outlets.
    What's more, it's unclear what business model would work to separate journalistic integrity from the ad revenue required to support the business.

    • @michaelwells6075
      @michaelwells6075 2 months ago

      Well, just think: if AI eventually replaces the need for most employees, we'll all be broke and unable to purchase the goods and services the corporations once employed us to generate. Churning out journalistic pulp for the ad revenue required to support the business would be moot. In this scenario, there's no ability to support the business economically, nor any need to do so. Then we have not only a cashless society, but one without a monetary or economic system as we understand it at all! Or, we're all dead of starvation. (Yes, I know this situation isn't likely to occur either way any time soon, if ever. AI is already affecting the job market to some extent and this will continue. But we are quite a ways away from the majority of our fellow humans being not only unemployed but destitute, with no possibility of keeping themselves fed, clothed, and housed.)

  • @Shhhoooooo
    @Shhhoooooo 2 months ago +2

    Yeah, when are you gonna keep it from lying? 😂

  • @AGI-001
    @AGI-001 2 months ago

    Hi

  • @user-tj2ml6fq2l
    @user-tj2ml6fq2l 2 months ago

    If he wants, let him stay there alone and marry his woman, that's his business ❤❤

  • @abnormal010
    @abnormal010 1 month ago

    Shady

  • @jaydeshaw3394
    @jaydeshaw3394 2 months ago

    Ask AI if a creator of lifeforms would tell one of its creations that they are special.

  • @danegilnss9056
    @danegilnss9056 2 months ago

    Downplaying GPT-4 because he wants people to continue investing. Not wrong, and smart.

  • @user-tj2ml6fq2l
    @user-tj2ml6fq2l 2 months ago

    I'm heading home soon ❤❤

  • @orbitalrift
    @orbitalrift 2 months ago

    ruclips.net/video/TH_B1InvEDY/видео.html

  • @WeebBountyHunter
    @WeebBountyHunter 1 month ago

    This sam guy talks like an AI

  • @omnipresentasmr430
    @omnipresentasmr430 2 months ago

    You could cook a steak using this guys voice

  • @A91367
    @A91367 2 months ago +1

    Saying it sucks because things will be better is SOOO lame. It's like the flex of an insolent 12-year-old.

  • @randyzeitman1354
    @randyzeitman1354 2 months ago +1

    AI PERSONAL COACH AND PSYCHIATRIST

  • @kenneld
    @kenneld 2 months ago

    gpt-4 is a boring parlor trick

  • @janklaas6885
    @janklaas6885 2 months ago

    📍8:15

  • @Bofum69
    @Bofum69 2 months ago +2

    This guy is creepy

  • @Trailerwalker
    @Trailerwalker 2 months ago

    Just an overglorified Google search bar lol

  • @bengrahamstocks
    @bengrahamstocks 2 months ago +3

    I'll do a push up for every like this comment gets

  • @siphoclifort295
    @siphoclifort295 1 month ago

    What happens when it becomes self aware...

  • @GodlessPhilosopher
    @GodlessPhilosopher 2 months ago +1

    Lex is not smart.

    • @kaynewest6167
      @kaynewest6167 2 months ago +1

      Lex would ask the teacher why she didn’t assign homework with a straight face

  • @user-tn5xh3ex3q
    @user-tn5xh3ex3q 2 months ago

    First comment

    • @franko8572
      @franko8572 2 months ago +1

      Here is your cookie. 🍪

    • @UltraEgoMc
      @UltraEgoMc 2 months ago +1

      Nope there is a comment below you

    • @franko8572
      @franko8572 2 months ago

      @@UltraEgoMc Nah, sort comments by newest. Then scroll down. He's the first.

  • @yodamaster202
    @yodamaster202 2 months ago +2

    Problem is that AI is so woke that it is not AI.

    • @Idlewyld
      @Idlewyld 2 months ago

      I'm sure if you pray to your Super Jesus really hard, he will fix it for you with the Force.