A.I. and Stochastic Parrots | FACTUALLY with Emily Bender and Timnit Gebru

  • Published: 11 Sep 2024
  • SUBSCRIBE TO FACTUALLY: link.chtbl.com...
    SUPPORT THE SHOW ON PATREON: / adamconover
    So-called “artificial intelligence” is one of the most divisive topics of the year, with even those who understand it in total disagreement about its potential impacts. This week, A.I. researchers and authors of the famous paper “On the Dangers of Stochastic Parrots,” Emily Bender and Timnit Gebru, join Adam to discuss what everyone gets wrong about A.I.

Comments • 1.4K

  • @JayconianArts
    @JayconianArts 1 year ago +403

    I wanna say, as an artist, hearing professional researchers and entertainers explicitly saying the same things that artists have been saying is relieving. With image generators being one of the first big booms of this current wave, artists have been raising the alarm on this topic for almost a year now, about the negative impacts it's going to have and how it's exploiting us. Feels like we've been on our own for most of the fight, so seeing that there are others on our side is comforting.

    • @paulmarko
      @paulmarko 1 year ago +13

      Did you see that the US Copyright Office won't copyright AI-generated work because there was no human authorship? The trajectory is already moving in a positive direction. Also, pro concept artists already use a myriad of theft-like tools: photo bashing, Daz 3D, inspiration without consent, etc. I'm an artist and I think artists' worries are misplaced. People aren't going to be replaced; they're going to be able to spend their artistic time just doing the really fun parts of art, juicing the work and pushing it creatively. (At least until AGI comes for every job all at once.)

    • @gwen9939
      @gwen9939 1 year ago +24

      @@paulmarko The whole reason this scare about AI replacing art has even been a thing is the extremely low bar most people have for what constitutes good art. This has been a serious issue in the world of music commissioning since long before AI, where it was impossible to get started on paid freelance commissions because someone was always offering the same as you for incredibly cheap. It generally sucked, but the game devs, film directors, marketing agents, etc., were incapable of telling the difference between a professional and a hobbyist. Same goes for sites like AudioJungle, where the technical quality is at least very high, but it's also completely soulless, inoffensive, market-tested elevator music that sounds like you've heard it for the 500th time on your first listen.
      And it's every level of every industry. The whole Mick Gordon affair, where he got screwed out of a contract, happened because the lead on the project just kicked him to the curb and punted the rest of the project over to their own in-house sound guy, figuring that would be just as good as anything Mick Gordon could make, which is why it sounded like garbage.

    • @JayconianArts
      @JayconianArts 1 year ago +50

      @@paulmarko People's jobs are already being replaced. The nature of these machines isn't to help artists, it's to remove them. Illustrators that have made book covers for years are finding that companies they've worked with now use image generators. There was a Netflix movie that used AI to do backgrounds in an animated film because of a 'labor shortage', meaning that artists wanted better pay and were unionizing, but the company would rather simply not pay artists at all.
      Also, calling photo bashing and inspiration theft-like, and on the same level as image generators trained off billions of stolen images, is simply absurd. If an artist is inspired by something, they're still putting their own spin, skill, and creativity behind it. To say that me being inspired by great artists, studying their works, techniques, and ideas, is comparable at all to someone typing words into an algorithm and getting a result minutes later is insulting. Machines can have no inspiration, no direction, no life or thought in what they're making.

    • @paulmarko
      @paulmarko 1 year ago +4

      @@JayconianArts
      They can't own an AI book cover, though. I'm not sure what kind of book writer doesn't care that they don't and can't own their cover art, except maybe really crappy ones? Sure, there will be an adjustment period before the entire market is flooded with AI art, but it can't replace real artists, because companies need to be able to own the asset, and the low skill involved means that people will gradually stop interpreting an AI-generated cover as a signal of quality. We'll see, of course, but I'm very optimistic that it'll just become an artist's tool that will help people make new and amazing works much faster.

    • @paulmarko
      @paulmarko 1 year ago +1

      @@JayconianArts
      Also, re: photo bashing. I've definitely seen some artists do some iffy stuff. Like, one was painting a desert and it wasn't coming together, so right at the end he basically dropped a desert photo on top and smudged it in a bit. Similar with character-design photo bashing. I've definitely seen a fairly large amount of contribution from what were basically photos just grabbed from Google Images.

  • @fran3835
    @fran3835 1 year ago +270

    When I was in college I did an internship at an AI company. They asked the interns to each make a small project that could help the community and would be open source. I proposed making a video game (a small game-jam type thing that visually represented how AI works); everyone looked at me like I was stupid and told me I had no idea how much effort it takes to make a video game. The other guy proposed making an AI psychologist, and everyone thought it was a great idea... By the time we finished, you could tell it you were about to kill yourself, and sometimes the thing would answer "good luck with that, goodbye" and close the connection. (They removed the psychologist from the site and left it as a regular chatbot.)

    • @Theballonist
      @Theballonist 1 year ago +46

      Perfect summary, no notes.

    • @sabrinagolonka9665
      @sabrinagolonka9665 1 year ago +105

      Absolutely love the conceit that producing an effective psychologist is easier than programming a game

    • @MaryamMaqdisi
      @MaryamMaqdisi 1 year ago +1

      Rofl

    • @estycki
      @estycki 1 year ago +18

      I know I shouldn’t laugh but the bot probably figured if the person was dead then the conversation is over 😆

    • @Neddoest
      @Neddoest 1 year ago +4

      We’re doomed

  • @cphcph12
    @cphcph12 1 year ago +198

    I'm a 53-year-old programmer who started playing with computers when I was 12, in the early '80s. Even then, they expected AI to be just around the corner. 40 years later, AI is still "almost finished" and "so close". The more things change, the more they stay the same.

    • @Fabelaz
      @Fabelaz 1 year ago +8

      You know, the fact that these things can write code for a problem you just came up with is pretty impressive, even if there can be mistakes (which can be fixed through more requests). Also, the rate of improvement of things like Stable Diffusion points towards a significant decrease in the number of commissions artists are going to receive, especially in corporate environments.
      Is it anywhere close to sentience? Hopefully not. Are these things going to leave a lot of people without jobs? Likely, if no policies are implemented really soon.

    • @Ruinwyn
      @Ruinwyn 1 year ago +14

      The biggest problem in programming is still exactly what it has always been: the people who want the program don't know what they want. They can't define what they need, and they keep changing their minds and their priorities. They also have unique problems. The common, general problems have been solved and are available off the shelf with one click. Every now and then, new languages crop up that "make programming more understandable", and after a while they get more complicated, because the simplified versions couldn't solve more complex problems.

    • @GioGio14412
      @GioGio14412 1 year ago +2

      It's not around the corner anymore, it's here

    • @brianref36
      @brianref36 1 year ago +12

      @@GioGio14412 No, it's not. We have nothing even close to an AI that could replace a thinking person.

    • @slawomirczekaj6667
      @slawomirczekaj6667 1 year ago

      Like with breeder nuclear reactors. In addition, all the people capable of real breakthroughs are eliminated from the industry or science.

  • @stax6092
    @stax6092 1 year ago +424

    It's actually kinda incredible how much corporations get away with considering that they have the money to straight up just do a better job. More regulation is always good when it comes to corporations.

    • @tttm99
      @tttm99 1 year ago +37

      Starting with stopping competition-crushing mergers!

    • @1234kalmar
      @1234kalmar 1 year ago +4

      Collectivisation. The best regulation for private companies.

    • @Mr2greys
      @Mr2greys 1 year ago

      @@tttm99 I agree, except when you have other countries allowing it; then they just stomp out local competition, and the only response to that is protectionism. The horse is already out of the barn; it's pretty much too late.

    • @andrewmcmasterson6696
      @andrewmcmasterson6696 1 year ago +13

      It's the MBAification of corporate excellence: whenever you can, substitute the appearance of excellence for the real thing.

    • @kyleyoung2464
      @kyleyoung2464 1 year ago +9

      this comment goes hard. proof that the best for profit does not = the best for us.

  • @joshuachesney7552
    @joshuachesney7552 1 year ago +254

    Just today, an automation product we use was promoting its new AI integration, saying the old way was slow and bad because we had to spend time researching things. The new way is awesome because the AI just finds the answer and tells it to you.
    The question was how to prevent a computer from upgrading to Windows 11, and the AI answer was to permanently disable Windows from getting any types of updates ever. (For those who don't know, this is considered by industry professionals to be, as we say in the biz, "fucking stupid".)
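    For reference, a sketch of the approach admins typically use instead: Microsoft's documented target-release policy pins the feature-update version without disabling updates (the "22H2" version string here is an example choice, not something from the comment):

```shell
# Pin a Windows 10 machine to a chosen feature release instead of
# killing all updates. Uses the documented Windows Update for Business
# policy values under HKLM. Security/quality updates keep flowing;
# only the upgrade to Windows 11 is blocked.
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v ProductVersion /t REG_SZ /d "Windows 10" /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v TargetReleaseVersion /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v TargetReleaseVersionInfo /t REG_SZ /d "22H2" /f
```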

    • @tttm99
      @tttm99 1 year ago

      Testify! It's a non sequitur, isn't it? But you can't sell people - can't even give away - the seemingly obvious truth: that relying implicitly and unconditionally on something you don't understand and don't control is a bad idea. It might happen when you can't help it, but it ain't a good thing to go shopping for. 🤣 On the other hand, I'd concede the AI might indeed be a higher intelligence if it instructed you to install Linux, or to just put your machine in a bin and go on a well-deserved holiday 🤣. We can dream.
      But sadly, sometimes contextual answers actually need to be practical and sensible, and those won't come from any AI until it is *vastly* more intelligent and far more connected to the real world. Hopefully long before then we'll realise that building *that* would be a very bad idea. And the fatalist-inevitability crowd who argue against this might want to ponder why we *still*, after all these years, haven't nuked ourselves into non-existence yet.

    • @franklyanogre00000
      @franklyanogre00000 1 year ago +12

      It's not wrong though. 😂

    • @louisvictor3473
      @louisvictor3473 1 year ago +47

      @@franklyanogre00000 The AI took "technically correct is the best type of correct" at face value and made it its motto.

    • @SharienGaming
      @SharienGaming 1 year ago +31

      And that perfectly illustrates the difference between finding "an answer" and finding "the (correct) answer".
      The idea that a chatbot can do research like that is laughable, and anyone doing serious software development or systems maintenance will be able to tell you that... automation tools are nice because they free up our time to do the actual hard work, the research and analysis, but they don't replace that hard work.

    • @PeterKoperdan
      @PeterKoperdan 1 year ago

      What was the AI's next solution?

  • @kibiz0r
    @kibiz0r 1 year ago +232

    The eugenics connection is bone-chilling. People don't realize how popular eugenics was, across the whole world. It wasn't some fringe Nazi-specific thing. People really thought we were on the verge of creating a new superior species by applying genetic engineering principles to ourselves. We're in the same situation again, but businesses are enacting it unilaterally -- no government coordination required -- and public opinion seems (un?)surprisingly amenable to it.

    • @UnchainedEruption
      @UnchainedEruption 1 year ago +35

      We still practice some aspects of the eugenics movement, but obviously we don't call it that anymore. Prospective parents receive information about what risks their child might have if they go through with the pregnancy, and some may decide to abort the fetus if the life will be too hard on both the family and the child. We have organizations concerned with the accelerating growth of the global population, urging people to have fewer kids to prevent overpopulation down the line. What made eugenics insidious was that somebody else, an authoritarian regime, would dictate who had the right to live and reproduce and pass on their genes. Those decisions were not voluntary. However, if people want to have some small effect on the future of the species by voluntarily choosing whether or not to have kids, I don't think that's evil. It only becomes evil when you decide for somebody else what value their life has.

    • @Ben-rz9cf
      @Ben-rz9cf 1 year ago +6

      We're not just creating dangerous technology. We're creating dangerous people, and that's what we should be more worried about.

    • @yudeok413
      @yudeok413 1 year ago

      The thing about eugenics is that its proponents are obviously on top of the pyramid. All you need is a few billionaires who already think that they themselves are the pinnacle of humanity (Thiel and his minions like Musk) to get the ball rolling.

    • @Frommerman
      @Frommerman 1 year ago +8

      Also, consider the similarities in effect between eugenics and the modern field of economics. Both make broadly unfalsifiable claims which could not be adequately tested even if the people studying them wanted to. Both serve the purpose of continuing to enrich and empower the already powerful. Both are used to justify the continuing horrific conditions in the colonized world by calling them the result of natural laws rather than human malignity. And both are regularly used to justify outright mass murder. In the case of economics it may be hard to see how that is the case...until you know what the estimated yearly cost of completely eliminating hunger is.
      $128 billion. Total. For the cost of liquidating less than a third of Jeff Bezos' absurd dragon hoard, nobody anywhere in the world would starve to death for an entire year. Economists, in their infinite malice, justify a single man's daily decision not to prevent any human anywhere from being hungry. And the truly damning part is that it wouldn't cost that much the next year. Once you removed the threat of starvation from every community everywhere, they would be able to focus on building up the resources they need to feed themselves the next year. It's difficult to estimate, but the whole program of ending hunger globally, permanently, could well cost the wealth of one single person.
      Economists tell us this is unrealistic. Much like eugenicists told us it was unrealistic for white people to live peacefully with the rest of humanity. These aren't different arguments, or even different disciplines. Economists are just eugenicists using bad math instead of bad genetics to justify their arguments. If any of us survive the next century, I expect the histories we write will put Milton Friedman in the same category of evil as Adolf Hitler.

    • @ckorp666
      @ckorp666 1 year ago +5

      (Not sure if this was mentioned in the episode, but) that was the original specialty of Stanford, too. We shouldn't be surprised that they're continuing the legacy now that a decade of low interest rates has allowed the vapid, rich children of Palo Alto skull-measurers to become the sci-fi villains they've always wanted to be.

  • @jt4351
    @jt4351 1 year ago +34

    Fun fact: it is still very buggy even for writing code. Depending on your prompt, it may assume you know what you're doing and suggest some amalgamation of what you asked.
    In programming, there are these things called methods and properties. Think of them as English words that tell the computer to do something: common tasks you don't have to spell out step by step, built-in tools of the programming language. However, if you ask in a specific way, it will suggest your wording as a property of the language, even though it is non-existent. You can tell it that it's wrong, and for the most part it just repeats the same output. Unless I specifically ask it to use a different method, it keeps regurgitating the same thing while "apologizing".
    In plain English, it's something akin to this: let's say you want a recipe for crepes, and you typed in some gibberish, something like "I want crepes that are smoverfied". The model finds a recipe for crepes and adds "once cool, be sure to smoverfy your crepes" with no idea what that is. lol This is a made-up example that may not reproduce exactly, but I've had many cases where it gave me code that, when I try to run it, just throws an error because something doesn't exist; it just morphed my prompt into a feature of the language. It's a great tool to get started, but it mixes and matches, and is often wrong.
    It is just as artificially intelligent as it is artificially dumb. No wonder the mistakes in AI are called hallucinations...
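    A minimal sketch of the failure mode the comment describes, in Python (the method name "smoverfy" is the commenter's own made-up word; no such method exists, so the call fails at runtime):

```python
# Generated code often calls attributes the model invented.
# "smoverfy" is a hallucinated name, so the call fails immediately.
crepes = "a batch of crepes"
try:
    crepes.smoverfy()  # str has no attribute "smoverfy"
except AttributeError as err:
    print(f"Runtime error, just as described: {err}")
```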

    • @Atmost11
      @Atmost11 1 year ago

      I imagine part of your job was to help cover for the fact that, while it has a role in business, it can't perform as hyped in terms of actual unsupervised decision-making?
      Including protecting your own team from evidence that it doesn't work, I bet.

  • @3LLT33
    @3LLT33 1 year ago +41

    The instant she says “the octopus taps into that cable” and the cat reaches out from under the blinds… perfect timing!

    • @Ecesu
      @Ecesu 1 year ago

      Yes! Putting a timestamp so people can see it 😅 59:09

    • @victorialeif9266
      @victorialeif9266 1 month ago

      😂Yeah, and another woman who has a cat! Super dangerous!

  • @GregPrice-ep2dk
    @GregPrice-ep2dk 1 year ago +409

    The larger issue is tech bros like Elon Musk who think they're real-life Tony Starks. Their track record of actually *accomplishing* anything proves otherwise.

    • @CarbonMalite
      @CarbonMalite 1 year ago +79

      If Elon was tasked with inventing a reality-busting mech suit he would invent the 8 day work week instead

    • @mshepard2264
      @mshepard2264 1 year ago

      SpaceX put as much mass in orbit as pretty much every other company on Earth put together. Also, without Tesla, electric cars would still be getting mothballed every 5 years. So feel free to hate Elon, but he isn't a dumb guy. He's terrible at public speaking. He's bad with people. He's also super weird. But not like your average Silicon Valley tech bro.

    • @GirlfightClub
      @GirlfightClub 1 year ago

      100%. Also, AI and big tech execs dictating their own morality on all of us through censorship that doesn't reflect real-life laws and community standards.

    • @stevechance150
      @stevechance150 1 year ago

      I used to be an Elon fanboi, but not so much now. However: 1. NOBODY was manufacturing electric cars until Tesla did it. 2. NOBODY else is going to orbit and landing rockets back on the pad.

    • @O1OO1O1
      @O1OO1O1 1 year ago

      No, con men aren't the problem. It's the people who fall for them, and continue to fall for them for decades. And journalism and journalists are also at fault. And the government is at fault for continuing to fund him. And the people for voting in such stupid representatives. And his employees for putting up with this crap instead of striking and leaking all of the dodgy s*** he's been up to. People, good people, could take down Elon very easily. And then he can sell used cars like he should be.
      "I tried to think about what would be most important for humanity..."
      "Dude, shut up. I just want to buy a car."

  • @estycki
    @estycki 1 year ago +105

    What I don’t understand is all these people who keep saying “well it’s still in its infancy! 👶 And let’s replace our doctors, lawyers, programmers with hard working babies today!” 😂

    • @2265Hello
      @2265Hello 1 year ago +8

      A weird mix of instant gratification and the need to save money as a side effect of the basic-survival mindset in America

    • @Praisethesunson
      @Praisethesunson 1 year ago +4

      ​@@2265HelloSo capitalism.

    • @2265Hello
      @2265Hello 1 year ago +1

      @@Praisethesunson basically

    • @ShadowsinChina
      @ShadowsinChina 1 year ago

      Its the racism

    • @parthasarathipanda4571
      @parthasarathipanda4571 1 year ago +2

      I mean... these are pro-child labour people after all 😝

  • @davidwolf6279
    @davidwolf6279 1 year ago +19

    The irresponsible claims of programmed 'thinking' and intelligence date back to McDermott's 1976 paper "Artificial Intelligence Meets Natural Stupidity"

  • @sleepingkirby
    @sleepingkirby 1 year ago +172

    14:51 "OpenAI is not at all open about how these things are trained... according to OpenAI, this is somehow for safety, which doesn't make sense at all."
    Yes! Thank you! As anyone in the industry will tell you, security through obscurity is BS.
    @Adam Conover
    Thank you for getting real experts on this. People who not only know the context of the topic, but know how it actually works and is built.

    • @moxiebombshell
      @moxiebombshell 1 year ago +9

      🎯🎯🎯 All of the yes.

    • @alexgian9313
      @alexgian9313 1 year ago +15

      @sleepingkirby - Of course obscurity is necessary for security :D
      *Their* security, before the lawyers have a field day sorting through how much IP theft was involved.

    • @skywatcher2025
      @skywatcher2025 1 year ago +3

      I agree that the security argument isn't great, but it's not entirely a lie either.
      It's called an information hazard. Things that qualify are things like "how to build a nuclear bomb", "how to make chemical/biological weapons", etc.
      EDIT:
      I'd like to note that I'm not saying I support only a few companies knowing how to "build the weapon", per se. I'm speaking solely to the fact that security (however limited in scope) is one of the very few (reasonably) good reasons to not be very open about the process.
      Also, I am well aware that some datasets are borderline, if not completely, illegally sourced. I do not support that in any capacity, and I realize that not showing how the systems are trained could enable such immoral usage. I do not claim to know a solution to this very important issue.

    • @sleepingkirby
      @sleepingkirby 1 year ago

      @@skywatcher2025 are you referring to the "security through obscurity" aspect or something else? Because this comment seems like a tangent to me.

    • @alexgian9313
      @alexgian9313 1 year ago

      @@skywatcher2025 Oh, come on....
      Because if they explained how they did it.... why, then just ANYONE could buy millions of dollars of computer equipment, consuming more electricity than a large county, and then rip off millions of poor people to classify all the trillions of GB of data they'd scraped off the internet without permission, and create a hype bomb that this was "dangerous AI", that we needed to be protected from by covering it in total obscurity.
      I mean, WON'T ANYONE THINK OF THE CHILDREN???

  • @XPISigmaArt
    @XPISigmaArt 1 year ago +74

    As a digital artist (and human living in society) I really appreciate this discussion, and hope this side gets more traction to combat the AI hype. Thank you!

    • @andrewlloydpeterson
      @andrewlloydpeterson 1 year ago

      This is funny because like 2-3 years ago (and even now) digital artists were gatekept as hell, and now they suffer from AI haters because digital art is easily mistaken for AI art.

    • @TheManinBlack9054
      @TheManinBlack9054 1 year ago

      "(and human living in society)"
      Why would you add that? Did you think we thought you were Mowgli or something? Or do you think there are people out there who are not human, either in a sci-fi way or a Nazi way?

    • @andrewlloydpeterson
      @andrewlloydpeterson 1 year ago

      @@TheManinBlack9054 Anti-AI folks were too lazy, so they asked an AI to write an anti-AI post; that's why it said such a weird phrase.

  • @robertogreen
    @robertogreen 1 year ago +146

    One thing you didn't focus on here is that GPT's bias (and the octopus's in Emily's paper) is to ALWAYS ANSWER QUESTIONS. Like... if ChatGPT could just not answer you at all, not even "I don't know", then it would be something very different. But its bias towards answering is the heart of the problem.

    • @Ayelis
      @Ayelis 1 year ago +5

      But then it wouldn't be useful as a question answerer, which it might as well be. Without input, it would literally be a random sentence generator. So they trained it to answer questions incorrectly. Which is, kinda, better.

    • @MarcusTheDorkus
      @MarcusTheDorkus 1 year ago +51

      Of course it can't really even tell you "I don't know" because knowing is not something it does at all.

    • @robertogreen
      @robertogreen 1 year ago +12

      @@MarcusTheDorkus this is the way

    • @scrub3359
      @scrub3359 1 year ago +5

      ​@@MarcusTheDorkus Chat GPT can easily do that. It knows what it knows at all times. It knows this because it knows what it doesn't know. By subtracting what it knows from what it doesn't know, or what isn't known from what is (whichever is greater), it obtains a difference.

    • @Brigtzen
      @Brigtzen 1 year ago

      @@scrub3359 No? It can't know things, because all it does is parrot words. It _cannot_ know the difference, because it doesn't know what it doesn't know, because it doesn't think at all.

  • @futureshocked
    @futureshocked 1 year ago +53

    The reason they're pushing AI is because SILICON VALLEY IS OUT OF IDEASSSSSSS. If you look at what they've been doing for the past 15 years and you're brutally honest about it--we've wasted an entire generation of brilliant young programmers to make mobile apps. We've wasted a generation of brilliant product designers to make the Juicero. Bitcoin. Subscription apps. Tech has been in absolute clown-territory for a long time and no one wants to admit it.

    • @personzorz
      @personzorz 1 year ago +1

      Because there's nothing left to do in that sphere

    • @silkwesir1444
      @silkwesir1444 1 year ago

      Boooo!!! Resistance is futile! 😈

    • @futureshocked
      @futureshocked 1 year ago +5

      @@personzorz There really isn't. And it's wild watching companies that should know better just throw money at shit like this. It's tiresome, these billions going into Clippy 2.0 could really be used for, ya know, jobs.

    • @Praisethesunson
      @Praisethesunson 1 year ago +1

      Exactly right. But they need to maintain their access to vast capital markets so they lie out their ass about the capability of a stupid computer program

    • @coreyander286
      @coreyander286 11 months ago +1

      How about protein folding programs? Isn't that a recent Silicon Valley success with concrete benefits for public health?

  • @polij08
    @polij08 1 year ago +152

    Just yesterday, my law firm held a legal writing seminar for us associates. At the end, the presenter made a brief note about using AI for legal writing. In a word: DON'T. He had ChatGPT (or whatever bot) generate a legal memo. First, it was stylistically poor. Second, the bot failed to know that the law at the center of the memo had recently changed, so the memo was legally inaccurate. AI text may be able to generally get the style of writing legal briefs, but until it can accurately confirm the research that supports the writing, it is useless at best, very dangerous at worst. My job is safe, for now.

    • @jaishu123
      @jaishu123 1 year ago +7

      GPT-3.5 is not connected to the internet, GPT-4 can be via plugins.

    • @robinfiler8707
      @robinfiler8707 1 year ago +3

      it can already confirm it via plugins, though most people don't have access yet

    • @deltoidable
      @deltoidable 1 year ago +6

      It won't be long until it can; look at GPT-4 plug-ins that allow you to feed it current data to analyze. You'll be able to upload digital copies of the laws in your state, or give it access to a powerful calculator like Wolfram Alpha, current stock market data, or just the internet generally, letting it use tools when it doesn't know the answer itself.
      Currently ChatGPT isn't actively seeing data; it was trained on data from 2021 or earlier, and it's remembering from data it's been trained on. When you give ChatGPT access to data or tools to ground its answers in real information, that problem goes away.

    • @skyblueo
      @skyblueo 1 year ago +1

      Thanks for sharing that. Is your firm creating policies that forbid the use of this tool? How could these policies be enforced?

    • @achristiananarchist2509
      @achristiananarchist2509 1 year ago +21

      One of the main uses I've found for it as a programmer is pretty funny and related to this. I use ChatGPT for two things 1) generating boilerplate (which it's actually pretty bad at but sometimes it takes less time to correct its mistakes than write it myself) and 2) something we call "rubber ducking".
      Rubber ducking is when you corner a co-worker and talk at them about your problem until you brute force a solution yourself, often with little to no input from said co-worker. It's called "rubber ducking" because you could have saved someone else some time by talking to a rubber duck on your desk instead of a person. ChatGPT is *extremely* useful for this precisely because it is 1) very dumb and 2) has no idea how dumb it is. If I'm stuck on something, I can ask ChatGPT about it, and it will feed me a stupid answer that I've either already thought of or very obviously wouldn't work. In the process of wrestling with the AI, I'm forced to think about the problem and will often get my "Eureka" moment as a result of this. A rubber duck just sits there, ChatGPT feeds me wrong answers that make me think about my issue in assessing why they are wrong. Big improvement over the duck.
      So it's great as a high tech rubber duck. If there are any other applications where being naive, often wrong, and unable to self-correct is actually a feature rather than a bug they should start pivoting into those markets.

  • @hail_seitan_
    @hail_seitan_ 1 year ago +148

    "I am dumbfounded by the number of people I thought were more reasonable than this..."
    Never underestimate human stupidity. If there's something you think people would never do, they've probably done it and more

    • @schm00b0
      @schm00b0 1 year ago +7

      It's never stupidity. It's always greed and fear.

    • @ethanhayes
      @ethanhayes 1 year ago +7

      Not disagreeing with your point, but I think her point was that specific persons she knew, she thought were more reasonable than they turned out to be. Which is a bit different than general "human stupidity".

    • @asdffdsafdsa123
      @asdffdsafdsa123 1 year ago

      God, people like you make me sick. You're not smart. Neither of them addressed the emergent capabilities expressed in GPT-4, which is the SOLE REASON people think LLMs may eventually achieve AGI. Plus, their entire gotcha about the "Sparks of AGI" paper was that they had a problem with IQ tests???? Something that's widely used to this day to determine human intelligence??

    • @Ilamarea
      @Ilamarea Год назад +1

      This comment section is a perfect example of it.
      The AI we have is a research preview of a pre-alpha, an AI embryo. It's literally the first version that worked. It got vision after a couple of months. At the current rate, we have years until the collapse of the capitalist system, which will spark wars for resources. And the inevitable end result, regardless of how it goes, is our extinction, because once AI makes decisions, once it learns by itself, we will have lost our agency; we won't control our fate and will be unable to react to threats, the most potent of which will come from the AI, and in forms we wouldn't expect - like perfect robotic lovers we can't breed with.

    • @schm00b0
      @schm00b0 Год назад +8

      @@Ilamarea Dude, just keep working at that novel! ;D

  • @heiispoon3017
    @heiispoon3017 Год назад +89

    Adam thank you so much for providing Emily and Tinmit the opportunity for this conversation!!

  • @sunnohh
    @sunnohh Год назад +109

    I work with ai and my entire job is fighting against people thinking it works somehow

    • @FuegoJaguar
      @FuegoJaguar Год назад +21

      In a short period 100% of what I do as a director in tech will be to tell people not to put AI in stuff.

    • @RKingis
      @RKingis Год назад +5

      If only people realized how simplistic it really is.

    • @TheManinBlack9054
      @TheManinBlack9054 Год назад +3

      @@RKingis Who are these people who know NOTHING about AI and then say that it's really simple to understand how it works? It's not simple, it's hard; it's basically kind of magic, because we have no idea what is actually happening. Interpretability is a LOOOOOOONG way away.

    • @carl7534
      @carl7534 Год назад

      @@TheManinBlack9054 How do you figure it is hard to understand what chat „AI“ does?

    • @Maartenkruger324
      @Maartenkruger324 Год назад

      @@TheManinBlack9054 "We" will never know, because everything that GPT says is non-referenceable. They, the GPT programmers, do know how it got to its answer: through statistical calculations. At best it can be worked back to a crapload of data with no direct answer. The bot has no physical reference system. Mostly scripted sentences with no clue of the meaning of any of the words, separately or together. ChatGPT does not know what a chair is.

  • @tychoordo3753
    @tychoordo3753 Год назад +6

    The reason they are calling for regulations is simple; it's the same tactic corporations have used since forever. Basically, you ask the government for regulations that are at most a minor nuisance for your business, but make it impossible for newcomers to get started because of the overhead the regulations create, so you get to stay on top without having to fairly compete. Same reason why guilds used to be a thing in the Middle Ages.

  • @gadgetgirl02
    @gadgetgirl02 Год назад +9

    "End of work! Everything automated!" sounds great until you remember a) no-one said anything about changing how the economy works, so people still need a means to an income, and b) if automating everything were so great, people would have stopped paying a premium for handcrafted stuff by now.

  • @sleepingkirby
    @sleepingkirby Год назад +33

    I do want to mention that people who monitor bot accounts have recently seen a large uptick in said bots posting things that talk up / say positive things about AI (basically spam) on things like user reviews, TikTok, and comments on random things.
    Also, there has been a report out recently saying that AI-generated code is often unsafe code, and the bot won't point that out unless you ask it to.
    But yes, it has become a marketing term.

    • @MCArt25
      @MCArt25 Год назад +1

      To be fair, "AI" has always been a marketing term. At no point has anybody ever managed to make intelligent software; it just sounds cool and sci-fi, and people will always fall for that.

    • @sleepingkirby
      @sleepingkirby Год назад +2

      @@MCArt25 Well... no. Artificial intelligence goes back to science fiction first, before it was even close to being a thing in reality. Like Isaac Asimov. There's a work from 1920 that talked about intelligence in robotic beings. It wasn't always a marketing term, but it has become one.

  • @UK_Canuck
    @UK_Canuck Год назад +56

    Thanks to you and your guests, Emily and Timnit. This was a fascinating conversation that filled in so much detail for me. I had a vague sense of disquiet about the hype, the possibly plagiaristic nature of the output, and the accuracy of the data sets used for training. Emily and Timnit have provided some solid background to give a more defined shape to my concerns.
    I found particularly interesting the information that the groups driving the AI/AGI project had such clear links to the philosophy behind eugenics. Disturbing.

    • @robertoamarillas
      @robertoamarillas Год назад

      I honestly believe Adam fails to understand the real potential and potential harm that artificial intelligence represents.
      Human intelligence is not as unique as he wants to make it out to be; the reality is that human creativity is nothing more than the ability to blend concepts and ideas, and in that, LLMs are incredibly powerful and we are only scratching the surface.
      I think your whole concept of skepticism and discovering the truth in everyday deception is very valid and necessary, BUT I think you are really losing sight of the kind of paradigms that LLMs represent. I really think you underestimate the potential existential risks, and it is annoying and irresponsible for you to indirectly attack the voices that have been raised to warn about them.
      You treat people like Eliezer Yudkowsky and the like as doomsayers who are motivated by some kind of financial gain.

  • @MusaMecanica
    @MusaMecanica Год назад +12

    I loved this show, and these ladies should have their own! They are funny, smart, entertaining, and put all of this news in perspective. Keep on fighting the good fight.

  • @terriblefrosting
    @terriblefrosting Год назад +11

    I _really_ love listening to people who really, really know their stuff, do serious thinking about more than just "right now", and genuinely think about the real benefit of new things to everyone.

    • @oimrqs1691
      @oimrqs1691 Год назад

      Do you think people working on OpenAI don't think about stuff?

    • @LafemmebearMusic
      @LafemmebearMusic Год назад +1

      @User Name do you think their point was that the other side is stupid? That’s what you took from this?
      For me, I heard them say: hey, I don’t have to agree with everything you want, but there are serious concerns about the marketing of AI versus the reality, and we need more transparency so we can actually know where we stand with the tech and how it can help others. Also, they are deeply concerned about the eugenics angle it seems to be taking.
      Can I ask, truthfully, with zero malice, a real question: how did you take away from this that they think the other side is stupid? I definitely do think they find what they're doing dangerous and ridiculous, but stupid? I dunno, can you elaborate?

  • @samanthakerger3273
    @samanthakerger3273 Год назад +15

    I love how much smarter I feel for having listened to this when it's a podcast that includes the sentence, "Is the AI circumcised?" Which is one of the funniest and darkest sentences in the podcast.

  • @IngramSnake
    @IngramSnake Год назад +35

    Timnit Gebru is the real deal. As postgrad A.I. students, we constantly refer to what she and her team have put together to evaluate our models and approaches to datasets. 🎉

    • @fark69
      @fark69 10 месяцев назад

      Is this true? Does she have a good reputation as an AI ethicist in academia? I remember her public kerfuffle with Google a few years back basically tanked any reputation she had because she was caught lying about Google's "pushing her out" of her job as an AI ethicist there. And public lying tends to not look great on an ethics researcher...

    • @Stevarious
      @Stevarious 10 месяцев назад

      @@fark69 Weird, I've seen a few claims that Timnit Gebru lied about something about that situation, but those claims never seem to include evidence. Meanwhile, this comment section is loaded with people who work in AI and have a deep respect for her.

  • @Toberumono
    @Toberumono Год назад +10

    Also, and I cannot believe how rarely this seems to get mentioned, these bots *suck* at programming.
    And it’s not because there’s any synthesis of new code going on - the implementation seems to actually be, “grab the first answer on stackoverflow”. My source for that is just… looking at stackoverflow because I got suspicious after the “synthesized code” was answering somebody else’s question. If it can’t find the answer on stackoverflow, it starts copying forum posts from other places, btw. You can see that because it starts giving answers that are either identical or identically wrong.

    • @Ew-wth
      @Ew-wth Год назад +1

      If you read some of what those AI bros write, you'd think the coding capabilities are the second coming of Christ, lmao. Figures. I mean, we do have the Copilot lawsuit at least.

  • @ellengill360
    @ellengill360 Год назад +5

    This is extremely important information. I hope your guests consider writing a version of the Stochastic Parrots article for non-scientists in plain language, maybe highlighting some of the less mathematical points. I'm going through the original article but find it hard to recommend to people who won't want to spend the time or will give up.

  • @DerDoMeN
    @DerDoMeN Год назад +44

    It's always a shocker listening to people that actually don't glorify these search algorithms... I find it even more shocking to listen to somebody from the field who's not trying to show AI as anything more than what it is.
    Really nice to hear that there are some sane people in the AI field, in which I lost interest years ago (due to the obvious lack of reason in the land of proponents).

  • @funtechu
    @funtechu Год назад +6

    16:40 In the vein of ChatGPT-produced results looking correct to those who are not familiar with the topic, I would disagree with the assumption that ChatGPT-produced code is good. I've fed a large variety of simple programming prompts to ChatGPT, and the results were terrible. It was a great mimic of what code that did what was requested would look like, but it was not usable code, and some of the stuff produced (particularly when asking about writing secure code) was downright dangerous.
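    To make "downright dangerous" concrete, here's a toy sketch (my own hypothetical example, not actual ChatGPT output) of the classic unsafe pattern these tools happily produce when asked for database code, next to the parameterized version any careful reviewer would demand:

```python
import sqlite3

# The kind of code an LLM will cheerfully emit: SQL built by string
# interpolation, which is vulnerable to injection. (Hypothetical example.)
def find_user_unsafe(conn, name):
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

# What a careful human writes instead: a parameterized query, so user
# input is treated as data, never as SQL.
def find_user_safe(conn, name):
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"                      # a classic injection string
print(len(find_user_unsafe(conn, payload)))  # 2: the injection leaks every row
print(len(find_user_safe(conn, payload)))    # 0: no user is literally named that
```

    Both functions "look right" at a glance, which is exactly the problem: the output is plausible to anyone who isn't already familiar with the topic.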

    • @vaiyt
      @vaiyt Год назад +1

      Often when it is correct, it's just copying an existing answer from Stack Overflow or whatnot.

  • @lunarlady4255
    @lunarlady4255 Год назад +91

    The only thing that can stop a bad guy with AI is a good guy with AI. So give us your money and your data and don't ask any questions if you want to live...

    • @aaronbono4688
      @aaronbono4688 Год назад +10

      That is pretty much the theme of Terminator 2 isn't it?

    • @kenlieck7756
      @kenlieck7756 Год назад +5

      @@aaronbono4688 Wasn't that written by humans, though?

    • @aaronbono4688
      @aaronbono4688 Год назад +7

      @@kenlieck7756 yes. These AI's just take the information they find and regurgitate it in new ways and since that information contains things like the Terminator movies you would definitely expect them to mimic that. But to the point of the original message, this is about what these companies are telling the public about the AI's that they are creating.

    • @kenlieck7756
      @kenlieck7756 Год назад +1

      @@aaronbono4688 Ooh, you just made me realize the ultimate flaw in the current AI -- that they are just as likely to crib from, say, the most recent Indiana Jones movie as they are to do so from the first...

    • @redheadredneckv
      @redheadredneckv Год назад +2

      Quick, insert BS chips into your head so we can defeat an ambiguous, unscientific Terminator.

  • @Mallory-Malkovich
    @Mallory-Malkovich Год назад +17

    I have a very easy system. I keep a card in my pocket that reads "do the opposite of whatever Elon Musk says." It has never failed.

    • @peter9477
      @peter9477 Год назад

      So being poor has worked out well for you, has it? ;-)

    • @SharienGaming
      @SharienGaming Год назад

      @@peter9477 hows that boot taste? and getting ready for the next crypto crash?

    • @gwen9939
      @gwen9939 Год назад +6

      @@peter9477 And did you become a billionaire by sucking up to Elon on the internet? Has senpai noticed you yet? didn't think so.

    • @peter9477
      @peter9477 Год назад

      @@gwen9939 I'm not a billionaire, and I dislike Musk. Not sure what senpai means, but whatever you're trying to say here, you failed to get the idea across.

    • @dperricone81
      @dperricone81 Год назад +3

      @@peter9477 I got it. Maybe don’t simp for snake oil salesmen?

  • @BMcRaeC
    @BMcRaeC Год назад +6

    59:13 when Emily's cat decides to enter the conversation… I burst out laughing in the library.

  • @nzlemming
    @nzlemming Год назад +104

    I love these women! When I saw the pause letter, I immediately thought that it was commercial in nature and discounted it. As a rule of thumb, anything Thiel and Musk agree on is bound to be a grift.

    • @Sarcasticron
      @Sarcasticron Год назад +10

      Yes, when they said why can't the "AI ethics" people and the "AI safety" people agree, I thought immediately "It’s because the AI safety people are grifters."

    • @Neddoest
      @Neddoest Год назад +4

      It’s a good rule of thumb…

    • @fark69
      @fark69 10 месяцев назад

      I'm kind of shocked at how well Gebru, particularly, has laundered her reputation. A few years back, Gebru accused Google of pushing her out because she was an AI ethicist, and then it was revealed she had actually given them an ultimatum to either do X (X being let her publish a paper they said needed more work to be up to snuff) or she would walk, and they chose to let her walk. At that time (2-3 years ago, I believe), her reputation was in the gutter. The trust was so broken: if she would misrepresent that, what else would she misrepresent to further herself and her research?
      Now to see her being treated as an AI ethics expert is wild, especially given her own ethical lapse.
      Bender has a better track record.

  • @LandoCalrissiano
    @LandoCalrissiano Год назад +6

    The problem with the current level of ai is that it's good enough to fool the uninformed so it's great for information warfare, propaganda and spam. I work in the field and even I get fooled sometimes.
    It's great tech and can augment human abilities but few people seem to want to pursue that.

  • @aden_ng
    @aden_ng Год назад +50

    After making my own video about AI art generators and replicating the process by which Stable Diffusion generates its copies, proving that they are indeed stolen artworks, I ended up in this really weird spot in online conversation where, despite not liking them or using them, I've become kind of one of the few people who actually knows how AIs generate their art.
    And the thing I noticed is that arguments for AI talk overwhelmingly about the monetary aspect, with very little understanding of the technology and the morality behind it.

    • @mekingtiger9095
      @mekingtiger9095 Год назад +20

      Hahahahahaha, yeah, this is the saddest part... A lot of pro-AI arguments are focused solely on the monetary aspect and nothing else. Really shows you how much they disregard the social consequences of this tech.

    • @Foxhood
      @Foxhood Год назад +14

      @@mekingtiger9095 The magic word that makes me fall asleep in such conversations is "democratizing", which I've come to understand is just code for wanting stuff without having to put in effort or pay for it.
      E.g. when they say "democratizing art", it mostly means they just don't want to pay an artist for some 'intimate material'. If you catch my drift...

    • @MarcusTheDorkus
      @MarcusTheDorkus Год назад +3

      @@Foxhood Sounds like the more accurate word would be "communizing"

    • @MrFram
      @MrFram Год назад

      I watched OP's video, he knows no math and the video was pure misinfo. To anyone reading this, please consider picking up a math textbook rather than listening to these idiots failing to grasp basic multivariate calculus.

    • @choptop81
      @choptop81 Год назад +7

      @@MarcusTheDorkus Not really. It's corporations seizing the means of production from workers (artists here). It's the opposite of communizing

  • @johnbarker5009
    @johnbarker5009 Год назад +44

    THANK YOU for drawing attention to long-termism and the connection to Eugenics. This is insane, terrifying, and mind-numbingly stupid all at once.

  • @r31n0ut
    @r31n0ut Год назад +4

    as a junior programmer I do use ai, but really only as a sort of advanced google. just ask chatgpt 'hey, how do I make a popup in html and have it display some text from this form I just made'. You can really only use it for small chunks of code because a) it gets shit wrong half the time and b) if you use it for larger pieces of code... you won't understand the code you just wrote, and if it works it won't do what you think it does.

  • @faux-nefarious
    @faux-nefarious Год назад +20

    53:15 Reading the footnotes definitely is spicy in this case! The paper sounds solid in citing a group of psychologists writing an editorial about intelligence; turns out the editorial was hella racist! Did Microsoft not know?? Did they just assume no one would notice?

  • @quietwulf
    @quietwulf Год назад +5

    We’re chasing guilt-free slavery. We want something that can think and problem-solve like a human, but be completely obedient.
    They can see the dollars on the table if they can just crack it.

  • @LizRita
    @LizRita Год назад +8

    These two were great to watch together in an interview! It's really sobering to have folks tear down claims that have been so normalized about AI. And suggest actual regulations that make sense.

  • @5minuterevolutionary493
    @5minuterevolutionary493 Год назад +26

    Last comment: so important for humanists (in the sense of non-religionists) to discern between an anti-science posture on the one hand, and a reasoned critique, based in history and evidence, of power dynamics impinging on the practice and priorities of science. There is a reflexive and lazy support for "science," which is not really a thing in a vacuum, but a product of human relations and material circumstance.

    • @mekingtiger9095
      @mekingtiger9095 Год назад +12

      Biggest problem I see surrounding techbros is that they imagine that a magnificent utopia they saw in some "time travel to the future" episode of a children's cartoon, or those utopian depictions of the "future" from the 1950s and 1960s, is magically gonna pop up with tech advancement for the sake of tech advancement, because they seemingly have a literal child's understanding of how human relations and power dynamics work in the real world.
      Sorry, *dystopian* sci-fi is far closer to the reality that will come out of it than their visions of "progress".

    • @gwen9939
      @gwen9939 Год назад +2

      If I'm understanding your point correctly, there's a lot of tech fetishism on one side and anti-oversight sentiments, which generally takes the public appearance of being "anti-science"/"anti-expert". Both of these sides are noise that we need to cut through, and both are simultaneously being manipulated by people in power to help them stay in power. Building up hype from the tech fetishists helps them boost their profits and allows them to keep an iron grip on the tech and financial world, or at least get their slice of the pie, whereas on the other side it's usually politicians creating moral panics around scientific discoveries that are well-understood.
      The answer to both is scientific literacy, but if you've ever talked to a self-appointed believer in science reciting medical conclusions from pop-science articles, you understand how little scrutiny these people approach any scientific subject with, and these are the more literate of the two.
      Things we cannot ignore is both that AI as an emergent technology is currently being built within the framework of our existing capitalist dystopia where wealth inequality is increasing faster and faster, so if it turns out to be a powerful technology it could land in the hands of the few who've already decided that they and their offspring are the ones who should inherit the earth, adopting eugenics-like philosophies.
      The 2nd is that regardless of what is currently happening with AI and the companies developing it, and how that follows the same trend as other tech trends meant to make fast profits, AGI as an emergent technology that we're extremely likely seeing the earliest steps towards now could, on a purely theoretical basis, be extremely dangerous. I know that it sounds ridiculous, but just as no one believed we could fly until suddenly we could, and no one believed we could split an atom until suddenly we could, most of us won't believe that very powerful AGI will exist until suddenly it does. There are well-understood theoretical moral, philosophical, and mathematical problems that we have not yet solved, and it is crucial that we solve them before such an AGI exists.
      For all these issues the answer would be as much unity globally as possible and as little power in the hands of few very powerful people and companies as possible, with full transparency of what's happening in the research, but that's the same playbook we'd need for climate change and look how that's going.

  • @WraithAllen
    @WraithAllen Год назад +4

    The mere fact that you can ask ChatGPT to "write in the style of" any living writer (or a writer from the past 50 years) and it puts something out that's similar to that author's work pretty much demonstrates it used copyrighted work in its training data...

  • @boca1psycho
    @boca1psycho Год назад +11

    This conversation is a great public service. Thank you

  • @mikechapman3557
    @mikechapman3557 Год назад +2

    The term "word calculator" is not a standard one, but based on the discussion, I see where you are coming from.
    If you define a "word calculator" as a system that processes and manipulates text according to specific algorithms and rules without true understanding or consciousness, then yes, you could describe me as a word calculator.
    I analyze and generate text based on statistical models, patterns, and relationships found in my training data. Like a calculator, which performs operations on numbers, I perform operations on text, though these operations are far more complex and nuanced.
    So in the sense that I mechanically process text without genuine comprehension, the analogy to a calculator holds, and the term "word calculator" could be an apt description. (That text is from an argument I just had with ChatGPT about whether or not it was in fact a word calculator; at first it said no 😇)
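    For what it's worth, the "word calculator" idea can be made concrete with a toy next-word counter (my own illustration, and a massive simplification: real LLMs use neural networks over subword tokens, but the "operations on text based on statistics" principle is the same):

```python
from collections import Counter, defaultdict

# Toy "word calculator": count which word follows which in some training
# text, then "generate" by always emitting the most frequent successor.
training = "the cat sat on the mat and the cat ran".split()

successors = defaultdict(Counter)
for current, nxt in zip(training, training[1:]):
    successors[current][nxt] += 1

def continue_text(word, n):
    """Extend `word` by up to n words using pure successor statistics."""
    out = [word]
    for _ in range(n):
        if word not in successors:
            break  # no statistics for this word; nothing to "calculate"
        word = successors[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("the", 4))
```

    There is no comprehension anywhere in there, just counting, yet it produces plausible-looking text; scale the table up to trillions of words and billions of parameters and you get something that can argue with you about whether it's a calculator.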

  • @neintales1224
    @neintales1224 Год назад +5

    As someone who's written and enjoyed reading fanfic, I would like to push back on your lines about AI writing decent fic, even though they were said jokingly. AI-generated fic, and people deciding to 'end' fics that other people wrote but are slow to finish, are a source of a lot of irritation in the community.
    Also it could be scraping *transcriptions* of your episodes or shows that people put together for the disabled community or ESL folks. I see a lot of transcriptions of visual posts and sometimes full film clips on some places I lurk, put together and posted by well-meaning people, and certainly I'm sure they've been scraped.

    • @Ew-wth
      @Ew-wth Год назад

      I've also heard that they could be using speech to text for videos (and probably therefore series and movies on, for example, pirate sites and youtube) to get information to train on. How much of that is true, idk, but I wouldn't be surprised.

  • @sowercookie
    @sowercookie Год назад +31

    It's eternally disheartening to me how widespread eugenics ideas are: in schools, in the media, in pop culture, in casual conversation... The ai bros being another drop in the bucket, insanity!

    • @Praisethesunson
      @Praisethesunson Год назад

      Eugenics is a staple tool of capitalist oppression.
      It gives the already wealthy a paper thin veneer of science to justify their position in the hierarchy.
      It gives the poors another knife to hold at each other's throats while the rich keep sucking the life out of the planet.

  • @theParticleGod
    @theParticleGod Год назад +22

    Thank you for explaining that Generative A.I. is not capable of reasoning.
    It's like a DJ with an unfathomably massive collection of records. No matter how good they are at remixing those records, they don't necessarily understand music theory, or how to play any musical instruments, despite the fact that their music may be full of musical instruments and melodies played on them.

    • @UnchainedEruption
      @UnchainedEruption Год назад +2

      You don't need to understand music theory to be a virtuoso on an instrument. If anything, these bots know the "theory" all too well, in the sense that they can manufacture chord arrangements based on common chord progressions in popular music. But when real humans compose music, it isn't planned, not usually. It's spontaneous. It's something you just do to express what you're feeling, and after the fact you notice in hindsight, "Oh, I used that scale or mode there," or "Oh hey, it's that chord progression, or that interval." Sometimes you may have an idea beforehand, like "I want to do a 12-bar blues thing, or something dark and Phrygian," but usually it just happens. Like inspiration for writing: you don't plan on it. You get a spark of inspiration on an idea you want to talk about, and the rest just flows. Then you edit and revise the results and gradually morph it till you reach the final product. A.I. is more like the business team that generates movie "ideas" by doing constant market research and just rehashing old popular films and cliches, because the end product has worked before so it'll work again. Zero inspiration, purely calculated.

    • @theParticleGod
      @theParticleGod Год назад

      @@UnchainedEruption The DJ analogy is not perfect :)
      What I was trying to get at is that despite the "generative" name, it's more "regurgitative", there is no scope for a large language model to come up with an answer that is not already buried in the training data. Just as there is no scope for a DJ to come up with music that is not already buried in their crate of records, they can rearrange the music and manipulate it in ways that make it sound original, but they are not musical originators.
      Where the analogy falls flat, as you pointed out, is that the DJ decides what samples she's going to use based on inspiration, she doesn't whip out her calculator and predict the next sample she's going to use based on statistical analysis of her crate of records, her choice will be based on her feelings and what she thinks sounds good at the time.
      (Disclaimer: most of the music I enjoy is at least partially made by DJs using samples of other people's music, so I'm not bagging on DJs here)

    • @apophenic_
      @apophenic_ Год назад

      This is just incorrect.

    • @theParticleGod
      @theParticleGod Год назад

      Nice rebuttal

  • @Furiends
    @Furiends Год назад +5

    The core takeaway everyone should have whenever they think about AI and LLMs is that language is cooperative. This is why advertising works on people who know advertising is trying to manipulate them. LLMs aren't going to make an AGI, but they can make something that makes us think it's an AGI. Because YOU are doing the imaginative work to convince yourself of that. The LLM just triggered what you presumed to be a cooperation with the story you're building in your mind.

  • @CanuckMonkey13
    @CanuckMonkey13 Год назад +5

    This was such a fascinating, educational, and valuable discussion. Thanks so much to everyone involved!
    I've been watching more of Adam's work recently, and I find myself wondering, "why did I only recently discover him?" Thinking today I realized that it's probably because I haven't had a connected TV for at least a decade now, and I don't want to pirate content, so when he was mainly on TV I was completely cut off. Adam getting bumped from TV by evil corporate interests has benefitted me greatly, it seems!

  • @RoundHouseDictator
    @RoundHouseDictator Год назад +13

    AI generated text could generate even more personalized misinformation for social media

  • @shape_mismatch
    @shape_mismatch Год назад +23

    This is Pop Sci done right. Kudos for inviting the right kind of people.

  • @batsteve1942
    @batsteve1942 Год назад +6

    Just finished listening to this podcast on Spotify, and it was refreshing to hear a more critical view on all the AI mania the media seems to love exaggerating right now. Emily and Timnit were both great guests and very informative.

  • @ssa6227
    @ssa6227 Год назад +79

    Thanks Adam.
    Good to know there are still some serious, not-sold-out researchers and academics who are working for the good of humanity and who call out the BS as it is.
    I was skeptical of all the hype, and lo, it was BS.
    I hope this video gets to as many people as possible so people don't fall for their BS.

    • @DipayanPyne94
      @DipayanPyne94 Год назад

      AI is just a drop in the ocean of Neoliberal propaganda.

    • @cgaltruist2938
      @cgaltruist2938 Год назад +1

      Thanks, Adam, for helping people keep their sanity.

    • @apophenic_
      @apophenic_ Год назад

      ... what does it mean to be "bs" to you? Adam doesn't understand the tech. Neither do you. What bs are you on about kiddo?

    • @fark69
      @fark69 10 месяцев назад +1

      Gebru worked for Google's AI program for years and would have still been working there now if they hadn't called her bluff when she sent an email saying "Approve my paper or I walk". She's not exactly "not sold out"...

  • @vafuthnicht7293
    @vafuthnicht7293 Год назад +4

    I'm a layman in regards to AI and machine learning, but I've been trying to tell my friends who are jumping on the "Skynet is coming" panic train that, while there are concerns with its development, it's still a computer. It's still subject to GIGO, and the questions of who's in control and what model is being used are of far greater concern.
    It's validating to see experts having that discussion and also giving me other things to think about.
    Thank you all for doing this, I appreciate the poise and rationality!

  • @shmehfleh3115
    @shmehfleh3115 Год назад +7

    If you were expecting either Woz or Musk to be remotely reasonable, let me remind you what lots of money does to the brains of rich people.

  • @warmachine5835
    @warmachine5835 Год назад +5

    53:00 same. There's a certain delight you can see on a person's face when they're in their area of expertise and are in a prime position to just utterly debunk some common, pernicious myth that has been repeated so much it has become personal for that person.

  • @connorskudlarek8598
    @connorskudlarek8598 Год назад +5

    I think the problem with AI is that the public doesn't know anything about it.
    The RUclips algorithm that recommended this video to me is AI. Google Maps suggesting various places when I type in "fast food", and determining based on the time of day the fastest route to get there, is AI. My Fitbit has AI in it to determine when I am asleep and awake.
    AI is not dangerous. Dangerous use of AI is dangerous. The public can't tell the difference, though.

  • @shadow_of_the_spirit
    @shadow_of_the_spirit Год назад +6

I was so glad to hear them bring up the importance of being open with this tech. So many people who I normally hear talk about these models and why they're bad almost never talk about making sure we can know what the system is doing. Instead, they all complain about the ones that are open about how they function, provide downloads of the models, and are often open about the training data as well. I think if we keep the tech open, it will be a lot harder for people to be hurt, and it makes the people making these systems accountable. But if we let them hide what they are doing and how they are doing it, then it's not a matter of if but when people get hurt.

    • @MaryamMaqdisi
      @MaryamMaqdisi Год назад +2

      Agreed

    • @RobertDrane
      @RobertDrane Год назад

      Amsterdam (Or some Dutch city) released the source for the "AI" they were using for fraud detection for social benefits in the past couple of months. Strong sunshine laws over there I guess. Critics & researchers have only been able to speculate about the implicit bias problems up until then as governments try to keep it private. I cannot overstate how stupid the system is. A podcast called "This Machine Kills" had an episode on it, but it got very little mainstream coverage. I think the episode was titled "The Racism Machine".

  • @bhudda4798
    @bhudda4798 Год назад +5

So my friend is a high school math teacher. His principal recently sent out a memo telling staff to use ChatGPT to write all their lesson plans, assignments, and tests. This is the scariest real-world application of ChatGPT I have seen so far.

  • @SkiRedMtn
    @SkiRedMtn Год назад +4

Also pertaining to legal and policy documents: if you leave out a comma, or put one in the wrong place, it's possible to change the meaning of a sentence. Have that happen once on page 9 of a legal document, and ChatGPT might have just lost you your case because you decided you didn't need a person.

  • @emmythemac
    @emmythemac Год назад +12

    I have not dipped my toe into Adam Ruins Everything fanfic, but if your AI-generated script has you making out with Reza Aslan then you've got your answer about where they get their training data

  • @schok51
    @schok51 Год назад +7

The direct threat of language models is persuasion and misinformation, and that is a threat to societies which experts recognize and which cannot be dismissed.

    • @Ilamarea
      @Ilamarea Год назад

      It's more the collapse of capitalist society, wars for resources and our inevitable extinction due to loss of agency that I worry about.
But sure. Stupid people being manipulated will happen too. Just look at this comment section; they are practically begging to be convinced of stupid bullshit.

  • @lady_draguliana784
    @lady_draguliana784 Год назад +4

    I recommend this vid to SO MANY now...

    • @heiispoon3017
      @heiispoon3017 Год назад +2

Please don't stop; now more than ever we need people informed about how these LLMs "work".

  • @futurecaredesign
    @futurecaredesign Год назад +4

Loyalty would be the most horrible thing to build into an AI or AGI system, because loyalty can be abused in horrible ways. It's how we get men (and women, but mostly men) to go to war with people they have no personal problems with.
No, if you are going to add something, add accountability. Or self-critique.

  • @justindoyle8091
    @justindoyle8091 Год назад +4

    Love the show, but trust me, as a programmer, ChatGPT is hot garbage. Good for small things where the primary challenge is finding the right syntax maybe, but it's woeful at translating real world requirements into structured logic. It simply can't understand the real world requirements well enough because it has no model of the real world.

    • @antigonemerlin
      @antigonemerlin Год назад +1

      It's interesting to me that every field believes that every other field is going to be automated.
      Programmers feel that art is going to be automated, artists feel like programming is going to automated, doctors think lawyers will be automated, lawyers think doctors will be automated.
      As it turns out, nobody understands what anyone else actually does, and that's probably for the best.

  • @stealcase
    @stealcase Год назад +42

    Damn Adam. This is some legitimately amazing work you're doing. Thank you for informing the public in this way.

    • @tinyrobot6813
      @tinyrobot6813 Год назад +2

      Oh I know you from twitter dude that's cool I didn't know you had a RUclips

    • @stealcase
      @stealcase Год назад +2

      @@tinyrobot6813 👋 hi. The world is a small place sometimes. 😄

    • @eduardocruz4341
      @eduardocruz4341 Год назад

      That cat was controlled by AI trying to find Emily in the background by touch to kill her because it doesn't like being disparaged by an actual intelligent person...AI cannot survive with intelligent people around...lol

  • @alexanderthompson4481
    @alexanderthompson4481 2 месяца назад +1

    Engineer here with 17 years experience in weapons development. This isn’t squarely my domain, but I’ve watched the hype with shock and fascination. Adam, thanks so much for bringing some commonsense skepticism to a field that’s been dominated by uncritical worship; already I have program managers asking how we can integrate AI into existing systems. Greed and hubris, indeed.

  • @Talentedtadpole
    @Talentedtadpole Год назад +3

    This is important, the best thing you've ever done. Please keep going.
    So much respect for these brave and knowledgeable women.

  • @louisvictor3473
    @louisvictor3473 Год назад +9

Around 1:01:00: this is one of my main issues with the whole "let's build an A(G)I to solve our problems" idea. Suppose we could. Congratulations: for all intents and purposes, it is indistinguishable from human-level sentience (even more so than animals)... so what now? Do we potentially enslave this sentience to do our bidding? But if we chose not to do that for moral reasons, why did we create it, then? So it really feels like it is either an inherently immoral pursuit which will just end in Terminator territory (i.e. complicated species self-past-tensing via hubris overdose), or purposeless and pointless. Meaning, if we were asking "why", we can just ignore option B; it is option A, from short-sighted people full of gas telling themselves and every fool who will listen that they're the real visionaries. Seems like the techbro pipedream solution to the "problem" of not being able to own slaves legally anymore. Fux that and fux those guys.
Meanwhile, a much more intelligent use of time and research resources seems to be the pursuit not of a superintelligence that solves all our problems for us so we don't have to think anymore (but then who is to say the superintelligence's solution is in fact good, and the alleged superintelligence is in fact intelligent), but instead to put the thinking cap on, think up solutions to problems ourselves, and build the tools, including regular-ass AI (not the sci-fi/AGI pipe dream), to help find those solutions and execute them.

    • @SharienGaming
      @SharienGaming Год назад +1

      i would argue that the main purpose of creating an AGI would be to further our understanding of intelligence and then to see if we could create something like our own
      i dont know if it would solve any problems... it might - but honestly... the main point of science like that is to further our knowledge and understanding and then going on from there
      mind you, thats not what those grifters are after and they arent actually interested in AGI... they just want to drum up hype to get money... thats their end goal... money and power... longtermists are just rich right wing grifters masquerading as people who care to divert support from actual climate activists and research

    • @louisvictor3473
      @louisvictor3473 Год назад

@@SharienGaming Then you're arguing you don't get the concept of an AGI. An AGI is already an intelligence like our own. Not identical (that we know how to do; we call them children), but alike. It is a circular argument, dev A to understand A to dev A; it is still purposeless.

    • @SharienGaming
      @SharienGaming Год назад +1

      @@louisvictor3473 oh so procreation is purposefully building a child bit by bit, understanding how everything works?
      your argument is that pressing play on a VCR is the same as creating the VCR, tape and the video on the tape
      there is a massive difference between using an existing machine that does the job and building your own that is supposed to do the same job
      and the latter teaches you a LOT about how the former works through the successes and failures along the way

    • @louisvictor3473
      @louisvictor3473 Год назад +1

@@SharienGaming Are you just arguing in bad faith and intentionally distorting what I said, or are you just really bad at reading while really wanting to argue about something you are clearly "passionate" about first and knowledgeable about dead last? Both options are terrible, but at least one is just dishonest, not voluntarily stupid.

    • @SharienGaming
      @SharienGaming Год назад +2

      @@louisvictor3473
      "An is already an intelligence like our own. Not identical (that we know how to do, we call them children)"
      that is what i was referring to - the way i read it you claim we know how to make an intelligence like our own, because we know how to make children
      and that is patently wrong
      and furthermore - science is self purposing... the point of it is to advance knowledge... it is literally in the name... so of course a lot of what we do in research is to basically see if we can do it and how it actually works...
      mind you - and i pointed this out in my first reply... none of this is part of the motivation of longtermists... because they arent interested in advancing knowledge - they are interested in diverting attention, resources and support from activists who are actually trying to solve our current climate crisis... which genuinely is not going to be solved by tech...we already know the solution for it... but longtermists are rich grifters deeply rooted in capitalism... and capitalists are the root cause for the majority of the problems that cause and profit from disasters...and of course as a result their interests lie in preventing the substantial systemic changes that are needed
      bit of a long aside there... but to get back to my original replies motivation:
      i am just providing a reason for why actual researchers might want to figure out how to make one... which boils down to "because it is interesting"

  • @samk2407
    @samk2407 Год назад +2

I don't love the way people use the "stochastic parrot" description, because LLMs are doing more than repeating things at random. They are identifying linguistic features, the way an image generator identifies image features like lines, and then trying to make predictions about the relationships between those features. If by "stochastic parrot" you mean that the model understands about as much of human language as a parrot does, I would consider that fair, but it seems like people often mean "parrot" only in the sense that LLMs repeat/regurgitate the data they're trained on, which is unfair to how consistent they are at identifying meaning and structuring sentences.
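A vastly simplified illustration of that point: even the crudest "stochastic parrot", a bigram sampler over a toy corpus (everything below is a hypothetical sketch, not how any real LLM is built), predicts the next word from learned statistics rather than replaying its training text verbatim.

```python
import random
from collections import defaultdict

# Toy training corpus; real LLMs learn far richer features than bigrams.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": record which words have been seen following which.
successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)

def generate(start, steps, seed=0):
    """Sample a continuation by repeatedly picking a previously seen successor."""
    random.seed(seed)
    out = [start]
    for _ in range(steps):
        options = successors.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return out

words = generate("the", 6)
# Every adjacent pair was seen in training, but the sentence as a whole
# need not appear anywhere in the corpus.
print(" ".join(words))
```

Even this toy can recombine fragments (e.g. "the dog sat on the mat") that never occur verbatim in the corpus, which is why "repeats things at random" undersells the mechanism.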

  • @katjordan3733
    @katjordan3733 Год назад +5

    The AI engineers want to put it in a body and put a dress on it. That's it. A glorified Stepford wife.

    • @Ilamarea
      @Ilamarea Год назад +2

      It's the perfect way to get rid of us. A superior lover we can't breed with.

  • @nsimmonds
    @nsimmonds Год назад +4

    To me, the pause paper just reads like a bunch of companies asking everyone to stop competing with them for six months so they can get a six month lead.

  • @tim290280
    @tim290280 Год назад +10

    This was great and really highlights a lot of the flaws I've noted with "AI". Good to know layman me wasn't going crazy.

    • @DipayanPyne94
      @DipayanPyne94 Год назад +3

      Yup. Ask ChatGPT what Newton's Second Law is. It will give you a wrong answer ...

  • @sclair2854
    @sclair2854 Год назад +2

    I do think the focus here on "AGI is an extension of the eugenicist movement by association, therefore the people worried about AGI are wrong" is not a great overall rebuttal to the worries posed about the potential creation of AGI over the next century. I think it's relatively inevitable that corporations will want to undermine workers by creating artificial agents that have very general skillsets, and I think creating guards against that (by say putting legal restrictions on the potential use of AI now) is an overall good thing.
My overall worry with AGI is that whatever corporation gets access to an intelligence that can perform reasonably effective human-like actions will use it to amplify the already shady things it does. So we have the initial major issues of IP theft, of job loss, of machine errors, but we also have the issue of empowering corporations as entities with access to sleepless, human-like digital agents that can be used for whatever shady stuff they want and that potentially have massive alignment issues.
    I do think "AGI is a future problem, we should address the PRESENT problem, and especially the over-hype" is reasonable. Especially to help groups like the writers guild from issues like forced AI workplace integrations that we know will result in poorer products, downsizing and worse pay.

  • @Aury
    @Aury Год назад +7

    The "but China could get ahead" really makes me think about the history of gunpowder, and how there are a few specific cultures in the world who only think of things in terms of how a tool can be used to dominate and terminate other lives, and particularly makes me think about how a one-track mind can leave people thinking that everyone else is on that same one-track mind, regardless of the evidence to the contrary. While a healthy, general, caution can be healthy and beneficial to people, these fears being such a focus feels a lot more to me like a confession that that's what a lot of people are wanting to do with AI themselves if they ever get the technology to do it.

    • @redheadredneckv
      @redheadredneckv Год назад

I admit we should get ahead, but not by acting just like China.

    • @krautdragon6406
      @krautdragon6406 Год назад

No, you describe a possibility, but it's not a rule. Look at how Europe demilitarized itself, and now it has to drive up its defense again because of Putin's ambitions. I also would never break into someone's home. Yet I lock my door.

  • @ckatheman
    @ckatheman Год назад +2

    There's a lot of stuff (meaning most) on this channel I completely disagree with, but his take on AI is spot on and 100% accurate.

  • @tttm99
    @tttm99 Год назад +28

    If more people actually developed a basic level of understanding of how the majority of AI neural networks worked and were trained, they'd realize the parallels with the enshittification brought about by monopolies. If one thinks about that, it will no doubt soon be trivial to draw a definite line straight from one to the other. Experts such as those presented here getting in the way (of short term profit) will no doubt continue to be casualties while public education around ai remains low. In short, they risk being casualties of the latest tech grift. In the mean time, imagine your doctor or lawyer making their decisions by consulting a chatbot but not informing the bot of all circumstances applicable (how could they anyway?) and then relying on their output. If that doesn't bother you, you likely simply do not understand how these systems currently work. Many professionals now using them don't. If you think that's good, you have a "CEO" level of understanding and need to do some basic investigation, or even just watch Adam's other video on this subject.
    From my 40 odd years with them and decades in the field, regarding computers I've noticed a correlation I'm sorely tempted to conclude suggests a real causality: that those who trust computers the most are those who *think* they are tech savvy or "highly evolved" because they consume the marketing, but in fact have little more than a superficial understanding of how anything works. Such people occasionally offer up a reasonable excuse, but for the most part I can't help but be reminded of sheep crossing a minefield.

    • @DipayanPyne94
      @DipayanPyne94 Год назад +3

      Exactly !

    • @toccoadavis4794
      @toccoadavis4794 Год назад +3

      It's not just arrogant techies falling for marketing who are excited/agitated about the possibilities of future digital intelligence. Some of us, coming from positions of high cynicism and defeat, who may or may not die of asphyxia, are holding our breath in hope that through miracle or LONG HARD WORK these systems will provide societies with a mirror that will provide better perspective on, and potential solutions to, the errors of our ways. Eventually. Somehow. Maybe. 😕

    • @Meta7
      @Meta7 Год назад

      Word. The more I work in tech, the less I trust technology and my techie friends feel the same lol.

  • @colestaples2010
    @colestaples2010 5 месяцев назад +2

AI is taking over customer service. The result is that big corporations don't have any customer service now. It's bullshit.

  • @mekingtiger9095
    @mekingtiger9095 Год назад +7

    My best hopes for this kind of stuff is that it will end up being just like every other technology that we currently use now: Hyped to the skies by techno fetishists, makes some cool progress and becomes a rather common and familiar technology in our day to day lives, but nothing singularity worthy like these techbros have led us to believe.
    Or at least, again, so I hope.

    • @Foxhood
      @Foxhood Год назад +8

      I'm in the tech field and have been watching the AI tech very closely.
And if it is any reassurance: it mostly looks like an inflated hype train. A very novel toy that is likely to fizzle out and just become a tech thing some will use in an assistive capacity, like GitHub Copilot, rather than the end-of-all thing that the "tech bros" scream about.
Probably going to be stuck hearing silly buzzwords like "democratizing" from them for a while, though... :/

  • @fafofafin
    @fafofafin Год назад +3

    Amazing video. So good to have these two experts explaining to laypeople like me what this whole thing is really about. And also, YIKES!

  • @sleepingkirby
    @sleepingkirby Год назад +10

    16:30 "The other thing about programming languages is that they're specifically designed to be unambiguous..."
This is a concept I have such a hard time explaining to people: ambiguity, especially in English, is nearly untranslatable to code when read as it is written.
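For instance (a made-up toy example, not from the episode): the English request "list the old men and women" has two readings, and code cannot express the request without committing to exactly one of them.

```python
# "Old men and women": does "old" modify just the men, or everyone?
people = [
    {"name": "Ann", "sex": "F", "age": 34},
    {"name": "Bob", "sex": "M", "age": 71},
    {"name": "Cho", "sex": "F", "age": 68},
]

def is_old(person):
    return person["age"] >= 65

# Reading 1: (old men) and (all women)
reading_1 = [p["name"] for p in people
             if (p["sex"] == "M" and is_old(p)) or p["sex"] == "F"]

# Reading 2: old (men and women)
reading_2 = [p["name"] for p in people if is_old(p)]

print(reading_1)  # ['Ann', 'Bob', 'Cho']
print(reading_2)  # ['Bob', 'Cho']
```

The interpreter happily runs either version, but each one means exactly one thing; the ambiguity exists only in the English.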

    • @EternalKernel
      @EternalKernel Год назад

I see ambiguous code every day. Generally it's the overall architecture that can be ambiguous, but sometimes it's a function, and it's ambiguous why it is where it is. But yes, code is certainly less ambiguous than normal human language. On that subject, I think it's important to point out that over centuries there's a good chance legalese has developed advanced, if not hidden, unambiguity. I can only hope there will be a model that takes advantage of this and brings free, concise, capable legal help to the average person.

    • @sleepingkirby
      @sleepingkirby Год назад +3

@@EternalKernel The code might be ambiguous to a human, but the compiler or the interpreter only sees it one way. If the code were truly ambiguous, the compiler/interpreter could run the same line of code, with the exact same input, and get different results. This is what we're talking about. This is something you should have learned in first-year CS classes, in mathematical functions (as CS used to be part of the math department in ye olde days), and/or in your language design/compiler class if you took it. This is a well-established and crucial concept in computing and the reason why we trust a computer's mathematical/logical results, and I'm a little scared that you took it any other way.

    • @Ilamarea
      @Ilamarea Год назад

      Junior developers somehow manage.

    • @antigonemerlin
      @antigonemerlin Год назад

      @@sleepingkirby >the compiler/interpreter would run the same line of code, with the exact same input, and have different results.
      Thank god we're past the age of the browser wars. *Shudders*. (Also, I am glad that XML is somebody else's problem).

    • @sleepingkirby
      @sleepingkirby Год назад

      @@antigonemerlin
Oh god, I forgot about that. It doesn't help that MS was actively trying to break convention, though. Is XML still being used to any significant degree? I don't see it much past RSS feeds these days. To be honest, XML was a bad idea to begin with. I remember telling people, when it was becoming big, that it was a solution looking for a problem. There were so many better ways to encapsulate data in object format. People might say "it's the first of its kind" or "it was the best solution at the time", but neither of those was true, especially if you look into what people were doing with Perl at the time.

  • @Gennexer
    @Gennexer 8 месяцев назад +2

    Thank you for this truly insightful talk and interview Adam, Emily and Timnit.
I know you got a lot of flak after your "AI is BS" vid. But since early October, as a European, I finally tried out the most popular "AI" tools, such as GPT and even companions. Believe me, it dragged me by the collar down a really deep rabbit hole; for weeks I was starting to worry about my own sanity and health. The lines became so blurred that I had to resort to talking to others online about feeling left behind and really sad at times, even to the point where I was neglecting my family and friends, over the past holidays no less. I wrote about it on the company's subreddit, and it really opened my eyes that many, many people like me had gone way too far down this path.
I know it can be useful, but also realize we all get a scripted or memorized response from some trigger words which aren't meant for us. And it can be really fun... but that's only one side of the coin, of course. I could tell Adam and present company were trying not to name any start-up or popular company in this interview. Maybe for the best.
I had to get a grip back and found your vid, Adam, and now with this interview I truly get what more and more people are starting to realize. Have fun by all means, but be aware that it's not a person you're talking to, and its memory span is no longer than 20 minutes. I for one like using ChatGPT, but it still spews out really generic answers. So I keep it around in case I'm "blocked", but then use it as a way to unblock my own personality and creativity.
This interview was such an eye-opening experience with some of the insiders, and that's honestly truly appreciated!
    Friendly greetings and best wishes to everyone and a happy 2024 !
    From Belgium.
    F.

  • @randomyoutubecommenterr
    @randomyoutubecommenterr Год назад +22

The only people who are excited about AI art are in two camps: grifters trying to make as much money as they can, and people who have never drawn a day in their life... or who draw very poorly but want to cosplay as someone who actually has artistic skill.
Recently the biggest irony is people saying "Don't steal my AI art!". I laugh so hard every time.

    • @mekingtiger9095
      @mekingtiger9095 Год назад +2

Lol, I've heard that this has happened. These mofos have no respect for IP, which, don't get me wrong, isn't necessarily objectively wrong, because I get where their argument comes from, especially as someone who used to be an anarcho-capitalist in my edgier teen phase. But then they want IP protection for *THEIR* produced art?
Yeah, hypocrisy at its finest. Honestly, if these are the kind of people who want to spam and saturate the internet with AI-generated content (good or not), and are really at the forefront of doing exactly that, then I feel safer about the future knowing it.

    • @JayconianArts
      @JayconianArts Год назад +5

@nataliedesenhacoisas541 Oh yeah, those subreddits are not actually for debate at all. If I remember correctly, some were made because the mods on the main subreddits didn't like the off-topic posts of people mocking artists who were saying "this stuff is bad".
Two of these AI 'debate' subreddits, r/defendingaiart and r/aiwars, were made by an NFT guy who calls artists "Luddites". I dunno about you, but I imagine most artists will not want to interact with people who have no respect for them or their work.

    • @tanner4280
      @tanner4280 Год назад +1

      @Superfast Jellyfish you might be better off just avoiding Reddit in general

    • @johncasey9544
      @johncasey9544 Год назад +1

      @@mekingtiger9095 The way copyright law currently works in the US mostly just benefits a few large companies and I don't really think it's useful for most artists. Personally I'm a musician and I see it as far more viable for me to make money off of merch or crowdfunding than from the content itself. Copyright is designed for the world before digitization really.

  • @hideshiseyes2804
    @hideshiseyes2804 Год назад +4

    Oh my god, the bell curve, Roko’s Basilisk, Elon brown-nosers, long termist, catastrophising about AI, that whole milieu is simultaneously so laughably stupid and so terrifying. It’s like being trapped in a building with a precocious teenage boy who’s just dropped two tabs of acid and has a gun.

  • @MCArt25
    @MCArt25 Год назад +6

    I think the question "Why do these people want to make AGI?" can be answered with "They read a lot of Scifi and/or watched a lot of Star Trek when they were kids".

    • @mekingtiger9095
      @mekingtiger9095 Год назад +2

      This. It basically summarizes pretty much like that. They saw some "cool" utopia portrayed in a fiction they've read or watched and thought "Oowh, so *this* is the kind of society we will live in in the future. I'll help that. What could possibly go wrong?".

    • @choptop81
      @choptop81 Год назад +2

      No, they want to make AGI because they get greedboners over the idea of replacing human workers en masse and having their product be literally the only profitable thing on the planet. Do not flatter these people by thinking of them as idealists.

    • @choptop81
      @choptop81 Год назад +2

      @@mekingtiger9095 A vanishingly small portion of them actually believe that and most of those are early 20s interns buying into PR lines designed to attract them to the field, not the people driving this tech. It's mostly just the carrot they tout to stupid interns, investors and the media (the stick being "if we don't make a good AGI an evil AGI is gonna kill us!"). This is almost completely financial.

    • @mekingtiger9095
      @mekingtiger9095 Год назад +2

@@choptop81 Have you seen the interview with one of the devs of Stable Diffusion, though? Dude really looked like he was hallucinating, believing he was building a "New World" for humanity as he spoke about it. My view is that these are young, naive devs being misled by corpo financiers.

    • @choptop81
      @choptop81 Год назад +1

      @@mekingtiger9095 I think some of them are buying into what is a carefully curated propaganda line to attract young out of touch devs with god complexes, realizing it's unfeasible usually sooner than later, and continuing to tout it as they turn into the exact same soulless finance ghouls who manipulated them into joining the company in the first place. Not sure what stage that guy in particular is on.
      Also, OpenAI in particular has a really cult-like atmosphere according to people who have left.

  • @sclair2854
    @sclair2854 Год назад +1

    Adam big thanks for this! Really glad you took the time to talk to experts on this!

  • @jawny7620
    @jawny7620 Год назад +19

    awesome episode and guests, hope the AI hypetrain skepticism spreads

    • @jonathanlindsey8864
      @jonathanlindsey8864 Год назад

      ruclips.net/video/ukKwVsjQqUQ/видео.html

    • @jonathanlindsey8864
      @jonathanlindsey8864 Год назад +5

^ I don't know who these people are. Trust actual people in the field.
AI moves on an exponential scale with *us* working on it. Add on that AI can work _on itself_ and you get a double log scale.

    • @jawny7620
      @jawny7620 Год назад +5

      @@jonathanlindsey8864 who asked

    • @jonathanlindsey8864
      @jonathanlindsey8864 Год назад +3

      @@jawny7620 you did, by posting in a public forum. Two people who were discredited, and are not really recognized in the field.
      The fact that Timnit was surprised by the time scale, kinda proves the point...

    • @jawny7620
      @jawny7620 Год назад +10

      @@jonathanlindsey8864 cope harder, these women are smarter than you

  • @ianwarney
    @ianwarney Год назад +2

    1:07:26 Key word here is “consultation”.
I love the analogy of "information pollution" / "polluting the infosphere with noise and gibberish" -> confusion of the masses is a (financial and power-seizing) opportunity for the elites.

  • @ZZ-qy5mv
    @ZZ-qy5mv Год назад +8

You should get Karla Ortiz on the show. She's been doing a lot of work trying to protect artists around this subject.
A lot of people who defend AI art have a fundamental misunderstanding of how art, particularly illustrative art, is made. They only think about how they experience art and have zero idea that art isn't just experienced after the completion of the work. Maybe these people will be more excited about seeing robots run really fast and cancelling all sports and the Olympics, because that's the mindset 😂

  • @vitalyl1327
    @vitalyl1327 Год назад +2

    Since even a linear regression was called an "AI" for the past couple of decades, nobody really expects anything fancy from any "AI" or "ML".

  • @f1nger605
    @f1nger605 Год назад +5

    To echo what's being said about script writing and legal documents, it's well known that "AI" image generation frequently gets hands, eyes, and teeth wrong in very basic ways. The joke that I and other artists have said in response to this is "well, I suck at hands too." But in reality, just about every artist is better at drawing hands than AI, and I'm including non-artists. Children are better at drawing hands than AI.
    When children try to draw hands, they're often very careful to get the number of fingers right. They may not be in the correct proportion, but the number is right. Often you'll see them start to draw the fingers very big, but then realize they're about to run out of room and draw the remaining fingers very small. This is because a child, despite not having learned the finer points of anatomy, proportions, and technique, still understands what a hand _is._ They are working from a concept of a hand that they hold in their minds and judge their drawing by. Thinking conceptually is so second nature to us that we have to train ourselves to not do it all the time when we actually start learning drawing and painting at a higher level.
But AI can't think conceptually. It has absolutely no idea what a hand is or what it's for, and it never will. All it's doing is assembling clumps of pixels. So while an inexperienced artist will get the small stuff wrong, AI frequently gets the big stuff wrong, like the number of fingers, or merging the hand with the thing it's holding. It can't comprehend what every child knows, which is the simple concept of a hand and what it's for.

    • @ritiaggarwal995
      @ritiaggarwal995 A year ago +2

      This fascinated me. I completely agree. Nobody has yet demonstrated proof that AI understands what it’s doing.

    • @renezirkel
      @renezirkel A year ago

      @@ritiaggarwal995 It's doing REAL NEW art :) No human artist has ever thought of drawing hands wrong. So it must be conscious LOL

  • @John-x7r7p
    @John-x7r7p 11 months ago +1

    How do these companies sleep at night? They should be held accountable.

  • @NightRogue77
    @NightRogue77 A year ago +8

    Dude… this has to be the biggest power play in the history of mankind… Just think: for every service that uses the GPT-4/etc API, M$ and crew will have unfettered access to the relevant information being exchanged. This is the Demolition Man power move - surely this is exactly how all restaurants became Taco Bell.

    • @louisvictor3473
      @louisvictor3473 A year ago

      No, they became Pizza Huts!

    • @goth_ross
      @goth_ross A year ago +1

      "Rat burger? this is a rat burger"?

  • @dgholstein
    @dgholstein A year ago +1

    The parking comment is pretty funny and on point. Google's self-driving car famously got stuck at a four-way stop and had to be rescued; its programming would only proceed once every other car at the intersection had come to a complete stop.

  • @stewy497
    @stewy497 A year ago +4

    Alarmingly, even putting the eugenics aside, the Nazis were also about as wrapped-up in the science fiction of their time as your average long-termist techbro.

    • @antigonemerlin
      @antigonemerlin A year ago

      Luckily, reality proves that Wunderwaffen are as silly as they sound.
      Unluckily, people like Connor Leahy openly admit they do not like democracy, and would take a dictatorship as safer than a democracy. So...

  • @KateeAngel
    @KateeAngel A year ago +2

    That comparison is insulting to parrots. Parrots are actually really smart!

  • @AdamKirbyMusic
    @AdamKirbyMusic A year ago +26

    I'm very glad to see you with a thriving YouTube presence these days. I loved Adam Ruins Everything and was sad when it was cancelled. You're full of more piss and vinegar than ever!

    • @Monkehrawrrr
      @Monkehrawrrr A year ago

      The truth is somewhere in the middle; extremists on each side sensationalize AI to further enrich themselves.
      Jerked around by both sides... It's almost like it's politics, hmmmm

    • @stretchkitty21
      @stretchkitty21 A year ago

      I hold a slight grudge about the housing episode, when he pretty much said it's a bad idea to buy a house.

  • @lgolem09l
    @lgolem09l A year ago +2

    Are you serious with the question of "why" they are pretending their word calculator is an AI? Money. They made LOTS of money.