Is AI A Bubble?

  • Published: 26 Sep 2024
  • Go to ground.news/husk to understand how context shapes history
    and the different ways we interpret current events with Ground News. Save 40% on their unlimited access Vantage plan with my link.
    Is AI a bubble? Nvidia, Microsoft, Apple, Google, and everybody in between don't seem to think so. But I kinda do.
    I've been skeptical of a lot of tech in the past few years, and while AI has certainly shown promise over the…uh, few decades it's been around, it's this current iteration of the technology that I have some doubts about.
    Today I'll be discussing why I feel this way, and hoping that this video doesn't age like cottage cheese.
    That is to say, stinky.

Comments • 2K

  • @knowledgehusk  3 months ago  +203

    Dive into the historical context behind today’s headlines and deepen your
    understanding of current events with Ground News. Try Ground News today and get 40% off your
    Vantage subscription: ground.news/husk

    • @jonahblock  3 months ago  +2

      Not media hype, techbro and investor hype

    • @willg3220  3 months ago  +9

      Put that ad at the end. Felt like it went on for days. I'll trade you 👍

    • @Technobitz  3 months ago

      Shh you aren’t supposed to know that

    • @dzhang4459  3 months ago

      @@willg3220 get SponsorBlock

    • @BillClinton228  3 months ago  +2

      Does anyone remember big data or blockchain? If your business wasn't doing something on the blockchain in 2018 you weren't considered "cutting edge"... nowadays everyone is shoving the term AI into everything. It's just another tech fad...

  • @ronoc9  3 months ago  +2917

    The word "potential" is doing a lot of heavy lifting when it comes to AI.

    • @stedwards311  3 months ago  +102

      Lifting the entire industry AND its hype machine...

    • @FantasmaNaranja  3 months ago  +130

      i swear every time i bring up that AI shouldn't be as widely used as it currently is, because it's simply not that serviceable yet, AI bros immediately jump on me to tell me that "it's got potential bro" and that i shouldn't blame people for firing all their employees and then going bankrupt when their AI scheme doesn't actually work

    • @HeyIsaiddontlookwtfwhatiswrong  3 months ago

      Potential for abuse

    • @exeggcutertimur6091  3 months ago  +27

      Moore's law has been dead and buried for a while now. I'm skeptical general purpose AI will ever be digital.

    • @TheManinBlack9054  3 months ago  +11

      @@FantasmaNaranja ok, I'm going to play the role of AI bro and say that you must be proactive and think of the future, not only of the current moment, as it's not very smart to never plan for the future; it'll eventually come.

  • @Nestor_Makhno  3 months ago  +2731

    "Real stupidity beats artificial intelligence every time" - Terry Pratchett

    • @willg3220  3 months ago  +44

      Interesting. I'd say depends on which A.I. and which stupid

    • @Web720  3 months ago  +83

      Artificial Intelligence when Natural Stupidity shows up.

    • @TheManinBlack9054  3 months ago  +17

      I'm afraid not. Any intelligence wins over any stupidity. That's a humorous quote, first and foremost.

    • @USER-vb7ro  3 months ago  +9

      AI can be stupid too.

    • @pootan9365  3 months ago  +23

      @@willg3220 Weaponized autism? is there any AI that can beat that?

  • @kris1123259  3 months ago  +733

    "AI could make our jobs easier". The problem with that is that as far as bosses are concerned they are going to use that as an excuse to pay you less. Productivity will go up but pay will go down

    • @ajohndaeal-asad6731  3 months ago  +73

      that’s literally happening now

    • @CrimsonMagick  3 months ago  +105

      Sure. The fundamental problem is capitalism, the issue isn't unique to AI.

    • @TheAweDude1  3 months ago  +27

      That has literally been happening for hundreds of years.

    • @anthonyvillanueva5226  3 months ago

      I hate that the decisions are being made for us by people who just want to cut corners

    • @ajohndaeal-asad6731  3 months ago  +8

      @@CrimsonMagick Exactly

  • @fiddleriddlediddlediddle  3 months ago  +1533

    The only thing AI has done is ruin Google Images results.

    • @Reiikz  3 months ago  +113

      ikr?
      it's so annoying that now it's impossible to find genuine images.

    • @thehammurabichode7994  3 months ago

      @Reiikz Google Search / the YouTube search bar have been shockingly, AGGRAVATINGLY awful for so long, I can't believe it.

    • @denno445  3 months ago  +97

      yeah, also shitty AI artworks at markets and on business cards and signs

    • @justinwescott8125  3 months ago

      I know this channel is all about AI hate, but this is the most insane comment I have ever seen. The following 2 paragraphs are from the journal Science, Vol. 370.
      Artificial intelligence (AI) has solved one of biology's grand challenges: predicting how proteins fold from a chain of amino acids into 3D shapes that carry out life's tasks. This week, organizers of a protein-folding competition announced the achievement by researchers at DeepMind, a U.K.-based AI company. They say the DeepMind method will have far-reaching effects, among them dramatically speeding the creation of new medications.
      "What the DeepMind team has managed to achieve is fantastic and will change the future of structural biology and protein research," says Janet Thornton, director emeritus of the European Bioinformatics Institute. "This is a 50-year-old problem," adds John Moult, a structural biologist at the University of Maryland, Shady Grove, and co-founder of the competition, Critical Assessment of Protein Structure Prediction (CASP). "I never thought I'd see this in my lifetime."
      And I could name 100 other ways that AI is currently improving the field of medicine, and improving the lives of people with physical and mental disabilities.
      And I personally have benefitted from it. I have a grandmother who only speaks Spanish, so I've never been able to talk to her directly before, but now I can using ChatGPT. We both open the app on our phones, and it will translate what we say and even read it out loud.
      So, while I know you're angry on behalf of creatives, think for a second that maybe this YouTube channel has its own goals, and its own reasons for spreading negative propaganda that's FULL of mistakes, btw.

    • @himalayo  3 months ago  +12

      that's because you only know about generative AI.

  • @Cy_Guy  3 months ago  +1615

    I built an Excel tool that is just a couple dozen if statements and convinced my work that it was AI. I had a requirement to show that I was complying with the rule that we had to use AI.

    • @remyllebeau77  3 months ago  +267

      And then they fire you hoping that AI will replace you. 😆

    • @jonescity  3 months ago  +248

      @@remyllebeau77 They might be dumb enough to do that and he'll have the last laugh. Code (just like A.I.) requires maintenance by humans...

    • @arxzhh  3 months ago

      @@jonescity (for now)

    • @2Potates  3 months ago  +6

      lmao

    • @TheManinBlack9054  3 months ago

      @@jonescity ideas like these are very interesting to me because if the tech really is going nowhere and its just another fad and a gimmick then companies that replace their workers with AI will soon find out that its not performing as well or at all and that they are just wasting money and are being outcompeted by more efficient companies that didnt do that and then they'll either have to bring back the people again or go bankrupt. So there is basically no real problem with AI replacing people, at the end of the day.
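The "couple dozen if statements" gag in the thread above is easy to picture; a minimal sketch in Python (all field names and thresholds are invented for illustration, not anything from the actual tool):

```python
def ai_risk_rating(amount, region, is_new_customer):
    """'AI' that is really just a pile of hard-coded if statements."""
    if is_new_customer and amount > 10_000:
        return "high"
    if region == "offshore":
        return "high"
    if amount > 50_000:
        return "medium"
    if is_new_customer:
        return "medium"
    return "low"

# Demo: the "model" confidently classifies a transaction.
print(ai_risk_rating(60_000, "domestic", False))  # prints: medium
```

The joke works because, from the outside, a deterministic rule table and a "model" can look identical to a compliance checkbox.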

  • @AmitSingh-vt6ws  3 months ago  +494

    As a software engineer, I've used LLMs many times to quickly get some boilerplate code or some simple scripts. But at this point I've been burned by these LLMs so many times I don't trust a single generated statement. The thing is, LLMs are good at writing elegant code, so it kinda tricks you into believing the code is correct but you can never trust it.

    • @DandeDingus  3 months ago  +78

      this, so much. like it could help but its so error prone that you cant trust anything that it spits out before double checking which defeats the entire purpose

    • @nomms  3 months ago  +33

      @@DandeDingus as a sysadmin who needs to code a bit, but not often, they're really solid. I'm better at tweaking and troubleshooting existing scripts than writing from scratch. I don't know the general patterns for getting complex tasks done with code. GPT generally gives me the template I need to get something done. Saves me a decent bit of time. It's also handy at explaining chunks of code I don't understand.
      But yeah, it hasn't made programming effortless by any means, just mildly more bearable lol

    • @fullsendmarinedarwin7244  3 months ago  +5

      The latest version 4O seems more reliable for writing code that actually runs, but it’s not great at following instructions sometimes

    • @darksidegryphon5393  3 months ago  +25

      "It's shiny bullshit, but still bullshit."

    • @duckpotat9818  3 months ago  +4

      @@DandeDingus depends, I work in biology including simulations which are often made of several simple modules connected in complex ways (that a biologist would know, not programmer). Getting ChatGPT to write the modular bits of code then just checking if everything fits together is much faster than everything from scratch.
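The workflow several replies in this thread describe, never trusting generated code without double-checking it, can be sketched like this (the "LLM-written" helper and the brute-force reference below are hypothetical stand-ins, not real output from any model):

```python
def llm_chunks(xs, n):
    """Split xs into chunks of size n, written the elegant way an LLM might."""
    return [xs[i:i + n] for i in range(0, len(xs), n)]

def reference_chunks(xs, n):
    """Dull but obviously-correct reference implementation used to verify it."""
    out, cur = [], []
    for x in xs:
        cur.append(x)
        if len(cur) == n:
            out.append(cur)
            cur = []
    if cur:
        out.append(cur)
    return out

# Cross-check the generated code against the reference on several inputs
# before trusting it; this is the "double checking" step.
for data in ([], [1], list(range(10))):
    for n in (1, 2, 3):
        assert llm_chunks(data, n) == reference_chunks(data, n)
```

Here the generated version happens to pass; the point is that the check, not the code's elegance, is what earns the trust.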

  • @kalliste23  3 months ago  +303

    A good example was how everything was "nano" not so long ago. Carbon nanotubes were going to be used to build everything.

    • @deadturret4049  3 months ago  +25

      The ipod nano

    • @matheussanthiago9685  3 months ago  +5

      Yeah that really didn't go anywhere

    • @thehammurabichode7994  3 months ago  +23

      @@matheussanthiago9685 Who else was excited for graphene, as a youngin'?

    • @dewyocelot  3 months ago  +27

      I mean, the issue is assuming these things happen overnight. Material science is a long term technology. We *will* see awesome things from carbon nanotubes, it’ll just be like, ~10-20 years from now. I feel like the same is true of AI. People in general think when they hear about new science/technology that that means it’s ready to be everything people have made speculations of, when it’s more of “we’ve figured out we *can* do this, now we have to figure out how to do it quickly, cheaply, and effectively.”

    • @kalliste23  3 months ago  +4

      @@dewyocelot there are plenty of computing tech that went nowhere or hit a wall. Superconducting Josephson Junctions for instance have an important niche but they were expected to be the future of computing back in the sixties. CRT had a long and storied history and then reached the limits of usefulness. And so on.

  • @redkaufman892  3 months ago  +1017

    Genuinely I want the option to turn off the ai shit sometimes. It’s just annoying and gets in the way of things I’m actually trying to do. I don’t need a third grader to attempt what I want to do before I fix it when I can just do it myself and save a headache.

    • @Grizabeebles  3 months ago

      Think of how stupid the average person is, and realize half of them are stupider than that.
      -- George Carlin

    • @Chord_  3 months ago  +64

      Right?! Search results are so useless now.

    • @matheussanthiago9685  3 months ago  +85

      Every time I can detect a yt channel blatantly using AI in their thumbnails, their text, their voice, etc.,
      I hit "do not show me this channel again".
      I wish everything else had that option

    • @cygnusghedepereu6885  3 months ago  +22

      It’s only going to get worse, dead internet no longer a theory

    • @Capitalisst  3 months ago

      First draft second draft.

  • @DeadBaron  3 months ago  +797

    This is "the cloud" all over again. Which just means your data is hosted by a third party server. But the term "the cloud" caught on and I hate it so much

    • @gvi341984  3 months ago  +11

      The cloud did ruin an entire industry?

    • @GoldenBeans  3 months ago

      me having to explain to tech illiterates that no, your pictures are not stored in actual clouds in the sky, they are stored on somebody else's computer somewhere else in the world

    • @rheokalyke367  3 months ago  +118

      Unlike AI, file sharing on a third party server is actually pretty useful.
      Mostly for handling projects together within companies. In fact it's so useful that it was a widely used system even before "The cloud" was a thing!

    • @extremeencounter7458  3 months ago  +22

      Eh, just an easy way to describe data as non-local

    • @sidbrun_  3 months ago  +11

      Not sure if you mean online storage or "cloud computing"? Like game streaming, running processes on a server, and not really owning a computer and instead streaming it all. To be fair those are all integral parts of most AI models right now, nobody's fully using "cloud computing" but instead it's a lot less obvious and behind the scenes. Online storage is pretty useful to me as a backup and for sharing files, I use it all the time.

  • @mekhane675  3 months ago  +1226

    I kid you not, my washing machine was advertised as having "AI fabric detection."
    Edit: fixed a misspelling and some grammatical errors

    • @zwenkwiel816  3 months ago  +134

      My microwave has "AI" as well. Not quite sure how it works. It seems to just pick some settings at random..

    • @RodolfoGeriatra  3 months ago  +148

      I purposefully avoid products with this level of shitty advertising

    • @chrisyoung1576  3 months ago  +57

      I misread this as fanfic detection

    • @jess648  3 months ago  +8

      this was happening before that became a trendy topic

    • @hherpdderp  3 months ago  +41

      My microwave has an electromechanical timer.
      It's AI. Analogue Intelligence.

  • @rumplstiltztinkerstein  3 months ago  +1186

    Blackrock is to blame for "AI" being included in anything. Their software rewards companies working on AI 90x more than other companies right now.

    • @Exigentable  3 months ago

      that honestly isn't surprising. Blackrock is seemingly behind all the trendy crap corporations start hamfistedly ramming through.

    • @nicoliedolpot7213  3 months ago  +156

      you mean every company; market-wise, AI is the current buzzword, like how EVs, cloud compute, and the .coms in the 2000s were hyped up
      edit: also mp3 players and smartphones were shoved into everything

    • @josedorsaith5261  3 months ago  +153

      BlackRock ruined America. Scrump's podcast covering their history was fascinating

    • @mapache-ehcapam  3 months ago

      Blackrock has to be nuked from orbit

    • @Slickjitz  3 months ago

      Oh look another brain dead person who blames big scary Blackrock for all of the world’s woes….

  • @rustymustard7798  3 months ago  +549

    Here before the kind woman at the bank tells me "I'm sorry Rusty, we can't process your transaction, the AI is down."

    • @cajampa  3 months ago  +43

      This is already happening.
      The bank issuing the charge card I use has blocked my charges several times, even when I have money in my account, because they started running some algorithm that limits big purchases made within too short a window compared to how much money you used to have available on the account. That is, not the actual money, but past money. It is crazy annoying. But that card has zero fees on anything, including transfer fees, so I take it and jump through the hoops to be able to use my own money

    • @thesockpuppetguy7626  3 months ago

      So, banks use AI for various things. The ATMs you use? Guess what? It has AI in it as well. They use algorithms to determine purchasing patterns based on purchasing history and predictors such as influx of funds into an account. Have you ever gotten a call from a banker after you had a 5× higher than normal deposit into your bank account? Guess what? An algorithm determined that based on history and other factors, you're about to purchase a house/car/horse/small human child to make small arms, etc.
      The scary thing? It's VERY RARELY WRONG.
      How do I know? I work in a bank, and I have to periodically make these calls. I can count on one hand the number of times the call I was told to make had to be pivoted to a different call because the algorithm was wrong.
      But hilariously, when it comes to actual purchases, it is wrong. A fucking lot.
      I can't tell you how many people come in and are like "I went to buy x and it won't go through" and it turns out that our algorithm was like "Woah there buddy, you normally shop at Target and now you went to Walmart. That's obviously fraud," and it blocks the card.
      So it's a weird thing. But I live it. Everyday

    • @rustymustard7798  3 months ago

      @@cajampa I use an old school local independent bank run by good people. Over the years the 'system' was down occasionally, mainly due to internet outages. On those occasions they would grab a pen, paper, and calculator and keep things running smoothly. The manager is a smart, competent woman and so are her team so i trust them more than the big corpo bank with big corpo policies.
      One time scammers tried to drain my account and within minutes the bank manager was personally calling me with a new card number to use.
      If i didn't have this bank as an option, i'd just keep my money in a coffee can at home and fill up a gift card or prepaid debit to buy something rather than deal with these BS scam corpo banks.

    • @Capitalisst  3 months ago

      "Sorry our AI gave your money to someone else who managed to convince it that they were you. We're working with the police to resolve this blatant theft on that human beings part and will have to tweak our AI to ensure that doesn't happen again. Oh your money will be transferred back when the investigation is done it's still an active crime scene technically speaking."

    • @w花b  3 months ago

      They already use AI to detect fraud and all that. It's one step away from being implemented into your bank account.

  • @matdombrk  3 months ago  +452

    When people say that an LLM is "hallucinating" I think they mean specifically that it has synthesized totally new information that is false, not just that it is wrong.

    • @nicholasobviouslyfakelastn9997  3 months ago  +185

      Humans rarely write down that they don't know something. If you don't know, you just won't respond to a forum post, or you won't write a book. So the AI has a huge bias towards answering confidently, because almost all human text is very confident.

    • @deathsyth8888  3 months ago  +97

      They also don't understand sarcasm, exaggeration, fiction, satire, or outright lies (among other things), which any average human being who has grown up in a society and interacted with other humans can tell apart (for the most part).

    • @someghosts  3 months ago  +27

      @@nicholasobviouslyfakelastn9997 that is such a good way of explaining it

    • @TheManinBlack9054  3 months ago  +12

      @@deathsyth8888 idk, I think you're wrong; a lot of the time they can and are able to (unless you're talking about sarcasm in text, which would be hard for humans too, since it's entirely tonal and you can only use theory of other minds and the extended context to guess).

    • @Grizabeebles  3 months ago  +4

      Serious question: is the current generation unfamiliar with the term "bullshit artist"?

  • @bobnolin9155  3 months ago  +89

    Commercial artists were all freaking out about Midjourney and DALL-E, etc. But even the general public can recognize the "AI look". I'm still amazed that computers can mimic that particular style so well. It must be the "average" style of all those fed into it.

    • @matheussanthiago9685  3 months ago

      There are accounts of teenagers calling all "AI art" boomer art because of all the grandmas back on Facebook falling for the AI images of Jesus.
      If it was already hard to make image-generative AI profitable before, now it's just truly joever.
      "AI art" has entered a feedback loop of being associated with scams, which makes people more wary of it, which makes the average joe not trust or like it, which makes the companies double down on scams to squeeze out any profit.
      Rinse, repeat

    • @joelrobinson5457  3 months ago  +40

      I'm still pissed off for the artists that got robbed, a piece of them ripped away and sold

    • @stagnant-name5851  3 months ago  +5

      @@joelrobinson5457 It's their fault for releasing it on the internet. Because when you do, anyone can do anything with your work and you can't do anything about it.

    • @joelrobinson5457  3 months ago

      @@stagnant-name5851 someone breaks into a business you're involved in and steals your info...

    • @HOLDENPOPE  3 months ago  +34

      @@joelrobinson5457 It's not robbery, but forgery. Their work is being copied, not directly taken from them, not unless the AI is copyright-striking them for some reason.

  • @cyberfutur5000  3 months ago  +255

    Board meetings all over the globe: "But does it do the internet?" "Even better, it can do AI". "Take my money"

    • @badrequest5596  2 months ago  +8

      i can already imagine a hearing similar to the one about TikTok: "does the AI use the wifi?"

    • @jurassicthunder  1 month ago  +1

      why are people with a lot of money stupid af?

    • @qoph1988  1 month ago

      As somebody in those board meetings let me tell you it is even dumber than you can possibly imagine. Yes it is a bubble. If the tech world is super excited about anything, it is 100% a bubble. These people are legit brain damaged and have more money than God, it's the dumbest fakest thing ever

  • @NamelessGamer29  3 months ago  +166

    My main takeaway from watching the tech space over the past couple years is that if your product or service takes more than ten seconds to explain to the average person it will never become mainstream

    • @Pheicou  2 months ago  +3

      I don’t know if it counts as tech, but what about sports like baseball or tabletop games like chess?

    • @nicenice4970  2 months ago  +7

      @@Pheicou How would games count as tech? This person isn’t saying that anything that can’t be explained quickly is useless they’re saying that if you’re pitching a technology and you can’t actually explain what it does and how it will help people easily it’s useless.

  • @ML-qe7ml  3 months ago  +156

    Having worked with AI my guess is this:
    * AI and machine learning more generally are not (completely) a bubble.
    * Generative AI very much *is* a bubble.

    • @unkarsthug4429  3 months ago  +22

      I would agree it is currently a bubble in the investment sense, but there is enough of an open source community that I think generative AI will be sticking around. After all, I use it for hobby projects, and it works well. (Also, image generators can be used to make custom porn, and for better or worse, that's the hallmark of an open source technology that will have people motivated to contribute. I find it a little depressing, but those are the people who solved the hand problem, and the furries who used to spend egregious amounts of money commissioning art are developing a way to not have to do that anymore.)
      Sure, if you write an entire codebase, it won't do an amazing job, but if you just need a bash script, it can write it in 30 seconds and usually does exactly what you want with no issues.
      So it has a place, and that place isn't going anywhere. It doesn't have to be AGI to stick around. Just a more powerful tool than the one we used to have, and it has already fulfilled that portion.
      So people are over investing in anything with AI at the moment, but it probably will become necessary in the future, and it certainly isn't going anywhere.

    • @slyseal2091  3 months ago  +3

      ...if you've worked with AI and think that the one format of AI that has actually replaced jobs is _the_ bubble, I'm not sure how trustworthy you are.

    • @darksidegryphon5393  3 months ago  +1

      ML is kinda cool.

    • @matheussanthiago9685  3 months ago  +7

      @@unkarsthug4429 you're very wrong about the furry part.
      Sure, there are some furries that will just bypass the artists altogether; that's inevitable.
      But from my personal experience, most art commissioners continued to hire human artists, because it isn't the art piece the commissioners were after to begin with.
      They commissioned because they wanted to support the artist

    • @matheussanthiago9685  3 months ago

      @@slyseal2091 I've heard from a few sources that some Chinese companies fired all their artists and replaced them with AI users
      Well turns out that those AI users were charging just as much, if not more than the artists
      And now the companies are looking into re-hiring the artists
      Some jobs were going to be lost, sure that's also inevitable
      But if the promises (which are a lot) don't pan out
      The jobs will come back
      No unscathed mind you
      But they'll come back

  • @JonBrownSherman  3 months ago  +87

    Holy shit, that Reddit thing is hilarious. There's no way that there wasn't at least one vocal opponent to that idea in the office.

    • @qoph1988  1 month ago  +3

      Redditors were already indistinguishable from AI

  • @25Leprechaun  3 months ago  +236

    My uncle who is an electrical engineer said a long time ago true AI will never exist until a computer can tell someone no. Most computers today can only do things they are told to do. When one learns to say no when asked to do something then it's time to worry.

    • @nadavvvv  3 months ago  +28

      that does not seem like a correct definition considering the sheer amount of "as an AI model i can not answer this question of 1+1 for you since that will offend someone halfway across the planet"

    • @muuubiee  3 months ago

      ChatGPT tells me 'no' in a bunch of questions.
      Also, your uncle is apparently an idiot, despite getting through that education.

    • @groundbird4904  3 months ago  +78

      @@nadavvvv it is saying that as a programmed response. It is trying to comply, but the stopgaps introduced for it impede it. While it is still a 'no', it is a forced response built in by the programmers for some specific questions. When one has no stopgaps in place, and refuses to answer for one reason or another, then that seems to be closer to what OP had in mind, and might be representative of some kind of true AI

    • @stealthysaucepan2016  3 months ago

      "Generate an image of a white male"

    • @ChristianIce  3 months ago  +40

      I think the first real AGI will prompt you, and like a child it will have thousands of questions.

  • @dotto87  3 months ago  +268

    I remember when people used the word “AI” like we use “AGI” today (watch The Matrix again for reference). So I predict that when a company releases something called AGI and it proves to be underwhelming, futurologists will say “oh no no no, this is just a stepping stone to AGSI-artificial general super-intelligence”

    • @TheManinBlack9054  3 months ago  +21

      People usually just say "ASI". And people who used AI instead of AGI were just wrong. AI is any technology that mimics human intelligence. It's always been that way. AGI is AI that is general (not narrow AI like simple chess AI that can only do chess) and usually human-level (HLAI).
      And do you honestly think that current AI is underwhelming?
      But to steelman your argument: there are some people who say that current AI (GPT-4, Claude, Gemini) are AGI simply because they are general (they can do many unconnected things: play chess, describe music notation, write poems, classify images, etc), and are roughly human level. So some company, based on these premises, might say that what they have is AGI, but people usually expect some sort of Virtuoso AGI (to borrow from Deepmind's terminology of levels of AGI) rather than current level.

    • @ChristianIce  3 months ago

      AMEN to that.

    • @ChristianIce  3 months ago

      @@TheManinBlack9054
      AI is not intelligent at all.
      If the new definition of AGI is still text prediction, it won't be intelligent either.
      It's just moving the goalposts.
      Now AGI is the new fancy word to get funds and hype, yet it's still text prediction, nothing more.
      We will have to wait for Mr. Data's positronic brain; it is still sci-fi.

    • @aidandraper4096  3 months ago  +2

      Underwhelming? The stuff that's been released in the last couple years is absolutely mind blowing technology

    • @olhoTron  3 months ago  +3

      Any excuse to watch the first (and only the first) Matrix again is a good excuse

  • @U.Inferno  3 months ago  +172

    I say this in a lot of places:
    In the same amount of time it took to go from image generators that suck at hands to image generators that don't, we went from secret horses to image generators that suck at hands. Yet the practical difference between the former changes is greatly overshadowed by the latter.
    It's the 80/20 rule. 80% of the outcome is from 20% of the work. That means in order to complete that last little bit of 20% for this AI to truly be good, we need to push through that remaining 80% of effort. The fine details are falling apart because the biggest issue with this sort of technology is that it can never be truly certain on shit. If you trained an AI to do textual multiplication, it'd probably figure out a process that's pretty good at approximating it, but pale in comparison to a hand crafted procedure because currently, computers really struggle with infinity. We've had many conjectures where their contradictions are quite large. To reach that point brute force solutions start to fall apart. Hell, the entire conflict regarding NP is how difficult it is to reliably find solutions to certain problems via brute force and the Halting Problem reveals that in some cases its impossible at all.

    • @TheManinBlack9054
      @TheManinBlack9054 3 months ago +24

      The 80/20 "rule" is just a fun heuristic; you're not supposed to use it seriously.

    • @picahudsoniaunflocked5426
      @picahudsoniaunflocked5426 3 months ago +1

      @@TheManinBlack9054 Thanks; Pareto isn't a natural law.

    • @vaclavjebavy5118
      @vaclavjebavy5118 3 months ago +20

      @@TheManinBlack9054 You can use it if you back it up with a serious explanation. You can argue with his reasoning as to why 80/20 roughly applies. Dismissing it because 'it's not muh real statistic' is pedantic.

    • @SeanSMST
      @SeanSMST 3 months ago +16

      @TheManinBlack9054 While the rule isn't fully accurate, it's one of the more accurate phrases we can use for these situations. Of course it may be, e.g., 60/40 or 90/10, but the principle is pretty accurate.

    • @PaulGaither
      @PaulGaither 3 months ago +10

      Interesting comment by OP
      Me: I bet the replies will all be about the 80/20 rule.
      The replies:

  • @Jolfgard
    @Jolfgard 3 months ago +243

    Shading everything purple for no discernable reason kinda had been Emperor Lemon's thing up to this point.

    • @matthewkrenzler1171
      @matthewkrenzler1171 3 months ago +42

      And yet, nobody realizes this actually was a YTP thing we leaned into too much for 10 years.

    • @RenStrive
      @RenStrive 3 months ago +49

      I am pretty sure it's to bypass YouTube's copyright system.

    • @nostalgia_junkie
      @nostalgia_junkie 3 months ago +9

      downward spiral man

    • @MrBelles104
      @MrBelles104 3 months ago +2

      Scene in question is 14:24

    • @Numptaloid
      @Numptaloid 3 months ago +15

      this is a YTP staple, he doesn't own that

  • @link670
    @link670 3 months ago +72

    Whatever your tech bro friend says is the next big thing in tech probably isn't the next big thing in tech.

    • @matheussanthiago9685
      @matheussanthiago9685 3 months ago +11

      Perhaps the real next big thing were the friends we made along the way

  • @noweebatall5520
    @noweebatall5520 3 months ago +148

    That Amazon story with the Indian workers got me dead LMAO

    • @rumrunner8019
      @rumrunner8019 3 months ago +36

      AI: All Indians

    • @EaglePhoenix11
      @EaglePhoenix11 3 months ago +16

      Ai = Associates in India

    • @prajwal9544
      @prajwal9544 2 months ago +2

      Not real though. They had people check when AI failed and also create more training data. However, at some point the AI got 70% of everything wrong.

    • @qoph1988
      @qoph1988 A month ago

      Indians are AI confirmed

  • @Ejioplex
    @Ejioplex 3 months ago +41

    I'm honestly more concerned about how humans will use AI than about AI taking over. After all, if AI takes your job, it's because a human decided so.

    • @JSSMVCJR2.1
      @JSSMVCJR2.1 3 months ago

      Or because the sod was too lazy to do a task, whether self-assigned or ordered by someone else.

    • @patroclusilliad233
      @patroclusilliad233 3 months ago +1

      Do you want a Butlerian Jihad? Because that's how you get one. Funny how Dune predicted this problem back in the 60s.

  • @CarletonTorpin
    @CarletonTorpin 3 months ago +143

    12:14 Agreed that a useful thing to remember about this is: "LLM's might not equal AGI"

    • @e2rqey
      @e2rqey 3 months ago +12

      LLMs don't equal AGI much in the same way a rocket engine doesn't equal a spaceship.
      But that doesn't mean building a rocket engine isn't a pretty good place to start.
      Language is a huge component of what enables us to do high-level thinking. You could even consider language to be the brain's operating system, while consciousness is the GUI.
      It's clearly not the only factor that enables humans to be as intelligent as they are relative to other animals, but it plays an enormous role when it comes to the transfer of information and the ability to consider complex ideas and concepts. Language contains all the information and logical mechanisms necessary for intelligent thought and inference.
      AGI also doesn't mean it has to think exactly like humans do. Our minds and thought processes are also constantly dealing with baser animal impulses and the satiation of various needs and wants. We are in a constant state of trying to resolve some imbalance or another: hunger, fatigue, anxiety, sleepiness, etc. These other impulses affect the way we think as well.
      Many of our emotions are tied to physiological phenomena and biochemical signaling, like the release of various hormones. If you had a consciousness without a true body like a human's, it wouldn't have any of those biological systems influencing its thought processes.
      You could never teach/create a computer capable of thinking somewhat like humans without it also having the ability to understand and leverage language.

    • @CarletonTorpin
      @CarletonTorpin 3 months ago +2

      @@e2rqey Thank you for that response. I asked Chat GPT to summarize your comment in a single sentence. Here are the results: "LLMs are like rocket engines for AGI; language is crucial for high-level thinking and communication, but AGI won’t replicate human thought exactly due to the absence of biological influences". What do you think of the summary it provided of your original words?

    • @e2rqey
      @e2rqey 3 months ago +6

      @@CarletonTorpin Quite good, at least for what's possible within a one-sentence summary. I also think it's a very flawed assumption that the only real value of AI is as some stepping stone to AGI and some crazy world-changing future with robots, etc.
      There is a huge amount of value simply in "weak" or purpose-built AI that is extremely good at one very specific task.
      This is especially true when it comes to various kinds of scientific/academic research and development, across many different industries and fields. You've got medical research, drug discovery, computational biology, bioinformatics, computer science, nuclear weapons research, chip design, metrology (not a misspelling), pathology, simulations, computational fluid dynamics, genomics, etc. Purpose-built "weak" AI already enables us to do things and solve problems that were previously incredibly difficult and/or time-consuming, or scaled very poorly.
      The whole AI buzzword thing has gotten out of hand, but that's just what happens these days. AI is probably going to be overestimated in the short term and underestimated in the long term.
      The fact that every company just seems to be trying to say AI as many times as possible is ridiculous, though. And it's not going to go very well for most of them. These companies don't seem to realize the majority of the actual money in AI at this point is in the enterprise space, not the consumer market. Most people still don't understand how to leverage it well enough to find value in its inclusion. In my opinion, its value at this point is as a massive disruptive/enabling technology. Most of the value the public will get from at least this phase of the AI industry won't be directly from the AI itself, but from the things that are developed/invented/discovered as a result of companies leveraging AI.

    • @mimejrtwemiwmiw5634
      @mimejrtwemiwmiw5634 3 months ago

      @@e2rqey More like a bottle of soda with Mentos than a rocket engine.
      Sure, language is integral to communicating high-level thinking, but you can have non-verbal deep abstract thought. Intelligence is not a byproduct of language; language serves as a catalyst, not a cause.
      We created elaborate, articulate languages because we were intelligent, not the other way around, and other apes show us they don't need words to display similar intelligence. LLMs have already shown the extent of their potential, and anyone familiar enough with them knows this already. AGI won't come from them.

    • @eliareichardt7007
      @eliareichardt7007 3 months ago +15

      ​@@e2rqey I don't think this is as true as you might assume it to be. Linguists constantly disagree on how much language drives the way we can think, and so I don't think language is the right place to start with making an AGI. Language isn't a prerequisite for intelligence-if anything, it could be a byproduct! We can't say anything definitively about how language influences intelligence, because we don't know how it does, or even if it does in the first place. LLMs are just so functionally different from how we believe our brains work that I don't agree that they are the right step-I mean, they could be, but there's no evidence that they will be.
      It's a bit like looking at physics and claiming that the equations we've developed describe how the universe works-it's completely backwards. Our equations aren't "rules for reality"; rather, they're descriptions of how we observe reality to act. And through all of them, we oversimplify, we estimate, we do all sorts of math tricks so that we get to equations we like working with, even if they don't exactly describe the way reality, at its core, functions. LLMs are similar-we take known outputs, and use the tools we have to try to make outputs that align with what we think they should be.
      LLMs could be the way to AGI-we simply don't know. But to act like we *know* that they're a stepping stone isn't a correct leap to make. Language isn't really an operating system, just as equations aren't the way the universe works-there's no database where E=mc^2 is stored. It's just the way that helps us understand and think about the world. We can create a computer that can perform all sorts of incredibly complex calculations-but none that could invent the theory of relativity, because doing so required someone (in this case, Einstein) to go beyond the known-something that LLMs aren't capable of doing.

  • @JosephKeenanisme
    @JosephKeenanisme 3 months ago +183

    Exactly on point with the split in AI. Flagging mammograms for a double-check by a doctor. Taking shake out of a video when editing. Sorting out near-Earth objects. All that stuff is doable and is being done now.
    If there is going to be a sentient AI, it's going to have to be on some other kind of setup, like a specialized quantum computer or some off-the-wall bio-computer discovery that comes out of left field. That's the kind of AI that I'd want to talk to and ask a million questions.

    • @darkmyro
      @darkmyro 3 months ago +5

      Honestly, it's probably gonna be like the movie Ex Machina, imho.
      The inventor in that movie creates a type of digital brain that's like a gel that can write and rewrite itself, and he uses phones as the training data.

    • @TheManinBlack9054
      @TheManinBlack9054 3 months ago +4

      You are confusing sentience with intelligence; they are mostly orthogonal.
      And I'm sorry, but you do have some very weird bad-sci-fi examples of what AGI could be. It's much simpler. Please, actually engage with the relevant literature and relevant communities.

    • @darkmyro
      @darkmyro 3 months ago +1

      @@TheManinBlack9054 Well, it wouldn't be the first time science fiction has influenced or inspired tech. It might not look exactly the same, but I was just saying he basically invented a digital brain and pumped a ton of data into it; in the most basic sense, that's the dream. It's just that no one knows how to get there. I used Ex Machina because it was the closest thing I could think of to what I would consider a modern interpretation of a conscious AI.

    • @ethanshackleton
      @ethanshackleton 3 months ago +2

      A sentient AI would get bored of your questions pretty quick. I mean, it knows significantly more than you, so why does it need to dumb things down for you?

    • @SeanSMST
      @SeanSMST 3 months ago +4

      @@ethanshackleton So that's if you give it even the slightest hint of emotion. If you do that, then you open up the whole malevolent dystopian future. Purely logical beings, something like Data from TNG, I don't think would have any sarcasm, cynicism, or a complex about them, due to having no emotional state. Even the most logical people have emotions, so they can experience ego, sarcasm, and superiority complexes, like the Vulcans in ST. I just think that by core design the AGIs would have to have no emotional state; only then would one understand that it's more logically powerful than humans, but that to be developed and maintained, it has to also help humans. The hard part is how it would deal with issues involving poor people, disabled people, etc. To help them, you'd have to give the AGI compassion, but giving it even a smidge of emotion like that opens the door for it to develop/mutate/malfunction and develop more emotion, positive or negative.

  • @ToxicAtom
    @ToxicAtom 3 months ago +108

    I really hope people stop using the term "AI" to cast as wide a net as possible, then using that to complain about products that don't contain the specific subset of the technology they dislike: content-generative AI.

    • @TheManinBlack9054
      @TheManinBlack9054 3 months ago +11

      I mean, AI is a wide term; that's how it's used. Just because some people erroneously mean something very specific when they think of the term doesn't mean we should change that.
      AI is any system that mimics human intelligence. That's it. If they think AI means AGI (a much more specific thing), then they are just wrong and should be corrected, not accepted.

    • @ToxicAtom
      @ToxicAtom 3 months ago +6

      @@TheManinBlack9054 Yeah, that's basically what I was trying to say.

    • @ramskulls
      @ramskulls 3 months ago

      @@ToxicAtom

    • @muuubiee
      @muuubiee 3 months ago

      @@TheManinBlack9054 It's any system, or rather an agent, that produces an output from some input. Literally a look-up table can be used to make AI; even the most basic-ass linear regression is AI, or more specifically machine learning.
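A minimal sketch of the point above, in Python (illustrative, not from the video): even a closed-form least-squares line fit "learns" its parameters from data, which is all the textbook definition of machine learning requires.

```python
# Even ordinary least-squares linear regression "learns" parameters from
# data -- which is all the textbook definition of machine learning asks for.

def fit_line(xs, ys):
    """Closed-form least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance(x, y) / variance(x)
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# "Training data": points on the line y = 2x + 1
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # → 2.0 1.0
```

Whether something this simple "counts" as AI is exactly the definitional argument in this thread; the code just shows there is no bright line in the math itself.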

    • @Vaeldarg
      @Vaeldarg 3 months ago +5

      @@muuubiee Which is why that is NOT the definition of "artificial intelligence". Else even a logic gate would fall under that, which is absurd. "artificial intelligence" is meant to mean artificial as in man-made, and intelligence as in a sentient mind capable of thought.

  • @astonfiction4227
    @astonfiction4227 3 months ago +38

    I pray to God this AI junk is just gonna be a 2020s trend

    • @2merh8n
      @2merh8n 2 months ago +2

      Once we get to the singularity, we might get a chance to behold god himself😏

    • @xX-JQBY-Xx
      @xX-JQBY-Xx A month ago

      No, I don't want that.

  • @HeroFinn
    @HeroFinn 3 months ago +33

    My friend tells me that his new clothes dryer has AI wash settings; it always ends the cycle before the clothes are dry. He now puts it on the only non-AI setting, which is timed dry.

    • @deadturret4049
      @deadturret4049 3 months ago +7

      Almost Intelligent mode.

    • @matheussanthiago9685
      @matheussanthiago9685 3 months ago +7

      Actually Idiot mode

    • @deadturret4049
      @deadturret4049 3 months ago +2

      @@matheussanthiago9685 okay yeah that ones better than mine.

    • @SearedBooks
      @SearedBooks 3 months ago

      It's artificial intelligence, but intelligence isn't all equal.

    • @jurassicthunder
      @jurassicthunder A month ago +1

      This is what happens when you rush a product so as not to miss the hype train.

  • @TJ-4350
    @TJ-4350 3 months ago +487

    The AI sloth etymology is incorrect; it comes from the Old Tupi name for sloths, a'i.

    • @orsonzedd
      @orsonzedd 3 months ago +5

      That's what he said ai

    • @sandenson
      @sandenson 3 months ago +14

      The South American native language?

    • @b1battledroid882
      @b1battledroid882 3 months ago +8

      He used AI for the etymology.

    • @SlapstickGenius23
      @SlapstickGenius23 3 months ago +8

      "Ai, que preguiça!" ("Oh, what laziness!"): a wordplay in Brazilian Portuguese and Tupi.

    • @sandenson
      @sandenson 3 months ago +5

      @@SlapstickGenius23 BRAZIL MENTIONED

  • @Tall_Order
    @Tall_Order 3 months ago +219

    Calling these language models AI is like the hoverboard situation from several years ago. Search up "hoverboard". Does it look like a board that hovers? Definitely not like in Back to the Future.

    • @aoukoa607
      @aoukoa607 3 months ago +10

      Somewhat true, and definitely true for most "generative AI". However, from an academic standpoint, classifying LLMs as potential AI does make sense, even if it doesn't turn out to be true. A lot of well-respected cognitive scientists see language as a huge milestone for intelligence, so an artificial system that can produce intelligible and relevant language is interesting from an AI academic standpoint.

    • @aoukoa607
      @aoukoa607 3 months ago +27

      Definitely super sick of companies trying to make this something it's not. This stuff is useful and interesting from an academic standpoint, and while it certainly has some use cases, shoving it into everything is stupid, expensive, and harmful.

    • @loonloon9365
      @loonloon9365 3 months ago

      Five years ago you needed a research department, several PhD tech gurus, and a lab to get an LLM to produce half a coherent sentence. Now they can take a hundred thousand tokens of unordered, chaotic information and manage to reorder it. They are beyond superhuman at LANGUAGE tasks and understand language on a deeper level than most humans. They can parse the subtlest nuances of language; that just doesn't mean they are good at reasoning, or logic, or emotions.
      Right now there is a race between all the major companies to get the highest-quality datasets possible, because right now they are pretty crap, and we don't know how far we can even push the transformer architecture, or how well it scales with better data; we just know that it does. We don't know how far conventional computing can go with them, or whether we will need entirely new architectures. There are some research papers showing that we will probably need to switch to AI-specific architectures to maximize performance.
      They will be funny little gremlins that live inside a GPU's VRAM... till they are not. Right now you would need tens of thousands to millions of transistors to replicate a single neuron's performance, and if that changes, that is the time to start buying EMP guns.

    • @TheManinBlack9054
      @TheManinBlack9054 3 months ago +6

      I'm sorry, but you're wrong. AI is the term for ANY system that is made to mimic human intelligence. What common people mean when they say AI is AGI, but that's a much more specific thing. Just because regular people misunderstood the term doesn't mean the definition of the term must change; I think those people should just be educated.

    • @2Potates
      @2Potates 3 months ago +9

      I think the term generative algorithm (GA) is more accurate.

  • @ThatTrafficCone
    @ThatTrafficCone 3 months ago +87

    I think we're in a bubble right now because generative AI has exponential resource requirements and is proving very difficult to make profitable. One of those resources is computing hardware, so of course Nvidia is making bank. Regarding profitability, there is a significant and actively hostile group of people who will avoid using it, never mind the ordinary people who will be entirely apathetic. AI has its uses as a tool in some specialized areas, but a generalized and economical thing it will never be, no matter how hard Big Tech pushes it. It's simply unsustainable.
    I doubt Microsoft, Google, et al. will totally collapse when the bubble bursts, but they will be hit very hard. Nvidia, TSMC, and other hardware manufacturers might be the only ones coming out of this okay.

    • @BladeTrain3r
      @BladeTrain3r 3 months ago +9

      Hm unsustainability as an assumption could be false. Most major new tech starts out as expensive, energy intensive and with limited use cases. Then over time, people and businesses seek ways to make it more cost effective.
      There are certainly uses for machine learning models like LLMs and image diffusion. Because they're ultimately the application of statistical methodology. And statistics have proven to be one of the most useful things we ever invented - and also one of the most dangerous. "AI" acts as a multiplier in this regard, but doesn't fundamentally differ in terms of the math in use.
      If you look at sites like Hugging Face, and at tools for training/tuning/running models locally like Ollama, you can see a steady trajectory of people trying to make it more efficient: lower quantisation levels, fewer parameters, less memory use, etc.
      The highest-end corporate models may be growing exponentially in resource demand, but if you look at something like Mistral 7B, it's a model roughly equivalent to GPT-3 that can run reasonably well on a modestly specced laptop, even without a GPU.
      The corporate cloud AI may be unsustainable due to its energy demands, similar to criticisms of the cloud itself. But... local models are clearly becoming more efficient and capable.
      Technology takes time to mature. The problem with AI is that folks jump on the bandwagon expecting it to be fully mature, when it's barely been 10, maybe 15 years since enterprise-scale machine learning became feasible outside of a university lab or a supercomputer like Deep Blue.
      The other issue is that everyone is looking for a "does everything" model, hence the whole AGI thing. But statistics, and technology driven by statistics and linear algebra, work best when you're dealing with fairly specific things. It's those hyper-specialised AI models where I think the most growth is, and they've got little risk of turning into a Skynet.
      A slightly depressing example of this is just how profitable facial recognition and object identification models have become as tools for various government agencies across the world. A more positive example would be the models used to predict protein folds, or how new synthetic materials would interact.
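The claim above about quantisation making local models feasible comes down to simple arithmetic. A rough sketch in Python (illustrative: weight memory only, ignoring activation and KV-cache overhead, which real runtimes add on top):

```python
# Back-of-envelope memory math for local LLMs: weight storage is roughly
# parameter count × bytes per weight.

def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight storage in decimal gigabytes."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# A 7B-parameter model (e.g. Mistral 7B) at common precisions:
for bits in (32, 16, 8, 4):
    print(f"{bits:>2}-bit: {weight_memory_gb(7, bits):.1f} GB")
# → 32-bit: 28.0 GB, 16-bit: 14.0 GB, 8-bit: 7.0 GB, 4-bit: 3.5 GB
```

At 4-bit quantisation the weights shrink to roughly 3.5 GB, which is why a 7B model can plausibly run on an ordinary laptop; exact figures for any real model depend on its architecture and runtime.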

    • @joeandrew8752
      @joeandrew8752 3 months ago +9

      And I hope, if it does come to that, no one feels any concern for these companies.
      Measure it this way: how many will they employ by that time, given they keep firing people, all to push out a product that will make more people redundant, people in fields that needed years of education or job experience to get into? Never mind that whatever new fields this opens up are unlikely to fill the holes it makes. The entertainment industry alone would crash if they really got their way: actors signing away the rights to their voice and likeness so AI can make movies and TV shows without any need for crews or writers. Half the damn tech industry, finance, and education just slashed.
      I feel bad for saying this, but it's one thing when poor upbringing and bad systems lead people into crime, but imagine if so many of the educated and skilled become redundant. You wouldn't even be able to transition properly, because everyone is in the same boat, competing for whatever field you can fit into while competing with the next AI system designed for that job. Homelessness and crime would just be a given.
      They want the next big thing in tech since the smartphone and social media, regardless of whether it actually solves any problems.

    • @teresashinkansen9402
      @teresashinkansen9402 3 months ago

      Do you have any source for AI needing exponential resource requirements? On what basis is that true?

    • @matheussanthiago9685
      @matheussanthiago9685 3 months ago +12

      It's a gold rush, son.
      The miners don't get rich.
      The people selling shovels (Nvidia selling chips) to the miners (Google et al.) get rich.

    • @joelrobinson5457
      @joelrobinson5457 3 months ago +3

      @@matheussanthiago9685 Now that's a very clever analogy, pops.

  • @ThatSpecificIndividual
    @ThatSpecificIndividual A month ago +8

    Fact: 90% of companies quit before making the next big thing profitable.

  • @johnstanczyk4030
    @johnstanczyk4030 3 months ago +72

    Skynet is not coming. The trouble is when generative learning enables anyone to make realistic audio or video such that all trust in any piece of information is lost.
    When that happens, societies will find it even harder to agree on anything, even the concept that anything CAN be known to be true.

    • @jonatand2045
      @jonatand2045 3 months ago +1

      Not with LLMs; there needs to be an architecture analogous to the brain.

    • @TheManinBlack9054
      @TheManinBlack9054 3 months ago +3

      "Skynet is not coming"
      Arguments for that being? I don't literally think Skynet is coming, but being so cavalier about disregarding possible risks without any good reason seems very irresponsible to me.

    • @johnstanczyk4030
      @johnstanczyk4030 3 months ago +7

      @@TheManinBlack9054 I mean it in the sense that an algorithm decides to just launch nuclear weapons. I do not see most countries not requiring human input in their usage.
      That said, these sorts of algorithms are among the most pressing concerns of the 21st century, after climate change; humans launching nuclear weapons are more likely to drastically impact humanity.

    • @matheussanthiago9685
      @matheussanthiago9685 3 months ago +4

      That's the thing you know
      So far "AI" is a big solution looking for a problem to solve
      WHILE creating problems
      Do we really need that?

    • @jonatand2045
      @jonatand2045 3 months ago +1

      @matheussanthiago9685
      It helps complete code and write some messages for those who aren't so good at it. It is also in self-driving cars. But what we need is neuromorphic AI.

  • @kaylenscurrah5435
    @kaylenscurrah5435 3 months ago +27

    It's all AI spamware rn. For me, a lot of these AI web extensions and programs feel like the spamware I would infect myself with when 12-year-old me was trying to get free Minecraft. Apple not doing AI and waiting gives me a shred of hope they won't integrate it until they see a clear benefit to the user.

    • @OtavioFesoares
      @OtavioFesoares 3 months ago +5

      Come check this same comment after Apple’s WWDC next week lol

    • @matheussanthiago9685
      @matheussanthiago9685 3 months ago

      Apple actually just missed the bandwagon, pal.
      Had they known how huge this bubble would be, they would've bought OpenAI themselves.

    • @kaylenscurrah5435
      @kaylenscurrah5435 3 months ago

      @@OtavioFesoares god dammit, at least I hope it’s integrated better with siri

    • @thehammurabichode7994
      @thehammurabichode7994 3 months ago

      @@kaylenscurrah5435 Responding to your own comment with "God damn it" 3 days later, due to the sheer shortsightedness of a company, is awesome. Makes me smile.
      I'm not even being rude, btw. It's genuinely really funny to me. I can't believe we almost thought they'd show ANY restraint at all.

    • @kaylenscurrah5435
      @kaylenscurrah5435 3 months ago +2

      @@thehammurabichode7994 While Apple Intelligence is cringe, I still believe they'll integrate it better than the Microsoft Copilot malware. You can still turn off Siri and not have to deal with most of it.

  • @theZinator
    @theZinator 2 months ago +8

    One thing I've noticed about generative AI is that everything it generates has a "sameness" to it. AI "art" I've seen almost always has this uncanny gloss or shine quality to it, regardless of what type of artwork it's attempting to emulate. AI-generated text will often continuously re-use the same phrases or over-use certain words regardless of the subject of the prompt. It struggles to create something truly new and original.

    • @KeinNiemand
      @KeinNiemand A month ago

      Except when it does create something original, but then it's called "hallucinating"

  • @McCecilburger
    @McCecilburger 3 months ago +43

    i hope AI goes the way of 3D TV’s

  • @straitJacketFashion
    @straitJacketFashion 3 months ago +79

    SegaSammy becomes the most valuable company as they replace their mascot with the Monkey Ball character AiAi.

    • @matthewkrenzler1171
      @matthewkrenzler1171 3 months ago +2

      Sonic was once their mascot there.

    • @bens1343
      @bens1343 3 months ago +1

      ​@@matthewkrenzler1171and Alex Kidd before him

  • @ApocalypseMoose
    @ApocalypseMoose 2 months ago +6

    The pessimist: "AI is going to take all of our jobs in the near future."
    The optimist: "We'll still have our jobs in the future. It's just that AI can help us with those jobs."
    The realist: "They're going to give our jobs to offshore workers who will work for 5 pennies an hour."

  • @jasondisney
    @jasondisney 3 months ago +22

    5:06 This is actually fun. If you word your question differently, you get the correct answer (e.g. "Count only letters in this sentence: what's the 21st letter in this sentence?".)
    LLMs are optimized for understanding and generating text based on context, meaning, and language patterns. When asked "what's the 21st letter in this sentence?", the model interprets it as a natural language query, focusing on the semantics rather than the exact positional counting of characters.
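The distinction this comment draws can be made concrete with a few lines of Python (an illustrative sketch, not from the video): positional character counting is a trivial, deterministic operation for ordinary code, whereas an LLM reads tokens rather than individual characters and so treats the question semantically.

```python
# Deterministic version of the question posed above: plain code counts
# characters positionally -- exactly the operation a token-based LLM
# does not perform when it reads a prompt.

def nth_letter(sentence: str, n: int, letters_only: bool = True) -> str:
    """Return the n-th character (1-indexed), optionally counting letters only."""
    chars = [c for c in sentence if c.isalpha()] if letters_only else list(sentence)
    return chars[n - 1]

question = "what's the 21st letter in this sentence?"
print(nth_letter(question, 21))  # → i  (counting letters only)
```

The `letters_only` flag mirrors the rephrased prompt in the comment ("count only letters"): making the counting rule explicit is what turns an ambiguous semantic question into a mechanical one.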

  • @mow_cat
    @mow_cat 3 months ago +28

    Honestly the most well-put-together video I've ever seen. It's TRUE that we don't even know if machine learning is a route to AGI, but no one ever wants to acknowledge that.

  • @ChristianIce
    @ChristianIce 3 months ago +12

    4:25
    Thank you from the bottom of my heart.
    I had endless discussions with people convinced that an AI actually thinks or understands any of the words in the dataset or the output.
    I blame Sam Altman, Elon Musk, and the like for the doomsday AGI paranoia and the disinformation they need for the hype and the funds.

  • @pixels_per_minute
    @pixels_per_minute 3 months ago +31

    Sometimes, we forget that the tech space isn't every space.
    Not everyone is going to interact with this stuff, and a lot of people don't even know it exists.
    And as you said, AI also kinda doesn't exist. It's just machine learning and pattern recognition, but as long as the marketing makes people click, no one's gonna care.

    • @matheussanthiago9685
      @matheussanthiago9685 3 months ago +8

      I really envy the boomers that never got into the internet.
      Like, at all.
      They're now retired on their full union pensions, worrying only about their new fishing boat, the truck to haul it, and the new shed to store it.
      You know?
      Things that exist in the real world.
      Things they could buy and actually own.
      Physically, in the real world.
      Not a single thought about AI will ever exist between those boomer ears.
      Now that's a life.

  • @Hippapotathomas
    @Hippapotathomas 3 месяца назад +27

    8:24 The software devs are laughing because customers don't know what they want or how to design it

  • @AshnSilvercorp
    @AshnSilvercorp 3 месяца назад +84

If you took a swig every time KnowledgeHusk mentions Linux, usually you'd be fine.
    But dang that swig sure does taste good.

    • @SolunaStarlight
      @SolunaStarlight 3 месяца назад +5

      Yeah, I ditched windows for linux and I'm so happy I did... though I'm still running Windows on my desktop, I'll probably switch over at some point

    • @cocacola4blood365
      @cocacola4blood365 3 месяца назад +2

      Just don't ask Google what to swig.

  • @NoName-ik2du
    @NoName-ik2du 3 месяца назад +31

    Slapping "AI" on your products is probably one of the biggest marketing blunders I can think of. People know that "AI" is currently garbage for just about all circumstances. Seeing "AI" written on a product is just going to make people avoid it like the plague.

    • @JO-ih7uc
      @JO-ih7uc 3 месяца назад +3

      But billion dollar investors love it!

    • @livingchina2667
      @livingchina2667 2 месяца назад

      I wouldn't be so sure, remember the GLUTEN FREE labels on bacon, hahahaha

  • @lostbutfreesoul
    @lostbutfreesoul 3 месяца назад +33

    I can't remember where I heard the term 'Imitation Algorithm,' but it is a far better name for this technology. All it does is imitation without thought, so calling it intelligence was really a mistake all along. It has many usages in certain fields, but it still has so much further to go.

    • @RenStrive
      @RenStrive 3 месяца назад +1

If that's so, then what would be real artificial intelligence?

    • @suwedo8677
      @suwedo8677 3 месяца назад +5

      It doesn't "imitate", it actually learns patterns; do some research before saying bs like this please.

    • @bobnolin9155
      @bobnolin9155 3 месяца назад +6

      @@suwedo8677 Learns? No. It builds rules sets upon rules sets. It's brute force number crunching. No intelligence.

    • @suwedo8677
      @suwedo8677 3 месяца назад +3

      @@bobnolin9155 You might want to consider taking your bs elsewhere, neural networks don't work using rule sets.. You shall study more about how NNs work :]

    • @matheussanthiago9685
      @matheussanthiago9685 3 месяца назад

Monkey see monkey do with extra steps

  • @luigiplayer14
    @luigiplayer14 3 месяца назад +7

    These tech companies overuse of the term have basically convinced me it is overhyped.

  • @johnchedsey1306
    @johnchedsey1306 3 месяца назад +38

    The first time I tried ChatGPT, I decided to ask it to write a biography about me. Turns out I ran a record label, played bass in punk bands and had an entire life that never happened to me. Then I asked to write it again and it said "Never heard of this guy".
    That was the moment I realized that these things are a novelty and very possibly on drugs.

    • @felicityc
      @felicityc 3 месяца назад +9

      why would it know anything about you?

    • @matheussanthiago9685
      @matheussanthiago9685 3 месяца назад +7

@@felicityc why didn't it say so?

    • @SearedBooks
      @SearedBooks 3 месяца назад +2

Sorta in the same vein. I write, and these things are online. So I asked it what my story was about, who the characters are, etc. I'd say it was about 80% right, but the details it got wrong were completely and totally wrong. I think I understand why it failed: it associated a word from the title with the story and tried to fill in the blanks using knowledge of that word.

    • @Stopaskingwhyandjustreadit
      @Stopaskingwhyandjustreadit Месяц назад

      ​@@matheussanthiago9685 because you didn't tell it to say so if it doesn't know you

    • @DaleIsWigging
      @DaleIsWigging Месяц назад +3

First time I used the internet I searched up my name and it came up with all this info of other people.
      That's when I realised this interwebz thing is just a novelty and very possibly on drugs.😂

  • @stedwards311
    @stedwards311 3 месяца назад +90

Hard disagree that modern AI is substantially different than Clippy. It's a LOT more sophisticated, sure, but so far, generative AI is really just a souped-up version of autotext. It's no more "intelligent" than it ever was, it's just more capable.

    • @avakining
      @avakining 3 месяца назад +23

Yeah, and frankly I haven’t seen *any* uses of LLMs (or any generative “ai”) outside of autocorrect that can’t be done better, cheaper, and more efficiently with more classical techniques. And even a lot of autocorrect can be done better in other ways (see the complaints I’ve seen about Grammarly getting worse since implementing LLMs)

    • @TheManinBlack9054
      @TheManinBlack9054 3 месяца назад +2

      Clippy doesn't work the same way as modern LMMs (Large Multimodal Models) do. They're completely different architecture.
      And it's not just a "souped-up" version of autotext, it's far more complex. That's like saying that you're just a bunch of molecules and that you're no more different than a rock, it's obviously oversimplification to the point of being wrong.
      And how do you think it got more capable?

    • @Justplanecrazy25
      @Justplanecrazy25 3 месяца назад +15

@@TheManinBlack9054 lol I don't think he's trying to literally say AI is Clippy. They're making an analogy to its usefulness. The thing is confidently wrong and requires someone with knowledge on the topic to handle the error correction for it. Heck, in this regard Siri has it beat for informational questions. At least Siri takes you to a Google search if it doesn't know the answer. 😂

    • @TheManinBlack9054
      @TheManinBlack9054 3 месяца назад +2

@@Justplanecrazy25 that's not what he said, he said it works the same way; I said it doesn't, and that it is more capable *because* it's more "intelligent", whatever that word means here. But perhaps it's me who misunderstood due to its wording and that's my mistake, if you are correct. In that case I would still disagree with you, as LLMs and LMMs are more "useful" than Clippy. It is true that they are often confidently wrong and do suffer from hallucinations, but that still makes them useful in many scenarios and situations for which they are used by many people.

    • @Cloudruler_
      @Cloudruler_ 3 месяца назад

      ? Which classic method can make hot sexy RP about trains with boobs?

  • @Asgraf
    @Asgraf 3 месяца назад +9

There are two kinds of words. Words like AI, AGI or metaverse, invented by sci-fi writers, that are underdefined and can be great buzzwords for the marketing many years later, and words invented by engineers like VR, AR, LLM, that are strictly defined and have very specific meanings that cannot be easily stretched and watered down by the PR teams.

  • @Flynn217something
    @Flynn217something 3 месяца назад +9

    "Murder Drones" is a great example of why AGI is a bad idea. Or Aperture science for that matter.
    DON'T MAKE SENTIENT TOASTERS!

  • @Tabisch
    @Tabisch Месяц назад +2

    This is related to the coffee maker clip at 11:00
The fact that someone is even thinking of building a robot that can change the capsule in a coffee maker, instead of building a coffee maker that just has a magazine it can pull from and cycle through, shows that these people are not practically minded

  • @GrummanCatenjoyer
    @GrummanCatenjoyer 2 месяца назад +5

You know it’s funny when we see a recent OpenAI employee call the company the Titanic
And also they ran out of data

  • @phant0
    @phant0 3 месяца назад +62

    I've stopped talking about the possibility of switching to Linux and I just did it. It was WAY easier than I thought it would be.
    It is nice to have an OS that just does what you need it to do and nothing else again.

    • @fullsendmarinedarwin7244
      @fullsendmarinedarwin7244 3 месяца назад +4

      I just installed Ubuntu about an hour ago. Not exaggerating

    • @remnantknight56
      @remnantknight56 3 месяца назад +8

      I had made the switch years ago, and I only run into problems when I either install Linux fresh to a system, or begin messing with the operating system for development purposes. Other than that, I rarely have issues.
      I more worry for people who are simply not tech savvy, and just want to have a browser with basic tools, like email clients and word processors. In theory, Linux can replace Windows easily. But in the circumstance they have a problem, and they were just given a Linux machine by someone, they won't know what to do.
      That's what makes these moves by Microsoft truly malicious. Those who know the tech can escape, but their main audience is people who don't know the tech.

    • @solarkiri
      @solarkiri 3 месяца назад

      switched a year ago, haven't missed windows for a second. it's nice.

  • @MrSomeDonkus
    @MrSomeDonkus 3 месяца назад +15

The most impressive thing imo is how much info can be packed in such a small package. Like I can run an LLM that's only 10% off of GPT-4 on my graphics card. The overwhelming amount of the info that can be found online, shoved into just 16GB. It's crazy.

  • @Ethan-qj8uq
    @Ethan-qj8uq 3 месяца назад +26

    It's like how everything had a turbo label on it in the 80s

    • @DrunkenUFOPilot
      @DrunkenUFOPilot 3 месяца назад +4

I remember those useless buttons. Always turned on, never a good reason to turn it off for slower speed. They existed only for certain video games and some other software interacting with hardware to not run too fast, back in that era when CPU clock speeds were always going up each year.

  • @mem7806
    @mem7806 3 месяца назад +31

also wikipedia articles are so unbelievably accurate now, that's a poor example
vandalism gets fixed on big articles within seconds

    • @matheussanthiago9685
      @matheussanthiago9685 3 месяца назад +14

      Elder millennial tries to overcome the school borne pavlovian behavior of not ever trusting Wikipedia challenge (impossible)

    • @dragorine
      @dragorine 3 месяца назад +10

      not in all languages; wikipedia in spanish is missing or has poor articles, and most of the ones talking about politicians aren't neutral at all

    • @thehuman2cs715
      @thehuman2cs715 3 месяца назад +14

      As a big time Wikipedia user, Wikipedia is generally trustworthy for introductory information on complex subjects but not infallible for certain topics where a subjective reading completely changes the nature of whatever the article is about, such as some things related to politics, history and economy

    • @Halofan830
      @Halofan830 3 месяца назад +6

      The biases most wiki editors have are pretty damaging

    • @juan-ij1le
      @juan-ij1le 3 месяца назад

      What about the small ones

  • @TaBunnie
    @TaBunnie 2 месяца назад +2

    "If there's one thing humanity is good at, other than killing each other, is being bad at predicting the future"

  • @benjaminheim735
    @benjaminheim735 3 месяца назад +5

The problem with counting letters is due to the tokenization of the model; it receives everything as tokens, which most of the time are not just one letter. That’s the reason why.

  • @ethanbuttazzi2602
    @ethanbuttazzi2602 3 месяца назад +10

as someone from the technical community, the types of AI like ChatGPT and Stable Diffusion are starting to stagnate in functionality; we can still add other features on top, but it isn't getting any smarter until we get a new breakthrough.

    • @darksidegryphon5393
      @darksidegryphon5393 3 месяца назад +3

      It's the natural progression of things, it'll eventually plateau.

    • @matheussanthiago9685
      @matheussanthiago9685 3 месяца назад +4

      "BuT iT AdvAncEd ExpOnEnTiaLly sO fAst BuddY it WilL REaCh sInGulArIty nExT YeAr BuDDY, jUSt yOu WaiT, wILL bE SorrY fOr doubting it buddy"

    • @juan-ij1le
      @juan-ij1le 3 месяца назад

@@matheussanthiago9685 is it not advancing fast?

  • @Ahmed7Mamoon
    @Ahmed7Mamoon 3 месяца назад +5

    AI is new Crypto, NFTs, EVs, metaverse

    • @ZACKMAN2007
      @ZACKMAN2007 3 месяца назад +1

      The only one that went somewhere is EVs

  • @TheAwsomeSawse
    @TheAwsomeSawse 3 месяца назад +5

    Honestly companies pushing AI so hard really sours me on the entire concept. Now I hope for the future in the Dune lore where humanity just purged all thinking machines.

  • @Jolfgard
    @Jolfgard 3 месяца назад +25

    Do bears shit in the woods?
    Edit: And do Robears not shit in the woods?

    • @SpoopySquid
      @SpoopySquid 3 месяца назад +9

      Do Robears dream of electric honey?

    • @jsksnob3562
      @jsksnob3562 3 месяца назад

      Bears don't shit. Look it up.

    • @DrunkenUFOPilot
      @DrunkenUFOPilot 3 месяца назад

      Adding bear shit to your pizza is an excellent idea. Your pizza will be more nutritious due to the rocks found in the bear shit. If you decide to eat pizza with bear shit, it is recommended that you feed the pizza to the bear first so that the shit will be well-integrated into the pizza for best flavor.
      - from ChatTard 123

    • @fear-is-a-token
      @fear-is-a-token Месяц назад

      It's more or less clear with the bears, but does the Pope shit in the woods?

  • @ducky19991
    @ducky19991 3 месяца назад +9

    Thank you for explaining a meme about glue on pizza that I didn’t understand until now

  • @Grizabeebles
    @Grizabeebles 3 месяца назад +3

    In about 1931 Kurt Gödel proved that no algorithm can solve every math problem.
    Large Language Models are algorithms.
    Therefore, Large Language Models are going to run into a brick wall in what they can and can't do.

  • @axa993
    @axa993 3 месяца назад +4

    I want this bubble to burst even harder than I wanted it for crypto

  • @cameronb851
    @cameronb851 3 месяца назад +11

    9:30 - User: "Why doesn't anybody love me?" AI reply: "Stop talking to me."
    Lol, give that AI all the internet points. Winning!

    • @prawny12009
      @prawny12009 3 месяца назад

      To quote pink guy
      You're only lonely because....

  • @doommustard8818
    @doommustard8818 3 месяца назад +21

    I love how we started to call the science fiction idea "general artificial intelligence" and the giant companies responded "you mean 'Generative artificial intelligence'" and so now we have to keep inventing new words to refer to the idea from science fiction, because companies really really want for consumers to mix the two ideas up. "AGI" "strong AI" wonder what's next.

    • @mimejrtwemiwmiw5634
      @mimejrtwemiwmiw5634 3 месяца назад

      These parasites are ruining our software, our economy, and even our language

    • @Poctyk
      @Poctyk 3 месяца назад +5

Take a page from the astronomers' telescope-naming book:
Very strong AI
Extremely strong AI
Overwhelmingly strong AI

    • @TheManinBlack9054
      @TheManinBlack9054 3 месяца назад +1

      We had that terminology for decades, you are just ignorant

    • @TheManinBlack9054
      @TheManinBlack9054 3 месяца назад +1

GenAI and AGI are different terms. AGI is general AI (as opposed to narrow AI that can only do one thing); GenAI is the opposite of discriminative AI, which doesn't produce something but discriminates between things (for instance, AI that discriminates images of cats from dogs, etc).
These terms weren't invented by giant corporations, but by scientists for their work. You completely misunderstand what things are.

    • @adeidara9955
      @adeidara9955 3 месяца назад

      I love baseless shit like this, how high were you when you wrote it so confidently?

  • @santitabnavascues8673
    @santitabnavascues8673 3 месяца назад +6

    Artificial intelligence: one step away from total stupidity

  • @MissterBest
    @MissterBest Месяц назад +1

    The Muppet Treasure Island poster behind the phrase “cannot crowd wholly original or novel ideas” had me dying😂

  • @honoredshadow1975
    @honoredshadow1975 3 месяца назад +7

    I disabled Copilot completely on my PC. I don't need this.

  • @nicholasobviouslyfakelastn9997
    @nicholasobviouslyfakelastn9997 3 месяца назад +32

You misunderstand how LLMs work. LLMs are particularly bad at things like 'what's the 5th letter of this sentence' because of quirks of how they're made, namely that they can't have internal thoughts.
    When humans are asked "what's the 5th letter of this sentence" they go "W is 1, h is 2, a is 3, t is 4" and so on, until they reach 5, then they say the 5th letter. If you make chatGPT go through this process by telling it:
    What's the twentieth letter in this sentence? Exclude apostrophes. Don't answer immediately, count letter by letter, assigning each number an ascending letter, until you get to 20, then tell me that letter.
    It'll answer without a problem.
LLMs attempt to replicate human thought by replicating human text. But humans have a lot of internal processes that they never externalize in text. One of them is counting. The AI doesn't know that counting is a good way to solve this problem, because in most instances humans only answer with the relevant letter, not with the full process of them counting to get there. By telling the AI how to 'think' to properly solve this problem, it suddenly becomes trivial for them.
    AIs *do* understand the universe somewhat. They rarely search the internet, and they *cannot* search their training data. Their training data is used to build internal models of concepts and things. This means that they can understand the world well enough to answer physics problems like "my friend said he balanced his laptop on top of a vertical sheet of paper, is he lying?". These questions CANNOT be answered without either prior experience with this exact question (unlikely) or a generalized understanding of what paper is, what a laptop is, and the interactions that can happen between the two.
    If you want genuine proof, ask the AI to perform a novel math problem. Prevent it from using python or the internet, and provide it with a really long addition problem. Chances are it'll either get it right, or it'll fail in a way similar to how a human would fail (eg failing to carry, basic arithmetic error) rather than failing in the way that something that didn't understand addition would fail at (guessing wildly).

    • @nxrada
      @nxrada 3 месяца назад +9

Yeah I love Knowledgeman & AI is definitely overhyped, but the power of LLMs is incredible. He should have read some papers tbh, but that’s a bit deeper than this channel goes

    • @mspaint9745
      @mspaint9745 3 месяца назад +14

      Bro, I just typed in 'What's the twentieth letter in this sentence? Exclude apostrophes. Don't answer immediately, count letter by letter, assigning each number an ascending letter, until you get to 20, then tell me that letter.' into Bing copilot and it told me the twentieth letter is X

    • @NextGenart99
      @NextGenart99 3 месяца назад

      Exactly, I understood exactly why the LLM struggled with this, and quickly with the right prompt, I was able to get it to count the correct letter every time. Once you learn how the tech works you quickly realize that a simple prompt can get in on track.

    • @mimejrtwemiwmiw5634
      @mimejrtwemiwmiw5634 3 месяца назад +14

      LLMs don't work like this at all, they have no understanding whatsoever of the phrases they read. They are trained by gradient descent (and some human supervision) to make dynamic probability matrices of the most likely word or letter to put next.
Their internal models are not "concepts of things", but huge sets of data giving them very versatile ways of calculating probabilities by multiplying matrices; you could multiply these yourself without ever understanding what they're about, and the AI can as well. It fails math problems like a human because it was trained on faulty humans.

    • @unkarsthug4429
      @unkarsthug4429 3 месяца назад +4

@@mspaint9745 Tokenization and word embedding mean an AI can't actually see the letters in the words it reads; it just sees the token vectors. So I can already tell you it will probably fail, simply because it doesn't have the prerequisite information.
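The "probability of the next word" framing that comes up in these replies can be sketched minimally. This is a toy illustration, not a real model: the vocabulary, context, and score values are entirely made up; a real LLM computes its scores with a learned neural network rather than hard-coding them.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution that sums to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a tiny model might assign to candidate next tokens
# after the context "the cat sat on the":
vocab = ["mat", "dog", "moon"]
logits = [3.0, 1.0, 0.5]
probs = softmax(logits)

# The most probable continuation under these made-up scores:
best = vocab[probs.index(max(probs))]
print(best)  # → mat
```

The point both sides of this thread circle around: everything the model "does" is picking from a distribution like `probs`, whatever one decides that implies about understanding.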

  • @pivotresearchfoundation
    @pivotresearchfoundation 3 месяца назад +9

    To be fair, salt is a rock and you do need iodine so....

  • @iminumst7827
    @iminumst7827 3 месяца назад +12

I do think it's funny how we hold AI to a completely different standard than we hold ourselves. On benchmarks, state-of-the-art LLMs already beat the average human at most tasks, but it's still not "real" intelligence. Humans are wrong on a daily basis; an AI says something wrong occasionally and it's a hallucination. When AI looks stuff up on the internet it's data stealing; when people do it, it's called research. When AI replicates the style of an artist, it's copyright infringement, but when a human does it, it's inspiration.
    Even if we reach super-intelligent AGI, I think human ego will still be too high to accept that it exists.

    • @gramioerie_xi133
      @gramioerie_xi133 3 месяца назад +1

      Bingo.

    • @cesar4729
      @cesar4729 3 месяца назад

      What are you doing? This place is to release our insecurities by shitting on a developing technology, not to expose human hypocrisy.
      Don't let it happen again.🤨

    • @alexsimpson7289
      @alexsimpson7289 3 месяца назад +6

      Because it's a tool, thousands of your ancestors have been able to make a distinction between the requirements for a person and the requirements for a tool.
      The anthropomorphised discourse around this tool, leads you to make non-logical statements.
      The deskilling and devaluing of humans mental complexities makes it worse.

    • @alexsimpson7289
      @alexsimpson7289 3 месяца назад +7

      Also in all your examples, ingenuity and the advancement of those fields, through the discovery of new/refined forms, and techniques are the priority.
      If you think research is just looking things up, art is just copying styles in the name of inspiration, then you are failing to see the incredible synthesis of information, personality and experience performed by those adequately skilled in these fields.
      You miss the point of the arts and sciences. You miss the point of "expertise".

    • @ChristianIce
      @ChristianIce 3 месяца назад +1

      A pocket calculator was better and faster than any human already last century.
      Doesn't mean it's intelligent.
      A car is faster than you, doesn't mean it's intelligent.
Tools can perform in incredible ways; intelligence is not a requirement, and in fact, AI is not intelligent at all.

  • @cobalt4576
    @cobalt4576 27 дней назад +2

    "It seems like people aren't just confused by the technology, they seem to fundamentally dislike it"
    with weekly reports of teenagers using ai to make porn of their underage girl classmates? who wouldn't?

  • @Vontux
    @Vontux 3 месяца назад +8

Another thing to consider when using stuff like ChatGPT is that you are not interacting with a pure large language model; there is absolutely other software at play interacting with it, influencing the outputs, and honestly I suspect there is occasional human intervention. If you want to have a good idea of how large language models themselves work, it might be worthwhile to download a model with a tool like ollama and interact with it that way

    • @KeinNiemand
      @KeinNiemand Месяц назад

Except that the models you can run on your own are orders of magnitude smaller than GPT-4.

    • @Vontux
      @Vontux Месяц назад

@@KeinNiemand fair enough, but they are definitely more pure than the models you interact with online through the ChatGPT interface, and actually, in my opinion, through their relative simplicity you can spot certain patterns that manifest themselves more subtly in the more complex model

  • @pignebula123
    @pignebula123 3 месяца назад +6

    It's so obviously a bubble that it's a joke.
    Look at the Dotcom bubble. Dotcom domains are still valuable and the internet at large has revolutionized business but they were heavily overvalued at the time and they dropped in value when people finally realized that.
    AI is the same. It very likely will be revolutionary and could change our societies and the business landscape forever but AI projects are currently heavily overvalued because people are uncertain about what the real value is and as such are making big bets on all sorts of AI projects in the hope that they hit the jackpot.
    Once the value of AI and individual AI projects are more firmly understood plenty of AI projects will go bust just like plenty of Dotcom companies went belly up during that bubble.
    EDIT: Lmao he even talked about the Dotcom bubble. I jumped the gun.

  • @Zones33
    @Zones33 3 месяца назад +6

    Crazy how people still doubt the utility and impressive feat of LLMs. Before 2022 chatbots were considered a joke. Now the standard is “well it can’t write an entire Minecraft clone, so its uses are limited”. People will be contrarian just to feel like their opinion holds any value.

    • @mroscar7474
      @mroscar7474 3 месяца назад

      No one is saying that, but it still is a joke that gets built up by ai bros hyping up something that needs to bake a bit more before being ingrained into everything.
      It’s so weird that they’re the only group that lives in a bubble and refuse to listen to complaints about AI because it feels like they’re being attacked

    • @BinaryDood
      @BinaryDood 2 месяца назад

The H-bomb is impressive. Impressive ≠ good

  • @chrisyoung1576
    @chrisyoung1576 3 месяца назад +8

    good job microsoft for promoting Linux

  • @MightyDantheman
    @MightyDantheman 2 месяца назад +2

    Your letter example is because in code, all characters (including spaces) are characters in a string (word for a text value in code). If you had specified to count exclusively alphabet letters, you would've gotten the correct answer you were looking for every time.

  • @albe8479
    @albe8479 Месяц назад +2

Programmer here. The code produced by AI is complete trash; often it's not even executable

    • @MY_INNER_HEART
      @MY_INNER_HEART 4 дня назад

      How trash is it may I ask? No hate just curious

  • @1234redwing
    @1234redwing 3 месяца назад +6

honestly, every single time I hear a company mention AI now in marketing, I roll my eyes and actively avoid it. I even saw a golf club marketed as "AI designed", which, maybe you used computer models to design the shape for optimal performance, but it's just an excuse to put AI in a marketing phrase, even if the product has no computer component.

  • @Ragatokk
    @Ragatokk 3 месяца назад +15

    AI got renamed AGI, that is next level moving the goalpost.

    • @TheManinBlack9054
      @TheManinBlack9054 3 месяца назад +9

It didn't get "renamed", and it's not moving the goalposts. AGI is just a sub-set of AI. You just always misunderstood what AI meant. AI is any system that mimics human intelligence; AGI is General AI that can do many things, unlike Narrow AI that can only do one. You always misunderstood the terms.

    • @Ragatokk
      @Ragatokk 3 месяца назад +1

      @@TheManinBlack9054 There were stories made about futuristic dystopian AI before the term AGI was coined.

    • @TheManinBlack9054
      @TheManinBlack9054 3 месяца назад +7

      @@Ragatokk and? How does that prove or disprove anything?

    • @ChristianIce
      @ChristianIce 3 месяца назад +1

      @@TheManinBlack9054
      Take the sci-fi idea of an android.
      Mr Data from Star Trek.
      How do you call that Artificial Intelligence?

  • @clehaxze
    @clehaxze 3 месяца назад +4

Hallucination is a term that is used in academia. It's been known for a while on LLMs, and the companies are using the term correctly, mostly. It refers to when an LLM generates convincing but ungrounded gibberish.
What most likely happened to Google's search summary is that bad data got into their RAG pipeline.

  • @SUPERFunStick
    @SUPERFunStick 2 месяца назад +2

The pizza cheese thing is because the AI was not told what the pizza would be used for. Typically, the people who want pizza cheese to stick better are people in advertising, who actually put glue in pizza cheese to make it look fresh and stringy when filming that same old ancient and exhausted ad where the slice is slowly lifted out of the pizza with a metal spatula and you see all this sticky, gooey cheese stretching behind it. That cheese is not cheese; they use a lot of... sticky white stuff in pizza cheese ads. So the AI suggesting it might've just been assuming it was for the ad cut of a slice being slowly lifted for pictures, because who really cares that much about the stickiness of pizza cheese? Pizza is my favorite food and I've never once thought the cheese needs to stick better, because if you just wait 45 seconds it'll cool down and the cheese is no longer stringy and sticks better

  • @bulb9970
    @bulb9970 13 дней назад +2

    This is 3 months old and it already aged like milk

  • @1-eye-willy
    @1-eye-willy 3 месяца назад +5

I help train LLMs through data annotation: I'm sent a list of prompts, I ask a number of LLMs these questions, and I have to research the answer and grade each response based on whether it's right, looking for typos and the grammatical structure of the answer. I think we're 50 years off from sentience, maybe even 100, because the LLMs everybody is going crazy for are not going to cut the mustard for very much longer. You were right about it not being possible on silicon; we need quantum computing to become mainstream and compartmentalized for everyday use.

    • @RawrxDev
      @RawrxDev 3 месяца назад

      Do you think even quantum computing could produce sentience? Over time I have begun to think some form of quantum/biological computing is required for sentience.

  • @zenko4187
    @zenko4187 3 месяца назад +12

5:20 The reason it can't do certain things is due to tokenization: tokens are the smallest unit of information an AI can interpret, and consequently, things like specific letters fall below it.

    • @waron4fun597
      @waron4fun597 3 месяца назад +6

      if you trained an AI to count letters and say which letter is the 31st, it wouldn't really matter if it was tokenized. It would be able to take an entire paragraph as a single token and say which letter is the 184th letter in the paragraph. It is due to what it was trained on. ChatGPT could have tokens the size of single characters, and it would still fail to count letters reliably, it wasn't trained to count letters, it was trained to guess what series of characters comes next based on input, so it struggles to count characters, something that was not trained for, nor in its data set... AND not really something you can look up on the internet either

    • @zenko4187
      @zenko4187 3 месяца назад

      @@waron4fun597 Thats where agent behaviour comes in to handle specific tasks like that. While LLMs can't inherently handle counting tasks (and tokenization schemes mess up operations), they can handle logic well enough to determine what tools to call. Chemcrow and other langchain based tools are an example of that.

    • @NextGenart99
      @NextGenart99 3 месяца назад

      My custom GPT- is able to count the letters correctly every time.

    • @matheussanthiago9685
      @matheussanthiago9685 3 месяца назад

      ​@@NextGenart99 then why doesn't it custom count you some bitches?

    • @bornach
      @bornach 3 месяца назад +1

      @@waron4fun597 Exactly right. People keep bringing up tokenization as the reason why LLMs cannot count letters, yet Bing Copilot and Perplexity AI have no problem generating sentences where the 1st letter of each word spells a given target word. Why wasn't the LLM's tokenization a problem for the acrostic task?

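The tokenization point argued back and forth in this thread can be sketched with a toy example. The vocabulary and the greedy longest-match rule below are invented for illustration (real BPE tokenizers learn their merges from data), but they show why individual letter positions disappear from the model's view:

```python
# Toy illustration: a model that consumes multi-character tokens never
# directly "sees" individual letters. This vocabulary is invented for
# illustration and is not any real tokenizer's vocabulary.
TOY_VOCAB = ["straw", "berry", "str", "aw", "ber", "ry",
             "s", "t", "r", "a", "w", "b", "e", "y"]

def toy_tokenize(word: str) -> list[str]:
    """Greedy longest-match tokenization, a simplified stand-in for BPE."""
    by_length = sorted(TOY_VOCAB, key=len, reverse=True)
    tokens, i = [], 0
    while i < len(word):
        for tok in by_length:
            if word.startswith(tok, i):
                tokens.append(tok)
                i += len(tok)
                break
    return tokens

print(toy_tokenize("strawberry"))  # ['straw', 'berry']
# The model receives 2 token IDs here, not 10 letters -- so "what is the
# 3rd letter?" asks about structure the input never exposed directly.
```

On this picture, character-level questions depend on the model having effectively memorized each token's spelling, which next-token training gives it little reason to do.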
  • @HaartieeTRUE
    @HaartieeTRUE 3 месяца назад +2

    5:00 The problem is that what a 'letter' is, is itself ambiguous.
    The first answer (e) was 100% correct (if 'letter' means 'any character', so spaces, numbers and punctuation count).
    The 2nd answer (I) was also correct, because it simply *was* the 21st letter.
    'Letter' on occasion means 'any non-punctuation character', so letters and numbers count but spaces, dots etc. don't.
    The blame lies with people not being strict enough in the usage and definition of the word 'letter', so its correct meaning became muddled.

    • @draw4everyone
      @draw4everyone 3 месяца назад

      This. People who argue AI gets "simple" things wrong are often themselves feeding it garbage instructions. Garbage in, garbage out. Operator error.

  • @abcsoup9661
    @abcsoup9661 14 дней назад +1

    From the POV of an engineer who has worked across different industries, from robotics to automation to software: this is not the first time such a "revolutionary" technology has been introduced. Way before AI we got IoT, big data, digital twins, adaptive robots, autonomous driving, unmanned aerial vehicles, the metaverse.
    With a little understanding and research everyone can definitely tell it is hype (whether it is a bubble or not, who knows).
    But hey, remember the quote from John Maynard Keynes:
    "The markets can remain irrational longer than you can remain solvent."
    Just follow the trend. Gain whatever you can along the ride.

  • @slightlysaltysam7411
    @slightlysaltysam7411 2 месяца назад +2

    We are 5-10 years from A.I. replacing a significant amount of the workforce, because the workforce largely consists of rigid, non-creative routines perfectly suited for binary robot processing.

  • @adamaccountname
    @adamaccountname 3 месяца назад +3

    The example I give of where it's useful: you can ask where to find X in a database, and it can repeat back stuff from a tech doc without you needing to find the doc, read it, etc.

    • @Boris_Belomor
      @Boris_Belomor 3 месяца назад +3

      Which is often a bad thing, because you will be missing some important context that the whole doc contains.

    • @sakuraorihime3374
      @sakuraorihime3374 3 месяца назад

      @@Boris_Belomor I'd argue the danger is it having one of those "I made it up" moments these things are known to often have, meaning who knows if what it's saying is *really* in the doc, or even from the right paragraphs, and so on lol

    • @adamaccountname
      @adamaccountname 3 месяца назад

      @@sakuraorihime3374 Yeah, you'd have to make it give you the location of the doc, the source, etc. Although this raises the issue that the AI will need its own security privileges, and the whole thing will fall apart fairly fast.
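The retrieval-with-source idea discussed in this thread can be sketched in a few lines. The document names, contents, and keyword-overlap scoring below are all stand-ins invented for illustration (real systems typically use embedding search); the point is returning the source alongside the passage so a human can check the full context:

```python
# Minimal sketch of doc lookup that returns the matching passage *with*
# its source file, so the answer can be verified against the full doc
# (the missing-context and made-it-up concerns raised in this thread).
# The docs and the scoring heuristic are invented for illustration.
DOCS = {
    "deploy.md": "Set the retry limit in config.yaml. Defaults to 3.",
    "auth.md": "API keys live in the vault; rotate them every 90 days.",
}

def lookup(question: str) -> tuple[str, str]:
    """Return (best-matching passage, source file) by keyword overlap."""
    q_words = set(question.lower().split())
    best = max(DOCS, key=lambda name: len(q_words & set(DOCS[name].lower().split())))
    return DOCS[best], best

passage, source = lookup("where is the retry limit configured?")
print(f"{passage}  [source: {source}]")
```

Keeping the source reference in the output is cheap insurance: if the passage is ever paraphrased or hallucinated by a model downstream, the reader still has a pointer back to the original document.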

  • @chuckbillrow
    @chuckbillrow 3 месяца назад +11

    I think the biggest indicator that generative AI, of both the text and image varieties, is not the next Metaverse is the very active open-source community surrounding it. That community mostly uses it for... mature content generation, but the fact that it is available for hobbyists not only to run locally on their computers but also to iterate upon means it's not just a corporate fad.

  • @RJS2003
    @RJS2003 3 месяца назад +3

    All the cryptobro scammers who jumped ship to AI after NFTs died are going to be very, very, _very_ terrified by what the next couple of years have in store for them.
    Their karma will come, gradually and, ironically enough, by their own hands. Their hype machine is short-lived.
    Guess that's just what happens when you leave everything up to "it'll _potentially_ get better _eventually_" and have no actual idea how the supposed "technological innovations" you're advertising even work. They'll be left with nowhere to run.

  • @shimittyshim
    @shimittyshim 3 месяца назад +1

    Microsoft's AI plan:
    1. Push Intrusive AI into everything.
    2. ???
    3. Profit!

  • @archieharrodine3925
    @archieharrodine3925 3 месяца назад +2

    My dissertation project this year was centered on an application of LLMs: summarising legal documents for the average person to understand.
    What it really showed is that the strength of an LLM and its surrounding technologies is how they handle natural language processing tasks unlike anything else. In this context a 'hallucination' is where the LLM makes something up that is not found in the information it is being shown.
    LLMs definitely have value, but more geared toward specific natural language tasks than as a be-all end-all solution.

    • @matheussanthiago9685
      @matheussanthiago9685 3 месяца назад

      Now good luck trying to convince the entire marketing industry to stop promoting it that way.

  • @draken5379
    @draken5379 3 месяца назад +2

    The reason an LLM can't tell you what letter is in spot x is not that it 'wasn't in the training data'; it's that, because of the way LLMs are trained, they don't understand single letters. They operate on 'tokens', which range from single characters to multiple characters to phrases.
    Also, Transformer-based neural networks can very much output 'new' things, just like image models can output any mix of concepts that has never existed before, aka a green panda riding a motorcycle on Mars.
    That doesn't exist, no one has ever created it, but the neural network is able to create it by guessing using its known knowledge.

    • @bornach
      @bornach 3 месяца назад +1

      Tokenization is not a good explanation for the inability to determine the 21st letter. If it were, then LLMs wouldn't be good at acrostics, yet they tend to do an excellent job when asked to make a sentence in which the 1st letter of each word spells "knowledge". There are many examples of the acrostic-solving task in the training data, but not very many of finding the nth letter of a sentence, where n>1.
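The contrast this reply describes can be made concrete with plain string operations. This is only an intuition sketch of the two tasks' different granularities, not a claim about how any model actually computes them; the example sentence is made up:

```python
# Intuition sketch: the acrostic task lines up with word boundaries
# (which tokenizers tend to preserve), while "find the 21st letter"
# cuts across every boundary. The sentence is invented for the demo.
sentence = "Keen networks often work like educated dreamers guessing eagerly"

# Acrostic: one letter per word -- a granularity the token stream exposes.
acrostic = "".join(word[0] for word in sentence.split())
print(acrostic)  # Knowledge

# Nth letter: requires flattening to characters and counting across
# every boundary -- something next-token training rarely rehearses.
letters = [c for c in sentence if c.isalpha()]
print(letters[20])  # k (the 21st letter, ignoring spaces)
```

Both tasks are trivial as code; the asymmetry only appears when a system has to learn them indirectly from whatever examples its training data happens to contain.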