can we stop ai

  • Published: 13 Sep 2024

Comments • 1.4K

  • @FurtherReadingTV
    @FurtherReadingTV 6 месяцев назад +27

    I saw a take recently that really spoke to me. AI isn't gonna lead to disaster by becoming self-aware, it's gonna lead to disaster because some idiot trusts a bad AI's output.
    Remember that case a while back where a lawyer got chewed out because he used chatgpt to write an argument that referenced cases which never happened? Imagine that but it's an engineer using it to troubleshoot an issue in a factory which triggers a massive chemical release.

    • @DragoNate
      @DragoNate 6 месяцев назад +4

      I agree with this. We are nowhere near a bit of code becoming even remotely self-aware. It's just advanced programs focused on specific tasks that still mostly rely on human interaction in some way.
      I used to think Terminator would become real life. Now I think that's extremely far-fetched, possibly even fantasy.

    • @demodiums7216
      @demodiums7216 6 месяцев назад +1

      The real threat is AI taking most jobs....

    • @DragoNate
      @DragoNate 6 месяцев назад +1

      @@demodiums7216 not really.

  • @wildice7426
    @wildice7426 6 месяцев назад +97

    You impacted me greatly. When I entered the service I wanted to be a PA so that when I got out I can work on becoming a journalist. Found out the wait time was too long on that so I became an IS, and studied in the meantime. Your creation and passion is one of the main two reasons I’m working on becoming a PI instead, in the hopes that I can move from media coverage into something that can make a real difference. It’s a terrifying switch but I’ve never been more passionate about it than I am now. Thank you for that.

    • @E1yaa2a
      @E1yaa2a 6 месяцев назад +4

      Hey fellow coffee fan here!
      When I was 10 years old, I decided I wanted to become a journalist and have stuck with it ever since. I'm in high school now. I'm still hoping for it to be my future career, but I've heard that journalism is the most regretted college major and apparently there's a lack of jobs. Also, I've heard that you've got to have some kind of reputation to actually get a job? I'm no nepo baby and my mother's working day and night to feed us, so I'm getting worried. As someone with more experience than me, could you please provide some advice? Should I go with it or drop it altogether?

    • @More_Row
      @More_Row 6 месяцев назад +2

      @@E1yaa2a Drop it 100%

    • @user-io7yk7qb1k
      @user-io7yk7qb1k 6 месяцев назад

      @@E1yaa2a My advice is not to pigeonhole yourself with a journalism or media degree but to get a solid academic degree from the best university you can get into, as it will open doors into an array of careers that won’t necessarily have any connection to your actual degree subject.
      There are courses and master’s degrees you can do afterwards to specialise if you can’t get an entry-level job, but having a solid academic degree as your foundation will always mean you have options. Whether it’s English, history, science, maths, economics, philosophy, politics, geography, etc., it will demonstrate intellectual capability, and the contacts you make at a good university can also be useful.
      Whether you can make it as a journalist will come down to your determination, grit and a little luck, but you’re more likely to be hired/successful with an academic degree rather than a media one.

    • @dengar96
      @dengar96 6 месяцев назад +4

      ​@@E1yaa2a you don't need a college degree to do journalism. Look at Channel 5, all you need is motivation and curiosity to do journalism nowadays. Unless you want to go into traditional media or be a TV talking head on local news, college isn't the move, especially if money is a concern.

    • @LizT-qx3xl
      @LizT-qx3xl 6 месяцев назад +2

      Coffee has a degree... in chemical engineering. Perhaps get a minor in journalism so you're trained. Then when you get your big break, you'll know what to do with it. Major in... whatever! It'll be your safety thing anyway.

  • @alexroode2659
    @alexroode2659 6 месяцев назад +53

    As someone who studies AI, whether AI turns out to be the greatest blessing or curse comes down purely to how we use it. People in charge (of companies, mainly) will make those decisions. And as far as I have seen thus far, those people make decisions based on short-term solutions for making as much money as possible, with little regard for any potential downsides that may come back to bite them in the ass in the future, or for the way those decisions affect others in disproportionately bad ways.
    So taking this into consideration, I completely understand why people are more afraid than excited about the introduction of AI. We're about to give people with destructive tendencies a tool that can get incredibly dangerous and destructive real quick.
    The only reason I am interested is because I get to develop those systems. If I were not actively involved in AI, I would be equally terrified.

    • @erreyakendo8290
      @erreyakendo8290 6 месяцев назад +10

      Like nuclear energy and the atomic bomb. One can be among the most efficient clean energy sources available (even with the downside of radioactive waste); the other can be an absolute weapon of mass killing that leaves long-term effects.
      Both come from the same technology. To quote Frankenweenie (2012): technology is neither good nor bad; what people do with it is what decides.

    • @TheManinBlack9054
      @TheManinBlack9054 6 месяцев назад +2

      ​​@@erreyakendo8290yeah, no. I think bioweapons are more bad than good, but that's just me.
      And the difference between AI and nukes is that nukes don't have their own goals and subgoals, it's closer to a pilot AND a nuke.

    • @TheManinBlack9054
      @TheManinBlack9054 6 месяцев назад +1

      What about Agentic AI and the dangers of unaligned AI?

    • @zephyrr108
      @zephyrr108 6 месяцев назад

      "As a " - blow it out your ass,,
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass

    • @behindyou702
      @behindyou702 6 месяцев назад +1

      Thank you.
      Edit: He’s comparing A.I to the past events in crypto… that should be enough proof

  • @Hathnotseen
    @Hathnotseen 6 месяцев назад +661

    Unfortunately the only catastrophic occurrence that would motivate regulators to do something... would be skynet becoming self aware

    • @ApolloVIIIYouAreGoForTLI
      @ApolloVIIIYouAreGoForTLI 6 месяцев назад +73

      Nah if the politicians bank accounts start getting hurt by it then you'll quickly see new bills introduced.

    • @lil-soda-boi
      @lil-soda-boi 6 месяцев назад +21

      Then whatever reaction the government had would be met with an "I'm sorry Dave. I'm afraid I can't do that" like HAL 9000...

    • @AidanDotDash
      @AidanDotDash 6 месяцев назад +27

      we barely know how humans have consciousness. it’s insane to even think AI will be able to replicate consciousness.

    • @Kimberly_Sparkles
      @Kimberly_Sparkles 6 месяцев назад +23

      @@AidanDotDash I really find how AI bros talk about it suspect. It's programming running programs. Even if it learns from previous things, it's not making decisions from a place of judgement or critical thought. It's only as good as the data and the programming. It's not a person who inherently prefers blue over green and likes bass more than treble. Its decisions are shaped by averages of what it's exposed to. It can never do anything better than what it's consumed, so ultimately it's limited.

    • @DaftRebel
      @DaftRebel 6 месяцев назад +1

      just like in the movie LOL

  • @AgamYou
    @AgamYou 6 месяцев назад +281

    You should check out the EU’s AI Pact. New laws and regulations will at the earliest pass this fall.
    So all companies with products or services that use AI in the EU will have to follow those laws.
    From what I’ve read it seems to be a start on regulating AI

    • @Kevin_Street
      @Kevin_Street 6 месяцев назад +64

      The EU is much more proactive with this kind of thing than anywhere else.

    • @ypp0p
      @ypp0p 6 месяцев назад +40

      Common EU w

    • @montyvr6772
      @montyvr6772 6 месяцев назад +36

      It doesn't matter what the EU does; if any place on earth has less strict regulations, AI and AI companies will thrive there. It is inevitable.

    • @TheManinBlack9054
      @TheManinBlack9054 6 месяцев назад +15

      ​@@montyvr6772not necessarily, you need to attract talent and capital.

    • @TheManinBlack9054
      @TheManinBlack9054 6 месяцев назад +24

      The EU AI Act doesn't really address the safety and security risks; it's mainly focused on copyright rather than the things that actually matter, as it was influenced too much by EU AI companies.

  • @austinsager912
    @austinsager912 6 месяцев назад +349

    Real void hours, real void topics

    • @TheManinBlack9054
      @TheManinBlack9054 6 месяцев назад +1

      I know this probably isn't the place to say this, but if you really want to learn more about the dangers of AI, I suggest you google Robert Miles AI Safety; he has a great channel.

    • @zalzalahbuttsaab
      @zalzalahbuttsaab 6 месяцев назад +1

      true say bro

    • @CorpsesReborn
      @CorpsesReborn 6 месяцев назад +1

      Real

    • @spark_art1
      @spark_art1 6 месяцев назад +1

      That would be a good banner saying for his YouTube channel.

    • @sleepforever8378
      @sleepforever8378 5 месяцев назад +2

      Real shit

  • @starman1004
    @starman1004 6 месяцев назад +97

    It already duped a bunch of people going to the Willy Wonka Experience...

    • @nejdalej
      @nejdalej 6 месяцев назад +7

      But then again, when people looked back over what happened, they saw that the art and everything was made by AI. Hopefully it'll be a learning moment for people so they're better able to spot it in the wild. Like, look out for things out of place in a picture or passage. It struggles with hands and other small details, which is why the cute critters peeking out from the mushroom forest look cursed af.

    • @Lekgolocator
      @Lekgolocator 6 месяцев назад

      My theory is that the End of the World™️ will be caused by AI + social engineering. I think we are giving ourselves far too much credit in expecting something like Skynet.
      We won’t make it long enough for it to get to that point.

    • @MarioGoatse
      @MarioGoatse 6 месяцев назад +12

      I don't know about that. False advertising has been around since the dawn of advertising. If it's not AI it's Photoshop and if it's not Photoshop it's images from another place.

    • @mr.christopher79
      @mr.christopher79 6 месяцев назад +1

      A 40-buck entry fee that goes towards buying Sudafed to cook dope. One hit and you're on your way to a lifetime experience.

    • @GrumpDog
      @GrumpDog 6 месяцев назад +2

      A human did that, to scam other humans out of $40 each. If AI generated images didn't exist, they probably would've just used screenshots from the movies, photoshopped or something.

  • @GenerallDRK
    @GenerallDRK 6 месяцев назад +349

    AI was cute and fun when it was just meme images and troll posts, but the stuff I'm seeing now is absolutely terrifying.

    • @jennywarren
      @jennywarren 6 месяцев назад +30

      Even then it was stealing from artists

    • @stints
      @stints 6 месяцев назад +15

      Its positives far outweigh any negatives.

    • @popcatru1149
      @popcatru1149 6 месяцев назад +30

      @@jennywarren "It" wasn't stealing from artists. The people responsible for the way AI systems trained did. I don't know why people keep making this distinction as if people weren't involved.

    • @serayog137
      @serayog137 6 месяцев назад +42

      @@popcatru1149 because it's a pedantic distinction to make.

    • @g.r.o.g.u.1892
      @g.r.o.g.u.1892 6 месяцев назад

      @@popcatru1149 People thinking beyond a tweet or headline???? LMAO
      Its like people forgot about how people lost their minds over deepfakes.
      Or people who trace and steal art without AI?

  • @emaemason2229
    @emaemason2229 6 месяцев назад +25

    I hope the dystopia is leading more towards Wall-E than Terminator 😅

    • @myscreen2urs
      @myscreen2urs 6 месяцев назад +1

      We're most certainly moving towards Surrogates starring Bruce Willis🤔

    • @Dave_of_Mordor
      @Dave_of_Mordor 6 месяцев назад +1

      Why do you assume it's going to be a dystopia? Who are you to decide that future for humanity?

    • @think_of_a_storyboard3635
      @think_of_a_storyboard3635 6 месяцев назад +1

      it's going to be terminator PRETENDING to be wall-e, everyone knows that!

    • @Dave_of_Mordor
      @Dave_of_Mordor 6 месяцев назад +1

      @@think_of_a_storyboard3635 Why do you guys just assume only the worst?

    • @Jaythesparrow
      @Jaythesparrow 6 месяцев назад

      @@Dave_of_Mordor If you paid attention to anything going on in the world, you would understand that we are heading toward a dystopia, with some countries already there. Especially in the last 4 years. It’s not assuming the worst, it’s being observant and not some ignorant sheep.

  • @TheMelMan
    @TheMelMan 6 месяцев назад +176

    It needs to be strictly regulated somehow. As much as I hate what governments do when they have authority over a lot of things, AI poses so many international security threats in this digitally connected world. Not to mention all the wrongful convictions that happen even with evidence to the contrary. Misinformation is already super rampant, so we need to be proactive rather than reactive. Remember the scene in Captain America: Civil War where the Winter Soldier was framed for bombing the UN conference? That will be real life if things go unchecked.

    • @Miranox2
      @Miranox2 6 месяцев назад

      How is regulation going to stop criminals or rogue governments? The only thing the regulations will stop is law-abiding citizens from building tools that detect fake content.

    • @portanrayken3814
      @portanrayken3814 6 месяцев назад +7

      Regulation is very slow; by the time it is actually regulated, we will already be fked.

    • @SuperLifestream
      @SuperLifestream 6 месяцев назад

      Regulation will not stop it. AI will soon go the way of weapons: our governments will have to develop it because other governments are doing it. The only thing the government can do is regulate how you use it.
      I don't think the government can regulate someone making a fake human to appear in a fake video, say fake words and do fake acts, and with even the most basic version of that technology, one day we won't be able to tell if it is real or fake.
      A president could go to a different country and all of a sudden a video appears of them doing something. It becomes he-said-she-said whether it actually happened.

    • @TheMelMan
      @TheMelMan 6 месяцев назад +5

      @portanrayken3814 that's the annoying part. They wait for irreversible catastrophic events before they can act.

    • @SyphexGaming
      @SyphexGaming 6 месяцев назад

      Governments regulate it? Lol. They're the ones pushing its advancement. The only regulation they'll be pushing for, once it's at their acceptable level, is asking for the key. And by asking, I mean taking.

  • @sophie____
    @sophie____ 6 месяцев назад +15

    As an AI researcher, I find myself less concerned about “AI taking over” than about “AI taking over and being wrong”. The biggest hurdle we face is not getting correct answers from models, but getting answers that are correct for the right reasons. I research medical image models, and there are lots of reports in the literature of AI models detecting disease through random correlations in the image (e.g. a study by Maguolo et al. found that an AI could detect covid on images with the lungs occluded by a black box, while DeGrave demonstrated AI models relied on random markers and shoulder positions to “detect covid”). We see cool things from OpenAI because they are trained on truly gigantic datasets, but in specialist applications we rarely have enough data to reliably train models. Nor do we have a reliable method for AIs to self-report when they are wrong. It really is how Coffee said: until there is a huge public failure of AI affecting millions of people, regulators and governments aren’t going to do anything… It’s one of the only reasons I feel driven to keep working in the field, because we need to push the creators of AI models to be ethical and responsible in what they produce.
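
    A minimal sketch of the kind of occlusion sanity check described above (in the spirit of the Maguolo et al. experiment): black out the lung region and see whether the classifier still "detects" the disease. If accuracy barely drops, the model is probably keying on spurious cues. `model`, `loader`, and `lung_box` are hypothetical placeholders, not anyone's actual setup.

    ```python
    # Hedged sketch: occlusion sanity check for shortcut learning.
    # `model` is an assumed trained image classifier, `loader` an assumed
    # labelled DataLoader of chest X-rays; lung_box is an illustrative ROI.
    import torch

    def occlusion_sanity_check(model, loader, lung_box=(64, 64, 192, 192), device="cpu"):
        model.eval()
        x0, y0, x1, y1 = lung_box
        correct_full = correct_occ = total = 0
        with torch.no_grad():
            for images, labels in loader:
                images, labels = images.to(device), labels.to(device)
                preds_full = model(images).argmax(dim=1)

                occluded = images.clone()
                occluded[:, :, y0:y1, x0:x1] = 0.0  # black box over the lungs
                preds_occ = model(occluded).argmax(dim=1)

                correct_full += (preds_full == labels).sum().item()
                correct_occ += (preds_occ == labels).sum().item()
                total += labels.size(0)
        # Similar accuracy with and without the lungs visible is a red flag.
        return correct_full / total, correct_occ / total
    ```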

    • @zephyrr108
      @zephyrr108 6 месяцев назад

      "As a " - blow it out your ass,,
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass
      blow it out your ass

    • @drewo.127
      @drewo.127 6 месяцев назад +1

      This is pretty much most of what I feel about AI! It can be used for good just as easily as it can be used for evil! The issue is when the people both developing and using it don’t have the best of intentions, or just don’t care about the consequences of carelessly using such a powerful tool with no guidelines, or vague guidelines!
      You seem to be one of the people that we need in the computer/coding/robotics field to help stop other, less ethical, more greedy, efficiency hungry people and companies from programming and using this powerful software for…well, malicious, greedy, short-term selfish reasons!
      As an artist, creative, and fan of technology, I salute you!
      (Side note: I'm considering downloading an open-source AI model, probably Stable Diffusion, because out of all the AI tools it seems the most "artist friendly" from what I've heard/read. Stability AI claimed to have changed their dataset to remove copyrighted, illegal, and inappropriate content in an earlier update, plus I think the code can run offline and locally on a device. So I'm considering tinkering with Stable Diffusion (after triple-checking it doesn't contain any copyrighted material and can indeed be run offline with no internet connection), and hopefully I can figure out how to design a truly artist-friendly AI tool that puts artists in the driver's seat! A sketch of an offline local run follows after this list. I have an idea of making the tool have these 4 important features:
      1. Completely offline! No Wi-Fi connection at all!
      2. All data can be easily accessed, modified, deleted, and/or reset!
      3. The artist has to actually consent to have their work in the app used to train the AI, and all that work remains easily accessible once trained on!
      4. The artist/user can easily delete any data, wipe all info from the dataset, and reset the whole thing back to factory settings!
      These are the big things I'd want any AI generation tool to allow users to do!)
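
      A minimal sketch, assuming the Hugging Face diffusers library, of what a local offline Stable Diffusion run can look like once the weights have been downloaded. The checkpoint id, prompt, and output file name are illustrative only, not a recommendation of a specific model.

      ```python
      # Hedged sketch: run Stable Diffusion locally with the `diffusers` library.
      # After a one-time download, local_files_only=True keeps generation offline.
      # The checkpoint id and prompt below are illustrative assumptions.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",
          torch_dtype=torch.float16,
          local_files_only=True,   # refuse to touch the network once weights are cached
      )
      pipe = pipe.to("cuda")       # or "cpu" (much slower)

      image = pipe("a watercolor study of a mushroom forest").images[0]
      image.save("sketch.png")
      ```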

  • @amharbinger
    @amharbinger 6 месяцев назад +30

    Voidzilla isn't wrong, most governments will wait to swing the sledgehammer to fix the most immediate issue. But given how private industries are investing more into it to reduce work forces, I would argue that's the most immediate threat right now. Mass layoffs due to AI especially in the entertainment and art sectors.

    • @prrfrrpurochicas
      @prrfrrpurochicas 6 месяцев назад

      Got to cut the talk and divide through misinformation or other informal methods to cascade the clauses before the manufacturing gets its hold. That's the real prize, and let's not go into the data collection. Remember, AI isn't new at all. It goes back to the 70s and even to earlier math processors. Obviously, those should never be open to the public because people are irrational in their formats. Especially with math, as they usually get a computer to do it and it goes back to .......

    • @Ducaso
      @Ducaso 6 месяцев назад +1

      It’ll be a sign of the times when AI replaces school teachers to save pennies.

    • @DragoNate
      @DragoNate 6 месяцев назад

      government destroyed millions of jobs during the recent [global event] - what makes you think they GAF about mass layoffs due to this?

    • @AClockworkHellcat
      @AClockworkHellcat 6 месяцев назад

      You're assuming the government has any interest in its citizens' job security. Many are quite happy to keep the citizens as dependent on government services as possible.

    • @theforsakeen177
      @theforsakeen177 4 месяца назад

      also in tech

  • @deathdrone6988
    @deathdrone6988 6 месяцев назад +16

    Governments aren't gonna stop AI, since they are actively promoting its growth to gain an advantage over others (e.g. making industry or services cheaper, even at people's expense). The best we can hope for is some regulation after things go horribly wrong.

    • @neoqwerty
      @neoqwerty 6 месяцев назад

      Time to generate a shitton of deepfake porn of the congress and senate?

    • @AClockworkHellcat
      @AClockworkHellcat 6 месяцев назад

      Governments aren't gonna stop AI because they can just use it to fuck the citizens over harder, faster, and more inscrutably than ever before. AI models already score higher than most actual human lawyers on the bar exam. Just imagine the tiny paragraphs an AI could produce, so thoroughly obtuse and incomprehensible in its legalese that not even the greatest lawyers (or judges) could possibly begin to argue against the wielder's position.

  • @bronco6057
    @bronco6057 6 месяцев назад +13

    The best solution I can think of is wait for AI to be advanced enough that we can ask it how we can stop it from growing any further. The problem with this is that there is a large chance that the answer we would get at that time is "too late, sorry" and then we'd be totally out of options.

    • @41-Haiku
      @41-Haiku 6 месяцев назад +3

      This unironically resembles OpenAI's alignment strategy.

    • @TheManinBlack9054
      @TheManinBlack9054 6 месяцев назад +1

      People usually say "pull the plug", but what they don't understand is that if they could think of it, a superhumanly intelligent AI would think of it too.

    • @Sadlittlecloud
      @Sadlittlecloud 2 месяца назад

      Funny you mentioned that; I asked the same question to the Instagram AI bot... it told me that AI can be very helpful to “US” humans and WE should find a way to coexist with technology… I swear the AI thinks it’s human 😭

  • @abcdefo
    @abcdefo 6 месяцев назад +83

    Technology like this is like an oil spill. It's easier to manage if you act fast and proactively, but if you wait it will take years to clean up and the effects will be long-lasting or permanent.

    • @abcdefo
      @abcdefo 6 месяцев назад +9

      Or like trying to put a compressed foam mattress back in its bag after opening it. The more it expands, the harder it is to fit back inside. Eventually it's impossible.

    • @seananon4893
      @seananon4893 6 месяцев назад

      It's too late to try to control it; rogue actors have shared the original files. There will come a day when you're facing the choice of either removing yourself from society or joining the hive.

    • @neoqwerty
      @neoqwerty 6 месяцев назад +12

      @@abcdefo The foam mattress just ends up taking up space and is generally useful.
      An oil spill is a much more adequate comparison. Or a nuclear meltdown. Oil is an annoyingly useful product given our obsession with plastics and how much we use them in our everyday life, and nuclear is the energy we SHOULD all be aiming towards the most, but the security and cost-cutting fuckery around both of those things keeps fucking us over, because it's companies' bottom lines over making shit that actually holds together rigorously.
      (edit: and then we all get stuck with toxic wastelands that ruin the local ecology because those companies chased dollars instead of setting up properly.)

    • @abcdefo
      @abcdefo 6 месяцев назад +4

      ​@@neoqwerty well said

    • @benjyyy
      @benjyyy 6 месяцев назад +7

      The problem is it’s like an arms race. Everyone is scared to stop because it just means others will get there first. If the US and EU stopped do you really think China or Russia would?

  • @Alex-cw3rz
    @Alex-cw3rz 6 месяцев назад +22

    My biggest worry is what happens when it has to start making money; 154 billion has already been invested, and the power needed to run it is enormous. What happens when we have to pay the actual cost? It's going to be incredibly expensive, and it'll have driven out all the alternatives. Not to mention the limited data set: there is less and less for it to train on, so it is going to get more and more generic, meaning a worse product for more money. Just like all these other blitzscaling companies: the enshittification of the service.

    • @UlshaRS
      @UlshaRS 6 месяцев назад

      The issue is that it hasn't hit the investors hard enough to impact the 1% when they can write it off. Until it does those corpos have the useful idiots screaming Muh Free Speeching!

    • @othercryptoaccount
      @othercryptoaccount 6 месяцев назад +3

      They're already starting to use synthetic data for training, and it's not making the models worse; it's making them better.

    • @Alex-cw3rz
      @Alex-cw3rz 6 месяцев назад +5

      @@othercryptoaccount What you said is nonsensical.

    • @gairisiuil
      @gairisiuil 6 месяцев назад

      ​@@othercryptoaccount this is actually a fantastic troll, i'm impressed

    • @TheManinBlack9054
      @TheManinBlack9054 6 месяцев назад +3

      @@Alex-cw3rz It's not, it's actually happening. He is right. How is it fundamentally nonsensical???
      Why are people supporting this clearly misinformed comment?

  • @Coen80
    @Coen80 6 месяцев назад +28

    @2:30
    No, it's easy to explain.
    There are sayings in every language that explain this...
    In my country we call it 'filling in the pit after the calf has drowned'.
    😅
    Also, governments are NOT in charge.
    They are puppets of lobbies.

    • @ninjab33z
      @ninjab33z 6 месяцев назад +8

      "The rules of health and safety are written in blood." That's the one i usually hear. Bit more specific but still the same sentiment.

    • @seananon4893
      @seananon4893 6 месяцев назад

      "Never grab a bull by its horns" is a saying where I am from.@@ninjab33z

    • @natevanek2785
      @natevanek2785 6 месяцев назад +1

      Exactly. No one, including the government has the ability to stand up to corporations and regulate things. Blah blah capitalism blah blah innovation.

    • @TimoRutanen
      @TimoRutanen 6 месяцев назад

      Depends on the country. There are degrees; for example, the Chinese government is quite believably in control of its companies.

    • @Ze_eT
      @Ze_eT 6 месяцев назад

      @@TimoRutanen China is a rather poor example. Would you rather have profit-oriented companies controlling the state, or a profit-oriented state controlling the companies? It ultimately leads to the same result, except maybe that China has the benefit of exerting that same control onto its citizens.
      Germany is a pretty big example of a state that is rather not controlled by lobbies, however this has also resulted in them making many decisions that are driving away companies to less regulated countries.

  • @MrHankHill
    @MrHankHill 6 месяцев назад +14

    Appreciate you covering AI like this!

  • @naotohex
    @naotohex 6 месяцев назад +55

    Money is so entangled with politicians that it would be more surprising if they did anything early. I feel like development of AI tools should be forced to stop until the tools to combat it are put in place and can grow alongside the technology.

    • @GeometricPidgeon
      @GeometricPidgeon 6 месяцев назад +5

      The tools to combat harmful AI are... AI.
      Postponing AI development is impossible; the law doesn't update as fast as AI development does.

    • @TheManinBlack9054
      @TheManinBlack9054 6 месяцев назад +2

      ​@@GeometricPidgeonthen first lets put our money in developing these tools. Even 10% of money currently put into AI would be great.

    • @EstamosDe
      @EstamosDe 6 месяцев назад

      There is no weapon or war that wasn't pushed by a leader. It doesn't matter what the technology is; the only way to stop violence is to stop giving all the power to the 0.001% of people to choose in the name of the 100%.

    • @pierregravel-primeau702
      @pierregravel-primeau702 6 месяцев назад +1

      AI doesn't make money, but it does a lot of politics...

    • @SeventhSolar
      @SeventhSolar 6 месяцев назад

      @@TheManinBlack9054 In case you weren't listening, the tools are AI. We cannot _first_ put money into them, they _are_ the thing we're scared of. Only well-aligned AI can stop mis-aligned AI. That's why the government won't stop it, because that's equivalent to shutting down the only thing we could make to protect ourselves if China gets there first.

  • @ario4550
    @ario4550 6 месяцев назад +96

    If the question is whether we can stop AI, the answer is no. But if the question is whether we can harness AI to make humanity better off, to that I also say no.

    • @hitoriwa
      @hitoriwa 6 месяцев назад +21

      it's already being used for medical reasons like cancer detection.

    • @BlazeMakesGames
      @BlazeMakesGames 6 месяцев назад

      @@hitoriwa And a lot of those uses are sketchy at best. For example, rather infamously, there was a time when people were training an AI to tell whether or not a mole was problematic, and it got really good at telling them apart during training, until it came to using it on real patients, at which point it went completely off the rails.
      This was because, as extensive analysis and testing eventually revealed, what it was actually doing was noticing that most of the problematic moles had rulers next to them, so it was marking images with rulers as bad and images without them as good. A sketch of the kind of audit that catches this follows below.
      Now obviously that was from a year or two ago and this tech is evolving fast, but the point is that this tech is designed as a black box that functions entirely on pattern recognition and little else. If you teach an AI to identify cancer cells, it doesn't know what cancer is, what a cell is, or why a cancer cell looks different from a normal cell. It just makes guesses based on the data it's been fed. And yeah, that can work in a lot of circumstances, but would you really want your medical diagnosis to come from a machine that has literally zero concept of what medicine even is or what it means to diagnose you with something? A machine that is just making a guess based on what the case happens to look like compared with previous data, data that could easily be tainted by something the original programmers didn't think about, like the aforementioned ruler problem?
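
      A minimal dataset-audit sketch for the ruler problem described above: check whether an incidental artifact is correlated with the label before trusting the model. The CSV path and the `label` / `has_ruler` column names are hypothetical placeholders, not any real dataset's schema.

      ```python
      # Hedged sketch: audit a labelled dataset for an artifact/label correlation.
      # The file name and column names are assumptions for illustration only.
      import csv
      from collections import Counter

      def artifact_rate_by_class(path="mole_metadata.csv"):
          counts, with_artifact = Counter(), Counter()
          with open(path, newline="") as f:
              for row in csv.DictReader(f):
                  label = row["label"]          # e.g. "benign" / "malignant"
                  counts[label] += 1
                  with_artifact[label] += int(row["has_ruler"] == "1")
          for label in counts:
              rate = with_artifact[label] / counts[label]
              print(f"{label}: {rate:.1%} of images contain a ruler")

      # A large gap between classes means the model can "cheat" on the artifact
      # instead of learning anything about the mole itself.
      ```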

    • @AtariBoogie
      @AtariBoogie 6 месяцев назад +4

      I bet this is what the Neanderthals were thinking back when they were extinct…

    • @taraskuzyk8985
      @taraskuzyk8985 6 месяцев назад +12

      dumbest take in this entire comment section

    • @jemainmalzahar7577
      @jemainmalzahar7577 6 месяцев назад +1

      Can it be stopped ? Yes, just turn it off

  • @gabelang5941
    @gabelang5941 6 месяцев назад +93

    Equal parts concern and excitement. We live in the world of older sci-fi movies. What a time to be alive.

    • @Poopeh_le_pieux
      @Poopeh_le_pieux 6 месяцев назад

      Very Wiseau-esque from you

    • @Poopeh_le_pieux
      @Poopeh_le_pieux 6 месяцев назад +2

      Anyway, what about your sex life, Mark ?

    • @TheCatherineCC
      @TheCatherineCC 6 месяцев назад

      I don't think much (if any) scifi covered the enormous mountain of grifting and outright fraud that we are seeing now. Hints of mediocrity for sure, but not the banal "I wrote 300 books using AI and are selling them on amazon by finding suckers" or VC pig butchering.

    • @theupsidedownsmiley
      @theupsidedownsmiley 6 месяцев назад +1

      Agreed!

    • @journials3283
      @journials3283 6 месяцев назад

      iRobot comes to mind yeah

  • @EricDaMAJ
    @EricDaMAJ 6 месяцев назад +5

    Our politicians are from generations confused about how to program a VCR. How are they supposed to regulate AI?

    • @haloimplant7678
      @haloimplant7678 6 месяцев назад

      True and this voter is still making VCR jokes in 2024 we're in trouble

  • @EnigmaticGentleman
    @EnigmaticGentleman 6 месяцев назад +5

    In all seriousness, given that they seem to have hit a roadblock on the generative-text front (they hadn't even started training GPT-5 last I checked), I'm not too worried about horrific real-world consequences anytime soon. The Internet as we know it will be dead by the end of the decade, though.

    • @sonorioftrill
      @sonorioftrill 6 месяцев назад

      Yeah, I'm really interested in how they handle GPT-5. The idea behind OpenAI was to use a simpler transformer with a larger dataset. Crypto coin founder Sam Altman realized that by going from pirating books to scraping the entire internet you would get a predictable improvement in apparent performance, and you could hype that leap up to make it look like your tech was improving in leaps and bounds.
      Of course, people had been talking about doing that for decades, but largely dismissed it because for most real applications you want your chatbot to regurgitate a relatively small and factual slice of information, like a manual or help page, not generate random words from the entire internet.
      Turns out, just pretending that the core technical problem with your whole method is actually a little inconvenience that will be solved sometime soon (no, really) works on company executives looking for the next big thing just as well as it did for crypto.

    • @41-Haiku
      @41-Haiku 6 месяцев назад

      GPT-5 is in training. Where are you getting this idea that a roadblock was hit? Better models are being released every few months from various companies. The AI models can even reason about how best to reason about something and improve their performance at runtime that way. A coding AI was just released that can perform all developer duties, and it's getting better at fixing its own mistakes.

  • @greenockscatman
    @greenockscatman 6 месяцев назад +7

    AI is too important to entrust either the gov't or private companies with control over it. I think any AI that's gonna see widespread adoption needs to be open source and transparent.

    • @agentdarkboote
      @agentdarkboote 6 месяцев назад +2

      Exactly, this is what I've been saying! Just like nuclear codes and bioweapons.

    • @hieronymusbutts7349
      @hieronymusbutts7349 6 месяцев назад +1

      ​@@agentdarkboote comparing AI to nuclear weapons is like comparing apples to nuclear weapons. Like, were you dropped on your head or something?

    • @agentdarkboote
      @agentdarkboote 6 месяцев назад +2

      @@hieronymusbutts7349 they're both tools that can be used in destructive ways, fight me.
      I'm obviously not saying chatGPT is like a nuke, but as models become more capable, the two become more comparable. I'm simply saying it doesn't make sense to open source dangerous tools, because of bad actors.

    • @TheManinBlack9054
      @TheManinBlack9054 6 месяцев назад +1

      ​@@hieronymusbutts7349exactly, nuclear weapons aren't as scary as minds that created these nuclear weapons. Intelligence is far superior to simple weapons. What if we open source AI that can create thermonuclear bomb V2?

  • @Rillivid
    @Rillivid 6 месяцев назад +11

    I definitely think it will take a major event. I agree with Coffeezilla that regulators probably won't do much until something happens that forces them to act. As someone who has worked in and out of government, a number of factors can come into play, and sometimes things move very fast or very slow. It depends on the urgency of the matter in the government's eyes.
    The only way I can see regulation coming fast is if it hits us hard financially, structurally as a society, or morally (involving the direct loss of human life, more specifically). Even the context of that event would affect how it was regulated. I feel the most problematic case would be as a tool of war, but the night is still young. More benefits and atrocities could be on their way.
    I can only hope whatever forces the governments of the world to act is something we can recover from. As a fan of history, I think we take the concept of extinction events quite lightly on a world that has had several. Especially if we made it ourselves.
    My position being: I'm cautious but not hopeless. It's the best mindset for surviving as long as possible. It's worked for humans so far. Let's hope that holds.

    • @minze202
      @minze202 6 месяцев назад

      Yeah, it could really discourage governments from doing something if AI is thriving as a business. We know what happens most of the time when some countries put regulations on companies: they'll just move somewhere less restrictive. We see it with labor laws, taxes, minimum wages... man, companies sure are evil more often than not.

  • @nils2868
    @nils2868 6 месяцев назад +2

    I've heard the AI expert Connor Leahy talk about politicians not wanting to do anything out of the ordinary for fear of being blamed for it. We need to make not acting the thing that's out of the ordinary.

  • @JohnJaggerJack
    @JohnJaggerJack 6 месяцев назад +3

    Regulations are usually written in blood.

  • @hedgie_doll2314
    @hedgie_doll2314 6 месяцев назад +7

    I chose my college a few months before the AI art stuff started getting really good. I was either going to go for some type of illustration/character design or fashion design. I am very glad I chose fashion design, because AI is not great at sewing and it can't do custom tailoring rn, so I will have a job for a while at least.

    • @v_enceremos
      @v_enceremos 6 месяцев назад +1

      some one got wealthy parents

    • @seananon4893
      @seananon4893 6 месяцев назад

      You should drop out now, learn a Trade.

    • @moonjelly5
      @moonjelly5 6 месяцев назад +7

      ​@@seananon4893 Tailoring and sewing is a trade.

  • @Aspecscubed
    @Aspecscubed 6 месяцев назад +14

    I can definitely see a near future where an accelerationist uses AI to do something horrific just to force the government to act and that alone is kind of scary.

    • @jdan35
      @jdan35 6 месяцев назад +2

      Ozymandias esque

  • @JamesOKeefe-US
    @JamesOKeefe-US 6 месяцев назад +1

    "I know this is a random thought but it's the void" Beautifully said! Really enjoy these Coffee and love the Void!

  • @JacobODonnellDesign
    @JacobODonnellDesign 6 месяцев назад +29

    No, there is already way too much open source AI out in the wild to stop it now.

    • @adelelopez1246
      @adelelopez1246 6 месяцев назад +1

      None of the AIs that currently exist are the kind that can literally kill everyone. We can still stop the smarter ones from getting trained in the first place, and we'll even get to enjoy the ones we already have.

    • @Miranox2
      @Miranox2 6 месяцев назад +4

      @@adelelopez1246 You can't stop them. There is no conceivable effort which could prevent all AI research. Even if some countries, or even all countries, ban them officially, some will continue research in secret because the power they gain is worth it.

    • @blackspider4
      @blackspider4 6 месяцев назад +8

      Also regulation in US or EU won't stop China from developing skynet.

    • @popjam7744
      @popjam7744 6 месяцев назад

      You need robots to kill people

    • @41-Haiku
      @41-Haiku 6 месяцев назад +1

      ​@@Miranox2 This is not true. The hardware supply chain is a necessary component, and is extremely narrow and would be easy to control.

  • @MundMoriginal
    @MundMoriginal 6 месяцев назад +10

    Yes, please. In my business I use several services from companies where I know the ticket system is AI-controlled and AI-answered. Their responses are so full of fluff all the time... Amazon is one of the biggest offenders. They never had good support for their "partners", but it has become obvious that they want to get rid of all human labor, even business customer support workers.

    • @TemmiePlays
      @TemmiePlays 6 месяцев назад +5

      Not quite true.
      They experimented with fully automated warehouse packing and it takes 10 times longer to do the job of one human, and that's at peak efficiency, with the bots not, like, falling over.
      That doesn't even cover maintenance.
      The AI customer service will drop off once companies see it just literally makes stuff up when it has no answer.

    • @TemmiePlays
      @TemmiePlays 6 месяцев назад +4

      Also, replacing one factory worker with a robot would cost so much more just to own, plus operating costs lol.
      We're fine for a while.
      And any machine 'taking over' needs a human to operate it.

    • @TorIverWilhelmsen
      @TorIverWilhelmsen 6 месяцев назад +3

      @@TemmiePlays Not only does it make up stuff as answers to customers, but those made-up answers are legally binding advice (see the recent case with Air Canada).

  • @TheTrueRandomGamer
    @TheTrueRandomGamer 6 месяцев назад +120

    Unfortunately...no.

    • @smellyvalley
      @smellyvalley 6 месяцев назад

      Shut up Ai

    • @folkenberger
      @folkenberger 6 месяцев назад +3

      Unfortunately, our only chance to stop it got destroyed by us the moment imjaystation got put down by the people, despite him being the first one to document and expose this topic. It's really dumb that we got rid of the only person who was aware and was risking his life.

    • @CyrikDub
      @CyrikDub 6 месяцев назад +10

      @@folkenberger In fairness and hindsight, it was hard to believe anything that dude ever said about anything after he faked his ex-gf's death a few years ago for clout and clicks.

    • @darlingtondeathbeam
      @darlingtondeathbeam 6 месяцев назад +4

      @@kevinortiz2597 *Unfortunately... no.

    • @onsokumaru4663
      @onsokumaru4663 6 месяцев назад +6

      Yep, we are like the dinosaurs looking up in the sky witnessing a tiny shiny object on a collision course with earth. No matter how much you regulate it or stop ai, that "asteroid" is coming for us eventually.

  • @Austinkungfuacademy
    @Austinkungfuacademy 6 месяцев назад +4

    Glad you posted about this. I try to follow what's going on in AI, but so many of the content creators on YT who cover it use such click-baity titles like "This will SHOCK you!" "GAMECHANGER!" "AGI IS HERE!!!" I've gotten numb to it, and stopped looking. But, I did watch the ones about the lawsuit Elon Musk slapped Sam Altman with. It's an interesting development, but at this point, it's like trying to stop a tidal wave with a broom.
    AI has already started taking jobs in the tech world and in the design industry, so this is definitely something to keep an eye on.
    China has upped their game in the robotics+AI arena, recently.
    I'm not sure how much longer attention can be averted away from all of these developments.

    • @hieronymusbutts7349
      @hieronymusbutts7349 6 месяцев назад +4

      I highly recommend the channel "AI Explained" for grounded analysis of updates in the field

  • @XXIIXIIIXXXIXXXIX
    @XXIIXIIIXXXIXXXIX 6 месяцев назад +6

    I love the content ppl make about AI. Yall sound like doomsday preachers on the street 😂😂😂

  • @EdLrandom
    @EdLrandom 6 месяцев назад +28

    I can't recommend the "A.I. is B.S." video by Adam Conover enough, along with Adam's subsequent interviews about AI with Emily Bender and Timnit Gebru. He goes pretty deep into the fear-advertising aspect of AI. TLDR: "My product is so powerful it can destroy the world" is a good advertisement line, because we tend to focus on the destroying-the-world part without questioning the "my product is so powerful" part.

    • @TheManinBlack9054
      @TheManinBlack9054 6 месяцев назад +1

      No, you should not recommend it. That video is itself nothing but B and S. IT'S A COMPLETE IGNORANT LIE. The guy literally ignores the ENTIRE field of AI alignment research and says it's manufactured fearmongering by AI companies; this field existed before all of these AI companies, and the people who are most vocal about it are AI scientists themselves, like Geoffrey Hinton, Stuart Russell, Yoshua Bengio, Max Tegmark, etc. They aren't affiliated with any AI company. Yet he just doesn't acknowledge them (most likely because he doesn't even know about them). He also completely downplays AI's abilities and its incredible impact; it's not a gimmick, yet he thinks it is one.
      And Gebru's paper about stochastic parrots is also incorrect, since AI models have been proven by Max Tegmark and his team to have a world model and a theory of mind.
      And then Adam will go and talk about the dangers of conspiratorial thinking and spreading misinformation.

    • @TheManinBlack9054
      @TheManinBlack9054 6 месяцев назад

      I wouldn't recommend it. It's a really bad, misinformed video. If you want to really learn from an AI scientist about the dangers of AI, I recommend you watch a video called "10 Reasons To Ignore AI Safety" by Robert Miles (ruclips.net/video/9i1WlcCudpU/видео.htmlsi=qiFXAJs5zdXq9Qa-)

    • @TheManinBlack9054
      @TheManinBlack9054 6 месяцев назад +1

      I wouldn't recommend it. It's a really bad and misinformed video. He ignorantly proclaims that the entire field of study of AI alignment is nothing more than scaremongering by AI CEOs, when that field of study existed even before these companies did, and the main people who actually warn about misaligned AI are AI scientists themselves, such as Geoffrey Hinton and Yoshua Bengio (AI godfathers), Stuart Russell, Max Tegmark, Roman Yampolskiy and so on, people who do not work for AI companies but for universities. And the people who actually say that AI is perfectly safe (like Andrew Ng and Yann LeCun) do.
      And Gebru's stochastic parrots paper has been proven to be incorrect, with Max Tegmark's team finding strong evidence of AIs having a world model and theory of mind. Adam tries to present a picture of AI being nothing but another gimmick like NFTs, when it's increasingly obvious that that's not true.
      I used to be his fan when he debunked false claims instead of making them. But this video is just full of ignorant yet confident statements that are patently untrue, with him dismissing the entire field of AI alignment as just another corporate ploy, despite the people working in it mainly being from universities or non-profits, instead of actually productively engaging with them.

    • @TheManinBlack9054
      @TheManinBlack9054 6 месяцев назад +3

      I wouldn't recommend that video. He is not an AI safety expert and doesn't consult one. If you want to actually learn why misaligned AI is dangerous I'd recommend you to watch a video by an actual AI scientist called "10 Reasons To Ignore AI Safety" by Robert Miles. It's worth a watch as a counterbalance to his video..

    • @seananon4893
      @seananon4893 6 месяцев назад

      Kind of like going to the moon, and Thermo Nuclear weapons. Both are myths used to control.

  • @michaelk6702
    @michaelk6702 6 месяцев назад +25

    As long as AI is left in the wild, Coffee, it's going to change everything, including our way of living. If there are no controls and regulations, it's going to continue to grow.

    • @UncleJoeLITE
      @UncleJoeLITE 6 месяцев назад +4

      It's far worse than that, michael.

    • @SedBuildsThings
      @SedBuildsThings 6 месяцев назад +2

      yes because those drug regulations are working wonders for stopping the drug market... and those traffic laws are stopping people from speeding...

    • @blitzentiger1306
      @blitzentiger1306 6 месяцев назад +8

      ​@@SedBuildsThingsso you want no laws sure that will work out

    • @cyrussilver8230
      @cyrussilver8230 6 месяцев назад

      It's always funny when people whose impression of what an "AI" is or how it works comes exclusively from science fiction perpetuate such opinions with such confidence. Reminds me of how some people (that I made up for comedic effect, but that probably exist) would go "dude, you're not using a VPN? you should get a VPN, otherwise all your data is gonna be stolen and everyone will know about the freaky stuff you've been watching on the internet and they're gonna locate your exact address by triangulating your IP address, and you're gonna get waterboarded by some government goon in a basement" after watching 45 VPN sponsor ads on YouTube.
      It is, however, not as funny when tech company executives do it to discourage competition.

    • @GDKF0238
      @GDKF0238 6 месяцев назад

      Oh nice strawman. We can nip AI in the bud, how do you propose we nip the chemical imbalance in a person, “in the bud”, that causes them to rob, hm??

  • @maartent9697
    @maartent9697 6 месяцев назад +2

    There is a Dutch saying, and I assume other cultures have something similar: "Als het kalf verdronken is, dempt men de put." Meaning: only after something has gone wrong are measures taken that should have been taken much earlier.

  • @iceman18211
    @iceman18211 6 месяцев назад +3

    We need to accelerate AI not stop it. Government suppression of technology is part of the problem not the solution.

  • @SeowYiZhe
    @SeowYiZhe 6 месяцев назад

    As a person who is in cybersecurity, you are spot on about it. Our only hope is that we are proactive in mitigating the impact, because progress will not stop with the people who are working on it.

  • @archo5gd
    @archo5gd 6 месяцев назад +3

    "Can we stop AI" is entirely the wrong question to ask -- we are nowhere near any actual AI so there's nothing to stop (for the foreseeable future, anyway).
    Altman is only saying things like this for publicity and as misdirection to distract lawmakers and unwilling/unpaid contributors from the copyright infringement via data laundering that they're doing at an unprecedented scale.

    • @TheDirtysouthfan
      @TheDirtysouthfan 6 месяцев назад +1

      IDK, what comes to mind on this is when they asked ChatGPT questions about chemistry and it gave mostly good answers, despite not being trained to do so. An AI in control of the military doesn't exist, but we don't even know where we are yet.

    • @archo5gd
      @archo5gd 6 месяцев назад

      ​@@TheDirtysouthfan It was definitely trained on various textbooks, anyone suggesting otherwise is just lying (and there was a lot of similar misinfo floating around about a year ago - even among supposedly scientifically-minded people, so I don't blame anyone for believing it). That said, the internals of ChatGPT in particular are simply inaccessible to most people in the world, and the only people who have full access are incentivized to lie about it for profit. There is also a concept called "RAG" (retrieval-augmented generation), a fancy name for "including a pre-written answer". Such a task can be easily automated as well (any search engine does this), and for all we know could be (or already is) a part of ChatGPT as well.

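      A minimal sketch of the RAG idea mentioned in the reply above: retrieve a relevant passage first, then hand it to the model inside the prompt. The toy keyword-overlap retriever and the example documents are illustrative assumptions; real systems typically use embedding search over an index, and nothing here describes OpenAI's internals.

      ```python
      # Hedged sketch: toy retrieval-augmented generation (RAG) prompt assembly.
      # The retriever and documents below are illustrative assumptions only.
      def retrieve(question, documents):
          # Pick the document sharing the most words with the question.
          q_words = set(question.lower().split())
          return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

      def build_prompt(question, documents):
          context = retrieve(question, documents)
          return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

      docs = [
          "The reset procedure is described in section 4 of the manual.",
          "Refunds are processed within 14 days of the request.",
      ]
      print(build_prompt("How long do refunds take?", docs))
      ```
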
    • @TheManinBlack9054
      @TheManinBlack9054 6 месяцев назад

      You know other people who warn about this exist, right?

    • @MorbiusBlueBalls
      @MorbiusBlueBalls 3 месяца назад

      ​@@TheDirtysouthfan source: trust me bro

  • @beep5406
    @beep5406 6 месяцев назад +17

    I saw this show years ago where they went to a job expo and were asking people if AI would replace their jobs, then they showed the percentage likelihood of that happening: all the blue-collar jobs had a high percentage, all the white-collar jobs had a low one. I have not met a single blue-collar worker with any hint of being replaced, whilst I've met white-collar workers using it to make their jobs easier.

    • @pawnzrtasty
      @pawnzrtasty 6 месяцев назад

      You forget about the work that real men do. You know, like building roads, houses and dams. Or pumping oil out of the ground. There are tons of jobs AI will have zero effect on. I personally work around flammable materials; robots can't do it unless they are pneumatic. The internet has nothing to do with my job. It's old-fashioned manual labor.

    • @beep5406
      @beep5406 6 месяцев назад +4

      @@pawnzrtasty That was my point: AI has not had an effect on blue-collar work, only white-collar work. How much would it cost to build a robot that can go into a house, into a bathroom, and fix something simple like a leaky tap, when any plumber can have it done in 15 mins with a tea break?

    • @hjpev6469
      @hjpev6469 6 месяцев назад +1

      Technology usually makes workers in a sector more productive at first, then replaces them later when it improves even more.

    • @beep5406
      @beep5406 6 месяцев назад +1

      @@hjpev6469 It typically reduces the number of workers needed, but workers have always still been needed; as this happens, more jobs are created to build the new tools that have allowed workers to be even more productive than was historically possible. AI, though, is not replacing blue-collar workers anytime soon, nor is it likely to replace white-collar workers soon; it may allow them to be more productive, yes, but not replace them.

    • @ChrisS-oo6fl
      @ChrisS-oo6fl 6 месяцев назад

      This is a fundamentally flawed comprehension of the technology and its rate of rapid development. That expo was years ago, with zero anticipation of modern technology. No job is safe. Current technology can technically replace 30-40% of white-collar jobs with proper implementation. Within two years nearly 90% could be replaced. Customer service, any and all analysis, clerical work, journalism, management, social services, healthcare, billing, engineering, coding, sales, marketing, art, graphic design, accounting, and on and on can and will be replaced. 700 jobs were just eradicated by a single company last week and replaced with AI. Blue-collar jobs have already been ravaged by technology, but they aren't safe from AI either. AI and AGI are the core assets required to advance robotics to a point that will drastically affect hard-working individuals. We've already seen massive advances just this month by numerous robotics companies thanks to AI. AI will replace more assembly workers, shipping departments, service workers, transportation, logistics, maintenance, security, janitorial staff and yes, even mechanics. Those jobs that it doesn't replace will be reduced drastically in number thanks to the tools and assets used to streamline them! Denying this reality comes from an adolescent comprehension of the technology and the real world.

  • @grantcivyt
    @grantcivyt 6 месяцев назад +2

    Asking for regulation is a truly awful stance when it's likely that AI is coming no matter what, regulation is likely to do more harm than good, and the most probable outcome is the hamstringing of Western countries to the benefit of less desirable regimes.

    • @natk8541
      @natk8541 6 месяцев назад +1

      Imagine trusting the people who oversaw the Pandemic Measures to even wipe their own asses.

  • @sindurwavesismaturf
    @sindurwavesismaturf 6 месяцев назад +33

    The AI we're seeing currently are machine learning models built for very specific purposes. They're not sentient or even adaptable for other applications without a massive team of developers working on it

    • @wideningcarrot6
      @wideningcarrot6 6 месяцев назад +8

      Thank you. Someone with some sort of comp sci background or understanding. Marketers need to shut the fuck up about AI when they have no technical skill.

    • @Aspecscubed
      @Aspecscubed 6 месяцев назад +1

      What they are currently capable of has the potential to completely delegitimize many forms of evidence, identification, etc. in a year or two; the sheer amount of misinformation capable of being generated could spin the world into total uncertainty. They don't need to be walking robots to be a threat to humanity. You guys look at what currently exists and not at how rapidly it is evolving, like 5 years in the future doesn't even exist for you. It's never going to stop; it will just progress exponentially in a runaway positive feedback loop.

    • @Aspecscubed
      @Aspecscubed 6 месяцев назад

      Why are you assuming this guy has any computer science background or understanding? @@wideningcarrot6

    • @chuck600
      @chuck600 6 месяцев назад

      @@wideningcarrot6 "AI" has been widely adopted on the backend for over a decade, but the moment you give it a little chat box on a website, it's an evil Skynet that will kill everyone (just wait a few months guys, you'll see!!!!!)

    • @jemainmalzahar7577
      @jemainmalzahar7577 6 месяцев назад

      Yes, they are not intelligent. People need to understand that; "artificial intelligence" is just a marketing term.

  • @TheRogueWolf
    @TheRogueWolf 6 месяцев назад +2

    The thing is that, here in the US, _any_ move to regulate something before a tragedy has happened is immediately decried as the government "stifling innovation" and "playing favorites in the market". (And then the tragedy happens, and people scream "why didn't the government do anything to prevent this". We really are idiots.)

  • @PauseAI
    @PauseAI 6 месяцев назад +1

    We can stop this. Over 70% support a pause. We've implemented international treaties before, like the one on blinding laser weapons. And even if a treaty fails: virtually all advanced AI chips come from a single Taiwanese company (TSMC), which relies on a small number of near-monopoly suppliers (notably ASML). We need just one country in the supply chain to press the brakes.

  • @Funnyvid16
    @Funnyvid16 6 месяцев назад +9

    There is another angle to consider when it comes to the government regulating AI: they can also see it as just another business, and they don't like cracking down on a business unless it's major, since a business provides people with jobs and pays the government the thing we all know they love the most.
    Money!
    Like so many things, it has to get worse before they step up. (Hurricanes, school shootings, etc. are all past examples of their slowness.)

    • @seananon4893
      @seananon4893 6 месяцев назад +3

      "The nine most terrifying words in the English language are: I'm from the government, and I'm here to help."

    • @neoqwerty
      @neoqwerty 6 месяцев назад

      @@seananon4893 Nah, it's... well, probably whatever line they used to justify the church being a political entity. It's not like most of known history is the church being corrupt as fuck and having a nasty tendency to crawl back into government systems to get its old political power back. Something something "God gave me the mission to enlighten you all."
      Government saying "I'm the government, I'm going to help you" is only the SECOND most terrifying thing in the English language.

    • @rars0n
      @rars0n 6 месяцев назад +4

      The reality is that the government actually has little incentive to fix any of those problems, and in the case of AI, it is completely unequipped to do so even if it wanted to, because it doesn't understand the first thing about it.
      Then again, another reality is that the fears of AI are completely overblown by people who also don't understand the first thing about it.

    • @TheManinBlack9054
      @TheManinBlack9054 6 месяцев назад +1

      @@rars0n "fears of AI are completely overblown by people who also don't understand the first thing about it" like two of Godfathers of AI: Hinton and Bengio? Or Stuart Russel, the guy who literally wrote a book on Artificial intelligence?

    • @rars0n
      @rars0n 6 месяцев назад +4

      @@TheManinBlack9054 You're just proving my point. Those people are participating in a wider-scale thought experiment, not logically assessing the LLM technology (which is erroneously being referred to as "AI") that we have today and in the near future.

  • @ikatmax
    @ikatmax 6 месяцев назад +2

    Horrific stuff is already happening, e.g. AI voice-cloning kidnapping scams. And have you seen the AI music? That's the reason Universal removed their music from TikTok. It is a scary but also fascinating world we live in.

  • @StarkVandalez
    @StarkVandalez 6 месяцев назад +1

    The coolest thing is going to be video games with constantly evolving storylines, in real time. It will be incredible. But obviously the negatives are huge too.

  • @Velata
    @Velata 6 месяцев назад +1

    Coming from a very different (but related) scenario: the FDA didn't regulate new, emerging chemicals that were once sold as "legal highs" (like bath salts), or certain supplements, until something disastrous happened. Most of the time regulations are brought about only when people start asking for them. It's always going to be reactionary because it was set up to be that way, even when you realise being proactive was probably the better course of action. If the government acts on anything early, the administration gets admonished as "anti-democracy", "anti-business", etc., and most governments don't want to be seen as totalitarian. It's a bit of a poisoned chalice for lawmakers, too.

    • @TheDirtysouthfan
      @TheDirtysouthfan 6 месяцев назад

      Exactly. When I heard that, I immediately thought of the pandemic-preparedness task force that Obama set up, which got cut by Trump as waste and then bit him in the ass. It is partly understandable; the government does a lot of things that are difficult to comprehend. There used to be a senator, I think, who would complain about this or that government-funded study being a waste of money. One was about the reproductive process of a particular worm; of course, that study was later used to develop a pesticide to counter said worm, saving many crops. If someone is wasting government money or overusing their power, it requires a lot of expertise and investigation to know for sure. It all goes back to the lack of trust citizens have in the government.

  • @4RILDIGITAL
    @4RILDIGITAL 6 месяцев назад

    Your insights on the potential impacts of AI, and the parallel to how crypto was perceived, are thought-provoking. It's interesting yet unsettling to think that regulation might only come after some unfortunate incident, as history often dictates.

  • @yomanink
    @yomanink 6 месяцев назад

    I enjoy the void's ability to bring up topics like this. It's an interesting thought, and I don't think it will be stopped.

  • @huimoin
    @huimoin 6 месяцев назад +1

    We need to make sure these big models are open source and accessible to all, not just governments and big tech.

  • @austrich0
    @austrich0 6 месяцев назад +1

    the AI existential crisis warning is the single greatest marketing ploy i've ever witnessed.

  • @ATHLETE.X
    @ATHLETE.X 6 месяцев назад +9

    Can’t stop, won’t stop

  • @Dogo.R
    @Dogo.R 6 месяцев назад +1

    > Problem exists
    > People: "Centralized decision-makers are the solution. Give central entities more unilateral power to coerce and force, then hope they fix the problem. That's the solution."
    > ...
    You sure AI is the thing to fear?
    You do know cooperation rather than exploitation is only optimal for an entity when it doesn't have the power to take away your choice?
    Enough power to force exploitation = exploiting you becomes more optimal than cooperating with you.
    What type of entity is more likely in the future to get that level of power and then realize exploitation is optimal? AI? Or the thing we are asking to save us from AI?
    We sure do love to solve every problem through these centralized coercion bodies...

  • @cedrove7513
    @cedrove7513 6 месяцев назад +2

    I would like extensive regulation implemented as soon as possible, because no, we can't stop it.

  • @jfh667
    @jfh667 6 месяцев назад +17

    The question has never been whether you can stop AI and automation; the question has always been: will we live in a Star Trek society or an Elysium society? And it looks like it will be Elysium.

    • @lordbizzle2424
      @lordbizzle2424 6 месяцев назад +1

      Why Elysium?

    • @neoqwerty
      @neoqwerty 6 месяцев назад +1

      @@lordbizzle2424 I'd also like to know, because I don't think I EVER heard anyone talk about that movie before.

    • @ivani3237
      @ivani3237 6 месяцев назад +1

      @@lordbizzle2424 The guy just hasn't seen Terminator

    • @Szpagin
      @Szpagin 6 месяцев назад +3

      ​@@lordbizzle2424 I fear AI can devalue labour. Because it will make the jobs easier, there will be less incentive to hire skilled (and expensive) employees, while the management will keep whatever money this saves.

    • @hieronymusbutts7349
      @hieronymusbutts7349 6 месяцев назад

      ​@@Szpagin and maybe that's actually a good thing, and you're afraid because you're thinking of a world based in labour-additive theories of value

  • @Private-GtngxNMBKvYzXyPq
    @Private-GtngxNMBKvYzXyPq 6 месяцев назад

    Another issue is figuring out how to establish practical and effective enforcement mechanisms for any contemplated protections.

  • @JoesMadness
    @JoesMadness 6 месяцев назад

    My brother keeps bringing up Wall-E when we talk about A.I.

  • @biteofdog
    @biteofdog 6 месяцев назад +1

    Ilya Sutskever saw this coming: In a documentary by The Guardian, he stated that AI will solve “all the problems that we have today” including unemployment, disease, and poverty. However, it will also create new ones: “The problem of fake news is going to be a million times worse; cyber attacks will become much more extreme; we will have totally automated AI weapons,” he said, adding that AI has the potential to create “infinitely stable dictatorships.”

  • @Hotshot2k4
    @Hotshot2k4 6 месяцев назад

    It blows my mind that I've, on more than one occasion, seen people talk about humanity's future in Wall-E as something aspirational. That it isn't aspirational should not be a hot take; it was very obviously a criticism of consumerism and over-reliance on technology.

  • @mbg9650
    @mbg9650 6 месяцев назад

    Politicians' DNA is spending other people's money, not thinking about what is coming.

  • @Skankhunt-42-
    @Skankhunt-42- 6 месяцев назад

    This will be one of those things where, by the time anyone realises something is majorly wrong, it's going to be too late anyway.

  • @TPaz117
    @TPaz117 6 месяцев назад

    Regulators can’t/won’t do anything even if something horrific happens. Norfolk Southern did a Chernobyl to East Palestine, Ohio just last year and they faced ZERO consequences.

  • @TheRealTimMeredith
    @TheRealTimMeredith 6 месяцев назад

    I don't really have excitement or fear, more fatigue and eye-rolling.

  • @SirGrimly
    @SirGrimly 6 месяцев назад +1

    Fortunately, there was a court case last year that determined that anything generated by AI cannot be protected by copyright (as it was not made by a person, no one owns it)... so there is currently some regulation in the fact that people can't reliably monetize the shit they generate with AI, though it does nothing to stop the issue of the AI companies themselves.

    • @coonhound_pharoah
      @coonhound_pharoah 6 месяцев назад

      That's not what the court case found. It found that AI art generated WITHOUT INPUT FROM HUMANS, which covers almost zero instances of AI art, cannot RECEIVE a copyright.
      Copyright law already covers AI art. It is a derivative work, similar to fanfic or a parody, and is perfectly legal.

    • @SirGrimly
      @SirGrimly 6 месяцев назад

      @@coonhound_pharoah AI art has absolutely zero input from humans, though... that's literally the entire point of AI. The machine does all of the work.
      Also, no, it's not perfectly legal, it's fucking plagiarism... it's been plagiarism since its inception. Just because the laws are not set up to specifically handle the way AI is stealing doesn't change the fact that it has stolen. And theft is illegal.

  • @RedMedicine86
    @RedMedicine86 6 месяцев назад +1

    Regulators have always been reactive and I certainly don't expect any action until AI becomes a financial bane for some hedge-fund capitalist. I think the real answer depends on how much we collectively value the human experience. My hope is that the flood of AI generated inauthenticity will break some of us away from this weird, always-online, algorithmic nightmare we've been stuck in for decades now. This is completely anecdotal, but it has gotten me to invest more time and interest in my local community. I've been reading old literature more. Hell, I've even been creating more just to spite it.
    In reality though, I am very pessimistic because my entire hope is predicated on the western world valuing human life.

  • @oORaytakuOo
    @oORaytakuOo 6 месяцев назад +1

    I think one of the scenarios that would get regulators to do something is if someone used AI to damage the reputation of someone important, or of a company, in a way that negatively affected their stock, kind of like what happened with the Twitter Blue fiasco.

  • @LilaTheMoo
    @LilaTheMoo 6 месяцев назад

    There was a farmer named Vincent Kosuga who presents a great case for what the government does as far as regulation goes. The man was a farmer until he realized how far his control of the onion trade could take him. He managed to control most of the supply line, so when it came to onion futures on the stock market he could manipulate them for his own financial gain, all the while messing up the price of onions. Nothing was done until enough people complained to their congresspeople, at which point, instead of putting in regulations to keep perishables that people rely on for food from being manipulated like this again, they simply made it illegal for onion futures to be traded any further.
    They didn't act on the bigger problem; they simply targeted the biggest example facing the public. This is how our government runs, and is likely to remain so as long as money controls the US government.

  • @MadeOfYpres
    @MadeOfYpres 6 месяцев назад +1

    AI cannot be stopped, for the simple reason that it props up stocks like crazy. The S&P is still in the green because of six tech companies that went all in on AI.

  • @jwgdlm
    @jwgdlm 6 месяцев назад

    The problem is: if THE THING happens and triggers regulation, it just means corporations' and AI labs' hands get tied up, and open-source projects get tied up. After that it's a matter of "Do you want to develop frontier models dancing around the poor regulation, or come work for the government to develop our in-house AI and, later on, AGI?" So no, you can't stop it; you can only move the problem from plain view into the dark. And personally I'd rather have it in plain view and let society build its own brand of awareness and social defenses.

  • @ItsQualitycontent
    @ItsQualitycontent 6 месяцев назад +1

    AI needs regulation for sure. Some basic reasoning needs to be applied to keep AI safe and not a scamming tool. A developer should not be allowed to make your therapist AI also your home-security AI; different kinds of AI will need separation by default. And any tool that combines AI should have human overrides that can be used regardless of power or internet loss to the building or device.

  • @HelloYersoGae
    @HelloYersoGae 6 месяцев назад +2

    We'll know Coffee has been switched with an AI when he praises it as safe and valuable. The future is now

  • @digitalmarketinghumans
    @digitalmarketinghumans 6 месяцев назад +15

    Appreciate you covering people like this!

    • @JustAFishBeingAFish
      @JustAFishBeingAFish 6 месяцев назад +4

      LOL

    • @mikem7084
      @mikem7084 6 месяцев назад +1

      lol

    • @temporelucemtenebris5313
      @temporelucemtenebris5313 6 месяцев назад +1

      Another Julia trades, nice!

    • @digitalmarketinghumans
      @digitalmarketinghumans 6 месяцев назад

      @@temporelucemtenebris5313 Yeah, with fewer subscribers and more videos. Funny how that works. 😂
      The RUclips algorithm truly pushes those high-subscriber channels as top comments. In this case, good for Coffee to say something, but I doubt YT will change that.

  • @hjpev6469
    @hjpev6469 6 месяцев назад

    This is one of the best videos on the AI pause debate. Actually well thought out.

  • @PhilosophyofElivagar
    @PhilosophyofElivagar 6 месяцев назад

    It’s also because legislators are used to debating issues ceaselessly, and the rise of new technology doesn’t allow them to do that for very long. We’re still debating whether to allow capital punishment, let alone artificial intelligence.
    I think it’s a combination of governmental inaction, inexperience, and the lengthy legal process. Mostly, governments have gotten too comfortable pretending to debate issues they don’t intend to fully solve, and then something like AI, which _needs_ an answer, comes along

  • @containercore6832
    @containercore6832 5 месяцев назад

    I was reading on Gary Marcus's Substack about how the peer review process has been completely destroyed by GPT-generated papers. I think that's a pretty big crisis that should catch people's attention.

  • @slykele547
    @slykele547 6 месяцев назад +1

    Was the Willy Wonka experience not horrific enough to mobilise the nations?

  • @miuzoreyes6547
    @miuzoreyes6547 6 месяцев назад +1

    I don't think regulators would be able to do anything at this point. What exactly would be regulated? Even a ban on AI wouldn't prevent someone wealthy from quietly using video or image generation models for propaganda and misinformation.
    The cat's already out of the bag and it's far too late for any regulation. We never really had a chance to begin with, either, with most of the world living in liberal democracies, which are incredibly slow to do anything.

  • @lumski
    @lumski 6 месяцев назад

    Yeah, I agree. The same thing happened with the stock market, banking, and aviation. Rules are often written in blood.

  • @desertdwellintom
    @desertdwellintom 6 месяцев назад +3

    Thanks for dropping this on a Sunday night right before bedtime there Coffee...

    • @thesunreport
      @thesunreport 6 месяцев назад

      Maybe Aliens are AI's attempt to recreate humans after we're all wiped out. Sleep Well.

  • @vanyadolly
    @vanyadolly 6 месяцев назад +1

    It's hard to believe we're so unprepared for AI, given we've been warned about it since the '60s.

  • @mattcowdisease1346
    @mattcowdisease1346 6 месяцев назад

    "SQUIDWARD THEY GOT THE NAVY", "NOT THE NAVY!!"

  • @GWiggz
    @GWiggz 6 месяцев назад

    The government hasn’t done a damn thing about gun violence, and that’s an actual horrific thing, so I don’t see them doing any real regulating. The AI lobby is gonna be huge!

  • @wmpx34
    @wmpx34 6 месяцев назад

    "You know we're doing science here"

  • @firesonic1010
    @firesonic1010 6 месяцев назад

    The thing is, if we stop AI now, our adversaries will continue developing and improving theirs, so it is in our best interest to keep AI going. It's also worth noting that AI at the moment is very, very, and I do mean VERY, dumb. We're not at the stage of AI becoming self-aware. Nowhere near it, nor do I think we'll ever get to that point.

  • @ppkgaming210
    @ppkgaming210 6 месяцев назад +1

    Even if the USA tries to slow AI, China is not going to.

  • @Slinky_Takin
    @Slinky_Takin 6 месяцев назад

    I heard a guy on The Bankless podcast whose argument was basically that AI won't take over the world overnight, and because it would be a gradual process if it were headed in that direction, we would, in theory, have enough time to stop it.

  • @FrankSmith
    @FrankSmith 6 месяцев назад

    They will just wait until 80% of the workforce is laid off then decide to pass some sort of bill to give us $25 a month.

  • @kevincortez6227
    @kevincortez6227 6 месяцев назад +7

    it's not AI yet.

    • @doctordaro2112
      @doctordaro2112 6 месяцев назад

      It is not ASI and probably not AGI but it is AI.

    • @kevincortez6227
      @kevincortez6227 6 месяцев назад +4

      @@doctordaro2112 There is nothing intelligent about it.

    • @TheManinBlack9054
      @TheManinBlack9054 6 месяцев назад +1

      @@kevincortez6227 AI is the name of the field. AI is just any computer program that mimics human intelligence. That's it. Your misunderstanding is your problem.

    • @IneaFaedyn
      @IneaFaedyn 6 месяцев назад

      This is a nothing argument. Learn more.

    • @kevincortez6227
      @kevincortez6227 6 месяцев назад

      @@TheManinBlack9054okay, sounds good thanks.

  • @_IanMRountree
    @_IanMRountree 6 месяцев назад

    Unfortunately, all law is retroactive; that's the model.
    A thing must happen before a concrete decision is available to lawmakers on whether they want it to stop. In some cases this is good; it prevents a lot of vectors for abuse and dictation. But it's not universally good, because as you say, it means we get to wait, -knowing- that a bad event is required for action to be taken.

  • @TheYugilicious
    @TheYugilicious 6 месяцев назад

    I am mostly afraid for less-educated people, like myself, who do repetitive jobs that could easily be replaced by AI (in combination with basic robotics?).
    I am also confused by people like Elon Musk, who on the one hand promote AI and replacing jobs with AI for personal gain, but on the other hand promote having lots of children, and thus more people who will need a job.
    This comes full circle when the consequence of fewer people having jobs is fewer people being able to pay for products and services.

  • @emperorxenu519
    @emperorxenu519 6 месяцев назад +1

    I'm kinda hopeful that model collapse will prove to be a really big problem and this whole thing will be at least somewhat self-limiting.

  • @artform_
    @artform_ 6 месяцев назад +1

    I've tried talking to people about this in real life, including some really smart people like my university lecturers.
    Most people are seriously misinformed about the catastrophic potential of AI; people seem to think it's just a harmless new tech tool. I've had people scoff at me and think I'm crazy.

  • @JamesCairney
    @JamesCairney 6 месяцев назад +2

    "Should people be allowed to play with AI?"
    "Maybe there should be people that decide if people should be allowed to play with AI."
    I'm not sure how "people" can't see how nonsensical this is.
    Which people do you trust more?
    Seeing as AI is just code on a computer, how would the policing of this be done, and by whom?
    Fraud laws already exist; obtaining money through fraudulent means, with AI or without, is already illegal, and the fact that "authorities" fail to control things isn't a reason to give the same "authorities" more laws to use in an incompetent manner.
    These "authorities" will also be the "people" dreaming up military applications for AI, so what regulations, by whom, will help what?
    My personal opinion is that AI isn't as intelligent as people would like to believe. It is excellent at doing "donkey work" that we can't be bothered doing. It can do nothing original.
    AGI will not happen anytime soon; what AI will be is a tool, useful but easy to abuse.
    It's as dangerous as the first disposable lighters, and "people" haven't burned the world down yet.
    People love a horror story, AI is that.

  • @Heisenberg_747
    @Heisenberg_747 6 месяцев назад +1

    We will turn into cyborgs, with all the microchips installed in us. Everyone will become an NPC, as most of us will have the same skills, which can be downloaded and integrated into our algorithm.

  • @macrograms
    @macrograms 6 месяцев назад

    1. That Willy Wonka Glasgow thing. Fear? Pffft.
    2. The world made killer robots BEFORE AI.

  • @piratesaddict
    @piratesaddict 6 месяцев назад

    I am so glad that you used WALL·E as an analogy. That’s the best-case scenario I see when it comes to technology and humanity.

  • @desenagrator
    @desenagrator 6 месяцев назад +4

    The void is the only escape we have from AI

    • @ben.lebron
      @ben.lebron 6 месяцев назад +2

      Plot twist, the void is an AI coffeezilla