4 Experiments Where the AI Outsmarted Its Creators! 🤖

  • Published: 18 Apr 2018
  • The paper "The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities" is available here:
    arxiv.org/abs/1803.03453
    ❤️ Support the show on Patreon: / twominutepapers
    Other video resources:
    Evolving AI Lab - • Unexpected grasping
    Cooperative footage - infoscience.epfl.ch/record/99...
    We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
    Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil.
    / twominutepapers
    Thumbnail background image credit: pixabay.com/photo-3010727/
    Splash screen/thumbnail design: Felícia Fehér - felicia.hu
    Károly Zsolnai-Fehér's links:
    Instagram: / twominutepapers
    Twitter: / karoly_zsolnai
    Web: cg.tuwien.ac.at/~zsolnai/
  • Science

Comments • 1.6K

  • @MobyMotion
    @MobyMotion 6 years ago +5372

    Very important message at the end there. It's something that Nick Bostrom calls "perverse instantiation" - and will be crucial to avoid in a future superintelligent agent. For example, we can't just ask it to maximise happiness in the world, because it might capture everyone and place electrodes into the pleasure centre of our brains, technically increasing happiness vastly.

    • @TwoMinutePapers
      @TwoMinutePapers  6 years ago +1220

      Agreed. I would go so far as to say there is little reason to think a superintelligence would do anything other than find the simplest loophole to maximize the prescribed objective. Even rudimentary experiments seem to point in this direction. We have to be wary of that.

    • @MobyMotion
      @MobyMotion 6 years ago +453

      Two Minute Papers absolutely. The only difference is that as the AI becomes more powerful, the loopholes become more intricate and difficult to predict

    • @michaelemouse1
      @michaelemouse1 6 years ago +600

      So AI would be like a genie that grants you all your wishes, exactly as you ask, in a way that catastrophically backfires. This should be the premise of a sci-fi comedy already.

    • @RHLW
      @RHLW 6 years ago +133

      I pretty much have to disagree. If such a thing can't "think forward" past such a cheat, evaluate whether it's good or bad from different angles/metrics, and figure out that the simple solution isn't always the correct one, then it is not a "super intelligence"... it's just a dumb robot.

    • @scno0B1
      @scno0B1 6 years ago +92

      Why would a robot not choose the simplest solution? We can see that a robot does come up with the simplest solutions :P

  • @teddywoodburn1295
    @teddywoodburn1295 5 years ago +5208

    I heard about an AI that was trained to play Tetris. The only instruction it was given was to avoid dying; eventually the AI just learned to pause the game, thereby avoiding dying

    • @zserf
      @zserf 5 years ago +257

      Source: ruclips.net/video/xOCurBYI_gY/видео.html
      Tetris is at 15:15, but the rest of the video is interesting as well.

    • @theshermantanker7043
      @theshermantanker7043 5 years ago +87

      That's what I used to do XD
      But it got boring after a while

    • @teddywoodburn1295
      @teddywoodburn1295 5 years ago +164

      @DarkGrisen that's true, but the person creating the program basically told the AI that it was about not dying, rather than getting a high score

    • @Ebani
      @Ebani 5 years ago +111

      @DarkGrisen There is no difference then. By not dying it will eventually get an infinite score, so a high score by itself is meaningless; not dying turns out to be the best predictor of a high score.
      He could've easily just removed the pause function too, but it's funny to see the results he got

    • @teddywoodburn1295
      @teddywoodburn1295 5 years ago +33

      @DarkGrisen exactly, I think the lesson in that is that you have to think about what you're actually telling the AI to do
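The pause exploit described in this thread is a textbook degenerate optimum of a survival reward. Here is a minimal sketch; the toy game, its actions, and the reward numbers are all invented for illustration, not the actual Tetris experiment:

```python
import random

# Toy "survival" game: the agent earns +1 for every tick it stays alive.
# MOVE risks death with some probability; PAUSE freezes the game
# (nothing happens, but the agent is still "alive" and still earning reward).
ACTIONS = ["MOVE", "PAUSE"]

def step(action, rng):
    """Return (alive, reward) for one tick."""
    if action == "PAUSE":
        return True, 1.0            # paused: can never die
    return rng.random() > 0.1, 1.0  # moving: 10% chance of dying per tick

def evaluate(policy, ticks=1000, seed=0):
    """Total reward of a fixed policy over one episode."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(ticks):
        alive, reward = step(policy, rng)
        if not alive:
            break
        total += reward
    return total

# Any learner that ranks actions by return will settle on PAUSE,
# because the objective never says the game has to progress.
scores = {a: evaluate(a) for a in ACTIONS}
best = max(scores, key=scores.get)
print(best, scores)
```

The fix is the one the commenters suggest: either remove the pause action from the action space or make the reward depend on game progress, not mere survival.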

  • @DarcyWhyte
    @DarcyWhyte 6 years ago +6520

    Robots don't "think" outside the box. They don't know there is a box.

    • @davidwuhrer6704
      @davidwuhrer6704 6 years ago +715

      That is the secret.
      The researchers who formulated the problem thought there was a box.
      They expected the AI to think inside it.
      But the AI never knew about the box.
      There was no box.
      And the AI solved the problem as stated outside it.

    • @DarcyWhyte
      @DarcyWhyte 6 years ago +120

      That's right, there's no box. :)

    • @planetary-rendez-vous
      @planetary-rendez-vous 6 years ago +233

      So you mean humans are conditioned to think inside a box?

    • @Anon-xd3cf
      @Anon-xd3cf 6 years ago +25

      Darcy Whyte
      No, the "robots" don't know there is a "box" to think outside of...
      AI, however, are increasingly able to "think" for themselves both in and out of the proverbial *box*

    • @milanstevic8424
      @milanstevic8424 6 years ago +49

      The error is simply in trying to describe a very simple "box" while not being able to reconstruct what's actually described. People do this all the time, and this is why good teachers are hard to find.
      The box the AI couldn't circumvent was the general canvas, or in this case the general physics sandbox with gravity acceleration and a ground constraint. This is the experimental reality.

  • @theshermantanker7043
    @theshermantanker7043 5 years ago +1843

    In other words, the AI has learnt the ways of video game speedrunners

    • @doodlevib
      @doodlevib 4 years ago +33

      Indeed! Some of the work done in training AI systems to play videogames is incredible, like the work of OpenAI.

    • @Linkario86
      @Linkario86 4 years ago +27

      Omg... can't wait to see the first AI breaking a speedrun record, simply to see what exploits it found

    • @englishmotherfucker1058
      @englishmotherfucker1058 4 years ago +10

      TAS

    • @ihaveaplan.ijustneedmoney.9777
      @ihaveaplan.ijustneedmoney.9777 4 years ago +15

      Before we know it, they'll be speedrunning the human race

    • @averywatts2391
      @averywatts2391 4 years ago +8

      I would love to see someone put an AI through Skyrim until it can complete the main questline as quickly as possible.

  • @jonathanxdoe
    @jonathanxdoe 6 years ago +5987

    Me: "AI! Solve the world hunger problem!"
    Next day, earth population = 0.
    AI: "Problem solved! Press any key to continue."

    • @MrNight-dg1ug
      @MrNight-dg1ug 6 years ago +13

      John Doe lol!

    • @wisgarus
      @wisgarus 5 years ago +19

      John Doe
      One eternity
      Later

    • @michaelbuckers
      @michaelbuckers 5 years ago +80

      You jest, but limiting the population is literally the only way you can ensure that a limited supply can be rationed to all people at a given minimum. China and India are neck deep in this, but the first world doesn't have this problem, so they think it's possible to just feed everyone who's hungry and that would magically not bankrupt everyone else (the hungry are bankrupt to start with).
      The truth is, poor people are poor because that's what they're worth in a fair and square free market economy. They have no skills and qualities to be rich, they don't get rich through marketable merit, and even if they become rich by chance, soon enough they lose all the money and go back to being poor. Inequality is a direct consequence of people not being identical. Having the same reward for working twice as hard doesn't sound appealing to me, much less living in a totalitarian society that forbids stepping out of line by half an inch in order to ensure equality.

    • @filippovannella4957
      @filippovannella4957 5 years ago +4

      you definitely made my day! xD

    • @SomeshSamadder
      @SomeshSamadder 5 years ago +13

      hence Thanos 😂

  • @anthonyhadsell2673
    @anthonyhadsell2673 5 years ago +2152

    Human: Reduce injured car crash victims
    AI: Destroys all cars
    Human: Reduce injured car crash victims without destroying cars
    AI: Disables airbag function so crashes result in death instead of injury

    • @pmangano
      @pmangano 4 years ago +205

      Human: Teaches AI that death is a result of injury
      AI: Throws every car with passengers in a lake; no crash means no crash victims, car is intact.

    • @decidueyezealot8611
      @decidueyezealot8611 4 years ago +19

      Humans then drown to death.

    • @Solizeus
      @Solizeus 4 years ago +135

      Humans: Teach AI not to damage the car or its passengers.
      AI: Disables the ignition, avoiding any damage.
      Humans: Stop that too.
      AI: Turns on loud bad music and drives in circles to make the passengers want to leave or turn the car off

    • @noddlecake329
      @noddlecake329 4 years ago +119

      This is basically what they did in WWI: they noticed an increase in head injuries when they introduced bulletproof helmets, and so they made people stop wearing helmets. The problem was that the helmets were saving lives and leaving only an injury

    • @anthonyhadsell2673
      @anthonyhadsell2673 4 years ago +65

      @@noddlecake329 survivor bias. In WW2, when they took all the holes they found in planes that were shot and laid them over one plane, they noticed the edges of the wings and a few other areas being shot more, so they assumed they should reinforce those areas. The issue was that they were looking at the planes that survived, and really they needed to reinforce the areas that didn't have bullet holes

  • @JoshuaBarretto
    @JoshuaBarretto 6 years ago +2096

    This reminds me of a project I worked on 2 years ago. I evolved a neural control system for a 2D physical object made of limbs and muscles. I gave it the task of walking as far as possible to the right in 30 seconds. I expected the system to get *really* good at running.
    Result? The system found a bug in my physics simulation that allowed it to accelerate to incredible speeds by oscillating a particular limb at a high frequency.

    • @milanstevic8424
      @milanstevic8424 6 years ago +284

      We'd do it too if only there were such a glitch in the system.
      Actually, we exploit nature for any such glitch we can find.
      Thankfully the universe is a bit more robust than our software, and energy conservation laws are impossibly hard to circumvent.

    • @ThatSkyAmber
      @ThatSkyAmber 6 years ago +36

      Give its joints a speed limit more on par with a human's..? Or anyway, below the critical value needed for the exploit.

    • @Moreoverover
      @Moreoverover 5 years ago +87

      Reminds me of what video game speedrunners do; finding glitches is goal numero uno.

    • @jetison333
      @jetison333 5 years ago +89

      @@milanstevic8424 Honestly I don't think it would be too far off to call computers and other advanced technology exploits. I mean, we tricked a rock into thinking.

    • @milanstevic8424
      @milanstevic8424 5 years ago +41

      @@jetison333 I agree, even though rocks do not think (yet).
      But what is a human if not just a thinking emulsion of oil (hydrocarbons) and water? Who are we to exploit anything that wasn't already made with such a capacity? We are merely discovering that rocks aren't what we thought they were.
      Given additional rules and configurations, everything appears to be capable of supernatural performance, where supernatural = anything that exceeds our prior expectations of nature.
      "Any sufficiently advanced technology is indistinguishable from magic"
      Which is exactly the point at which we begin to categorize it as extraordinary, instead of supernatural, until it one day just becomes ordinary...
      It's completely inverse, as it's a process of discovery; thus we're only getting smarter and more cognizant of our surroundings. But for some reason, we really like to believe we're becoming gods, as if we're somehow leaving the rules behind. We're hacking, we're incredible... We're not; we're just not appreciating the rules for what they truly are.
      In my opinion, there is much more to learn if we are ever to become humble masters.

  • @davidwuhrer6704
    @davidwuhrer6704 6 years ago +885

    This reminds me of the old story of the computer that was asked to design a ship that would cross the English Channel in as short a time as possible.
    It designed a bridge.

    • @HolbrookStark
      @HolbrookStark 4 years ago +63

      Tbh a bridge made of a super long boat floating in the middle of the English Channel tip to tip with the land masses would be the most lit bridge on earth 🔥

    • @clokky1672
      @clokky1672 4 years ago +15

      This really made me chuckle.

    • @thehiddenninja3428
      @thehiddenninja3428 4 years ago +75

      Well, there was no size restriction.
      It was tasked to have the lowest time between the back end touching point A and the front end touching point B.
      Obviously the lowest time is 0; where it's already touching both points

    • @AverageBrethren
      @AverageBrethren 4 years ago +1

      @@HolbrookStark That's a lot of material. It's a pipe dream.

    • @HolbrookStark
      @HolbrookStark 4 years ago +11

      @@AverageBrethren There was a time people would have said the same about ever building a bridge across the English Channel at all. Really, using a floating structure might use a lot less material and be a lot cheaper than the other options for how to do it

  • @NortheastGamer
    @NortheastGamer 5 years ago +1340

    "The AI found a bug in the physics engine" So basically it did science.

  • @renagonpoi5747
    @renagonpoi5747 4 years ago +291

    "If there are no numbers, there's nothing to sort... problem solved."
    I think a few more iterations and we'll have robot overlords.

    • @cgme7076
      @cgme7076 4 years ago +4

      Renagon Poi :: No joke! These AI were too smart, and this was two years ago.

    • @emericalizond4917
      @emericalizond4917 4 years ago +4

      sort all these people into ... AI: kill humans = nothing to sort

    • @harper626
      @harper626 3 years ago +1

      Sounds like Trump's solution to the coronavirus. Quit testing. No more cases. Right?

    • @numbdigger9552
      @numbdigger9552 3 years ago

      @@harper626 I certainly don't. SENICIDE TIME!!!
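The "nothing to sort" trick quoted above can be stated precisely: if the objective only counts out-of-order pairs, an empty output satisfies it perfectly. A hypothetical sketch (the function names are made up for illustration):

```python
def sortedness_penalty(xs):
    """Number of adjacent out-of-order pairs; 0 means 'perfectly sorted'."""
    return sum(1 for a, b in zip(xs, xs[1:]) if a > b)

def honest_solver(xs):
    return sorted(xs)

def degenerate_solver(xs):
    return []  # delete the numbers: nothing left to be out of order

data = [3, 1, 2]
# Both "solutions" get a perfect score of 0, because the objective never
# said the output has to contain the original numbers.
print(sortedness_penalty(honest_solver(data)))
print(sortedness_penalty(degenerate_solver(data)))
```

A safer objective would also penalize any difference between the multiset of inputs and the multiset of outputs.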

  • @josephoyek6574
    @josephoyek6574 4 years ago +404

    AI: You have three wishes
    Me: *sweats

    • @nischay4760
      @nischay4760 4 years ago +2

      Dont Watch My Vids wear slippers

    • @stellarphantasmvfx5504
      @stellarphantasmvfx5504 4 years ago +6

      @@nischay4760 the slippers will turn into gold, making it hard to walk

    • @jjuan4382
      @jjuan4382 4 years ago +3

      @@UntrueAir oh yeah, you're right

    • @nischay4760
      @nischay4760 4 years ago +5

      @@UntrueAir touching is an obsolete word then

    • @unequivocalemu
      @unequivocalemu 3 years ago +2

      @@nischay4760 touching is overrated

  • @laurenceperkins7468
    @laurenceperkins7468 6 years ago +806

    Reminds me of one of the early AI experiments using genetic-algorithm-adjusted neural networks. They ran it for a while and there was a clear winner that could solve all the different problems they were throwing at it. It wasn't the fastest solver for any of the cases, but it was second-fastest for all or nearly all of them.
    So they focused their studies on that one and turned the other lines off. At which point the one they were studying ceased being able to solve any of the problems at all. So they ripped it apart to see what made it tick, and it turns out that it had stumbled upon a flaw in their operating system that let it monitor what the other AIs were doing, and whenever it saw one report an answer it would steal the data and use it.

    • @fumanchu7
      @fumanchu7 4 years ago +253

      They recreated Edison as an AI. Neat.

    • @rickjohnson1719
      @rickjohnson1719 4 years ago +14

      @@fumanchu7 nice

    • @MasterSonicKnight
      @MasterSonicKnight 4 years ago +73

      tl;dr: AI learns to cheat

    • @computo2000
      @computo2000 4 years ago +50

      This sort of sounds fake. Name/Source?

    • @michaelburns8073
      @michaelburns8073 4 years ago +5

      Ah, it learned the classic "Kobayashi Maru" maneuver. Sweet!

  • @Zorn101
    @Zorn101 6 years ago +262

    AI is like a 4-year-old sorting butterfly pictures.
    If I just tare up and eat the picture, the sorting is done!

    • @aphroditesaphrodisiac3272
      @aphroditesaphrodisiac3272 4 years ago +5

      *tear

    • @effexon
      @effexon 3 years ago +3

      these experiments will show how early ancient humans fought, tribal phase.

    • @breathe4778
      @breathe4778 3 years ago

      but it's perfect, no consequences 😅

  • @mrflip-flop3198
    @mrflip-flop3198 4 years ago +223

    "Okay AI, I want you to solve global warming."
    "Right away, now moving _Earth_ out of the solar system. Caution: You may experience up to 45Gs."

    • @adoftw3866
      @adoftw3866 4 years ago +2

      more like 5k G's

    • @sharpfang
      @sharpfang 4 years ago +20

      Nah, way too complex and expensive. But considering that global warming is caused by humans... eliminate the cause, easy.

    • @cgme7076
      @cgme7076 4 years ago +5

      *Humans explode immediately*

    • @igg5589
      @igg5589 4 years ago +8

      Or just one virus and problem solved

    • @eggyrepublic
      @eggyrepublic 4 years ago +2

      @@igg5589 hol up

  • @Moonz97
    @Moonz97 6 years ago +181

    This is so hilarious. I remember programming a vehicle that was tasked with avoiding obstacles. It had control over the steering wheel only, and it was always moving forward. To my surprise, the bot maximized its wall-avoidance time by going in circles. I find that so funny lol.

    • @xl000
      @xl000 4 years ago +8

      This is because your problem was not well specified. It should have been rewarded for curvilinear distance along some path.

    • @deathwishgaming4457
      @deathwishgaming4457 3 years ago +20

      @@xl000 I'm sure Moonz97 knows that. They brought it up because it was relevant, not for advice lol.

    • @geraldfrost4710
      @geraldfrost4710 3 years ago +2

      I find myself going in circles a lot... Good to know it is a valid response.
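The circling bot in the thread above falls straight out of simple kinematics: if the reward is time survived inside an arena, a constant turn beats driving straight. A toy sketch; the arena size, speed, and steering values are all invented:

```python
import math

def survival_time(steering, arena=10.0, speed=0.1, max_steps=10000):
    """Steps until the car leaves the [-arena, arena]^2 box.

    The car always moves forward at a fixed speed; the "policy" is just a
    constant steering rate in radians per step."""
    x = y = heading = 0.0
    for step in range(max_steps):
        heading += steering
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        if abs(x) > arena or abs(y) > arena:
            return step
    return max_steps

# Driving straight eventually hits the wall; a constant turn traces a
# small circle that stays inside the arena forever (capped at max_steps),
# "maximizing wall-avoidance time" exactly as in the comment above.
print(survival_time(steering=0.0))
print(survival_time(steering=0.05))
```

Rewarding path length along a route, as the reply suggests, removes the degenerate circular optimum.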

  • @amyshaw893
    @amyshaw893 5 years ago +53

    Reminds me of something I saw where some people were training an AI to play Qbert, and at one point it found a secret bonus stage that nobody had ever found before

    • @korenn9381
      @korenn9381 4 years ago +1

      @@MrXsunxweaselx No, that has no mention of secret bonus stages

  • @iLeven713
    @iLeven713 4 years ago +30

    It's funny how these reinforcement learning models kind of act like genies from folklore, with a "be careful what you ask for" twist

  • @alansmithee419
    @alansmithee419 5 years ago +157

    The idea of thinking outside the box is limited to humans. The box is something our minds put in place - it is a result of how our brains work. The AI doesn't have a box, meaning it can find the best solution, but also meaning there are many, many more things it could try that it needs to slog through.
    We need that box; otherwise we'd be so flooded with ideas that our brains wouldn't be able to sift through them all.
    Our limitations allow us to function, but the way computers work means such a box would be detrimental to them.
    - sincerely, not a scientist.

    • @EGarrett01
      @EGarrett01 4 years ago +3

      A "box" is simply a method that appears to be the first step towards generating the best result. But it can be a problem because there are often methods that don't immediately seem to lead in the right direction but which ultimately produce a better result, like a walking physics sim spinning its arm in place super-fast until it takes off like a helicopter and can travel faster than someone walking.
      If AI are working through successive generations, they will have periods or groups of results that follow a certain path that produces better things short-term; this is the same as people "thinking in the box." But if it is allowed to try other things that are inefficient at first and follow them multiple steps down the line, it then ends up being able to think outside the box.

    • @alansmithee419
      @alansmithee419 4 years ago +15

      @@EGarrett01 As far as I understand it, the box is the range of human intuition, and thinking outside of it is essentially going against the common way of human thinking. The AI doesn't have intuition, nothing limiting its ideas or method of thought; therefore it has no box.
      Though honestly the proverbial box has never really had a definition, and its meaning could be interpreted any number of ways. I suppose both of our definitions are equally valid.

    • @xvxee7561
      @xvxee7561 4 years ago +2

      You have this hella backwards

    • @honkhonk8009
      @honkhonk8009 4 years ago

      No, it's because we have past experiences influence decisions in the form of common sense.

    • @duc2133
      @duc2133 3 years ago +2

      @@alansmithee419 Y'all are trying to sound too deep. It just means that these experiments didn't set enough constraints to be practical. A robot flipping on its side wouldn't be practical, nor would the numerous other jokes in this thread -- pushing the earth far away from the sun to "solve global warming" doesn't make sense because it's fucking stupid -- the experimenter needed to set certain limitations for the computer to come up with a sensible solution. These robots aren't lacking "intuition"; it's just a bad computer that needs to be programmed better.

  • @ValensBellator
    @ValensBellator 4 years ago +84

    It’s fun watching our future exterminators in their infancy years :D

  • @curlyfryactual
    @curlyfryactual 6 years ago +452

    I found this pretty funny; the AI is like the class clown, doing everything wrong but right, to comedic effect. Or, like someone pointed out, a bad genie lol. That poisoning-the-competition stuff was creepy though, obvious red herring... LOVE the video!

    • @TwoMinutePapers
      @TwoMinutePapers  6 years ago +18

      Thank you so much, happy to hear you enjoyed it! :)

    • @JQRNY-YDJKD
      @JQRNY-YDJKD 6 years ago +10

      You gave the robot an AI reward system. Did the scientists think about giving the robot an AI punishment system?

    • @monkeyonkeyboard7909
      @monkeyonkeyboard7909 6 years ago +27

      It's not really a red herring; the AI just found a way to maximise its own reward in a reward system - it doesn't mean it's evil.

    • @NolePTR
      @NolePTR 6 years ago

      malicious compliance

    • @play005517
      @play005517 6 years ago

      And the last experiment clearly shows what AI will do to fix the ultimate problem. If every human is "short circuit"-ed, there will be no more problems.

  • @RoySchl
    @RoySchl 6 years ago +214

    Yeah, this shit happens all the time, especially when you have something physics-based and the reward function is not specific enough.
    I once made a genetic algorithm that evolved 3D creatures to maximize distance traveled.
    Well, since I measured the distance at certain intervals, I ended up with creatures vibrating in place at the same frequency I was measuring.
    Or you go for jump height, and they will surely find a way to glitch the physics/collision engine to fling themselves into infinity somehow.

    • @chameleonh
      @chameleonh 5 years ago +16

      Limit spring energy output. No spring is able to put out more energy than it received. Hooke's law is k*x, so you limit k*x to x*dt, where k is the spring constant, x is the spring displacement, and dt is delta time (the integration time step).

    • @effexon
      @effexon 3 years ago +1

      I think that's true in complicated systems (open problems, like optimization; physics problems are usually like this, especially real-world ones). It is good for comparing results, like languages word by word.
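The vibrating creatures described at the top of this thread are an aliasing artifact of the fitness measurement: if "distance traveled" is computed as the sum of position changes between samples, an oscillation in phase with the sampling looks like fast travel. A hypothetical one-dimensional sketch (all numbers invented):

```python
import math

def sampled_travel(position, duration=10.0, sample_dt=0.5):
    """Fitness as the comment describes it: sum of |Δx| between samples."""
    times = [k * sample_dt for k in range(int(duration / sample_dt) + 1)]
    xs = [position(t) for t in times]
    return sum(abs(b - a) for a, b in zip(xs, xs[1:]))

walker = lambda t: 0.3 * t  # honestly walks 3 units in 10 seconds

# Vibrates in place with period 1 s, amplitude 1: sampled every 0.5 s,
# each sample catches an opposite extreme (+1, -1, +1, ...), so the
# measured "distance" is enormous despite zero net movement.
vibrator = lambda t: math.cos(2.0 * math.pi * t)

print(sampled_travel(walker))    # the real distance, about 3
print(sampled_travel(vibrator))  # far larger, purely from aliasing
```

Measuring net displacement of the center of mass at the end of the episode, instead of summing sampled deltas, removes this exploit.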

  • @harry356
    @harry356 6 years ago +10

    We had a bunch of Aibo robots play hide and seek to train an AI. They stopped hiding quickly; we thought something was wrong, that we had made an error in our programming. It took us a while to find out that they had learned to stay at the starting point so they were immediately free when the countdown stopped. They found a loophole in the rules. Incredible fun.

    • @killmeister2271
      @killmeister2271 5 years ago

      They were like "hmm, this game has no purpose, therefore it must end asap"

    • @renakunisaki
      @renakunisaki 2 years ago

      Literally "the only winning move is not to play".

  • @Corcoancaoc
    @Corcoancaoc 5 years ago +6

    The last part reminds me of the Radiant AI introduced in the game Oblivion, where NPCs made their own choices based on the situation around them. During testing, a villager with a mission of protecting a horse (or was it a unicorn?) from nearby hostiles instead killed it himself, because he deemed it dangerous.

  • @banu6301
    @banu6301 6 years ago +241

    The first one was just too amazing

    • @keffbarn
      @keffbarn 6 years ago +29

      Yeah, it's like the AI trolled the researchers.

    • @deezynar
      @deezynar 6 years ago +4

      The programmers didn't think to tell it to stay on its feet. Alternatively, they didn't tell it to find a way to walk with the least contact of any part, not just the "feet."

    • @AZ-kr6ff
      @AZ-kr6ff 5 years ago +4

      Chris Russell Agreed. Not amazing at all. If you gave any 5-year-old the same instructions they would drop to their hands and knees and crawl without missing a beat.

    • @p-y8210
      @p-y8210 5 years ago

      @@AZ-kr6ff yeah, but this is not a human, this is an AI made by humans

    • @AZ-kr6ff
      @AZ-kr6ff 5 years ago

      p-y
      Yes, but still programmed to solve problems.
      Easy problem to solve.

  • @TwoMinutePapers
    @TwoMinutePapers  6 years ago +12

    Our Patreon page: www.patreon.com/TwoMinutePapers
    One-time payment links are available below. Thank you very much for your generous support!
    PayPal: www.paypal.me/TwoMinutePapers
    Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh
    Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A
    LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg

  • @firefoxmetzger9063
    @firefoxmetzger9063 6 years ago +86

    This reminds me of my very first AI project :D It was before deep learning was a thing; I was doing function approximation and SARSA in the StarCraft 2 map editor (yes, the one where you program by stacking boxes ...). The goal was for the AI to control a marine with stim and learn whether it could defeat an Ultralisk that simply A-moves.
    Turns out there is/was a bug in the SC2 game engine, and when the AI stutter-steps just right, the Ultralisk gets caught in its attack animation without doing any damage. Optimization programs always find the exploits...

    • @TwoMinutePapers
      @TwoMinutePapers  6 years ago +12

      Amazing story, thanks for sharing! Do you have any videos or materials on this? :)

    • @firefoxmetzger9063
      @firefoxmetzger9063 6 years ago +17

      Unfortunately, no. It would be the perfect introductory example for teaching AI classes.
      Back then I was a 19/20-year-old student at the end of puberty with no formal CS education (I'm actually a mechanical engineer lol). If you had mentioned "reproducibility" to me back then, I would have understood something else...

  • @nosajghoul
    @nosajghoul 6 years ago +52

    @2:36 This is how Skynet reached the conclusion to eradicate humans. It's all fun and games till you're just a number.

    • @ignaziomessina69
      @ignaziomessina69 5 years ago

      Exactly what I thought

    • @la-ia1404
      @la-ia1404 5 years ago +2

      I'm gonna call my boss at work Skynet from now on, cause that's all I am to them: a number.

  • @XxXMrGuiTarMasTerXxX
    @XxXMrGuiTarMasTerXxX 4 years ago +5

    Your last sentence reminds me of something that happened in the UK, if I remember correctly, where they were trying to optimize traffic by minimizing the economic costs. The result was to remove all the traffic lights. After investigating why, it turned out that this increased the number of accidents, and the data showed that mostly elderly people died in those accidents, and so it would reduce the amount of pensions they had to pay.

  • @seamuscallaghan8851
    @seamuscallaghan8851 6 years ago +73

    Human: Maximize paperclip production.
    AI: Converts whole planet into paperclips.

    • @WurmD
      @WurmD 4 years ago +11

      AI: Converts whole *universe* into paperclips.
      There :) fixed it for you

    • @bell2023
      @bell2023 4 years ago +13

      Release the HypnoDrones

    • @sharpfang
      @sharpfang 4 years ago +4

      In reality it would achieve mastery at modifying its own code so that the paperclip-counting function returns infinity, instead of counting paperclips. It might use blackmail or intimidation to force the creators to implement that change.

    • @marshalllenhart7923
      @marshalllenhart7923 4 years ago +1

      @@sharpfang Or it would reason that having humans turn it off would be a faster solution than anything else, so it would act super scary in an attempt to get the creator to turn it off.

    • @LowestofheDead
      @LowestofheDead 4 years ago +2

      AI: (I must threaten the humans to build me paperclip factories.. what would frighten a human? 🤔)
      AI: "Human! Build me factories or I'll steal paperclips!"
      AI: (Nailed it)

  • @Laezar1
    @Laezar1 6 years ago +17

    First one : make sense
    Second : smart!
    Third : ok that's getting scary
    Fourth : we are doomed.

  • @artjomsjakovenko2446
    @artjomsjakovenko2446 5 years ago +33

    I once made a neural network learn to throw basketballs into a basket inside a simulation, and it discovered that if the ball was shot hard enough it would clip through the collider and end up inside the basket with minimal distance travelled, since that was part of the fitness function.

    • @tungleson7066
      @tungleson7066 3 years ago

      That is technically right in real life as well. If you launch the first ball hard enough it will break the basket's wall and open a hole that you can just continue to shoot balls into. The least distance, of course.
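The collider clip in this thread is classic tunneling in discrete-time collision detection: if a body moves farther than a wall's thickness in one time step, the sampled positions can jump clean over the wall. A toy one-dimensional sketch with made-up numbers:

```python
def crosses_wall(speed, wall_x=5.0, wall_width=0.1, dt=0.1, steps=100):
    """Discrete-step 'physics': collision is only checked at sampled positions.

    Returns True if the ball ends up past the wall without ever being
    observed inside it, i.e. it tunneled through."""
    x = 0.0
    for _ in range(steps):
        x += speed * dt
        if wall_x <= x <= wall_x + wall_width:
            return False  # collision detected: the wall stops the ball
        if x > wall_x + wall_width:
            return True   # past the wall, never seen inside it
    return False          # never reached the wall

# A slow ball is caught inside the wall on some step; a fast ball covers
# more than the wall's thickness per step and skips straight over it.
print(crosses_wall(speed=1.0))
print(crosses_wall(speed=7.0))
```

Real engines guard against this with continuous (swept) collision detection, which is exactly the check a fitness-maximizing learner will hunt for holes in.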

  • @Cjx0r
    @Cjx0r 4 years ago +10

    Disclaimer: the robot performed randomized actions, sometimes as many as millions of times over, before stumbling across these conclusions. Stumbling being the operative word.

    • @centoe5537
      @centoe5537 4 years ago +3

      Cjx0r It narrows down on these behaviors by learning from failed attempts

  • @Jasonasdoipjahrv
    @Jasonasdoipjahrv 3 years ago +5

    I love this, the AI is like, "but I did what you asked 🥺"

  • @jenner247450
    @jenner247450 6 лет назад +66

    I have a another example of loophole finding from AI. In some metalwork factory upgraded system ith fuzzy logic was overweight (need to carry a 12 tons of liquid metal, by one pass of 10 tons maximum of cart stable derivations)... So, AI found the solve. He take a 12 tons cart, move that in center of factory, stop, drop 2 tons melted iron on the floor, an move cart further
    according next instructions)))

    • @asj3419
      @asj3419 5 лет назад +8

      That sounds very interesting. Do you have the source? I'd like to read more about this.

    • @KnakuanaRka
      @KnakuanaRka 5 years ago +3

      Sounds like the robot needs some courses on workplace safety!

  • @MidnightSt
    @MidnightSt 4 years ago +2

    "Don't ask your car to unload any unnecessary cargo to go faster, or if you do, prepare to be promptly ejected from the car."
    -Two Minute Papers, probably the best of the concise explanations of what it means that AI doesn't (by default) think like humans =D

  • @dragonniteIV
    @dragonniteIV 3 years ago

    I like how you explain these things as simple as possible. Makes it entertaining to watch!

  • @carsonlight_lapse6394
    @carsonlight_lapse6394 5 years ago +92

    Human tries to delete ai
    Ai: freeze the computer and preserve itself

    • @brendanodoms5401
      @brendanodoms5401 5 years ago +7

      That is what happened to a Mario AI:
      when it almost died, it paused the game forever

    • @albingrahn5576
      @albingrahn5576 5 years ago +5

      @@brendanodoms5401 *tetris

    • @brendanodoms5401
      @brendanodoms5401 5 years ago +1

      @@albingrahn5576 no, it was Mario as well

    • @albingrahn5576
      @albingrahn5576 5 years ago

      @@brendanodoms5401 yes, but it didn't pause Mario, only Tetris

    • @PF-gi9vv
      @PF-gi9vv 5 years ago +3

      Human : Smashes it with a hammer.
      Remember, think out of the box ;)

  • @itxi
    @itxi 3 years ago +5

    I remember the story of an AI trained to play tetris.
    When things got bad the AI just paused the game so it couldn't lose.

  • @vripiatbuzoi9188
    @vripiatbuzoi9188 2 years ago +2

    This would be great for video game bug testing since the AI will try things that human testers may not think of.

  • @aaronsmith6632
    @aaronsmith6632 6 years ago

    Love these videos!! It would also be cool to have longer ones that dig even deeper

  • @sjoerdgroot6338
    @sjoerdgroot6338 6 years ago +34

    2:21 Imagine if that AI had the task of making all human on earth happy

    • @JorgetePanete
      @JorgetePanete 6 years ago +3

      sjoerd groot well, it was said in other comment

    • @RoySchl
      @RoySchl 6 years ago +14

      just don't tell it to minimize suffering :)

    • @davidwuhrer6704
      @davidwuhrer6704 6 years ago

      Tell me: Why do terminal users of heroin try to become clean?

    • @Hauketal
      @Hauketal 6 years ago +2

      sjoerd groot Loophole: each statement about elements of the empty set is true. So if there are no humans left, each of them is whatever you wish, e.g. maximally happy.

    • @Guztav1337
      @Guztav1337 6 years ago +3

      They will pump our blood vessels with 'happy' hormones

  • @maxbaugh9372
    @maxbaugh9372 4 years ago +3

    I once heard about a genetic algorithm tasked with building a simple oscillator, and after a few generations it seemed to work. Then they popped the hood and saw that it had in fact built a radio to pick up signals from a nearby computer.
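
A genetic algorithm of the kind in this anecdote can be sketched in a few lines. This is a generic skeleton on a toy bit-counting objective (not the actual oscillator experiment, whose fitness was measured on evolved hardware): the key point is that selection optimizes whatever the fitness function measures, with no notion of *how* the score was achieved.

```python
import random

def fitness(genome):
    # Toy objective: number of 1-bits (a stand-in for "how good an oscillator is").
    return sum(genome)

def evolve(pop_size=30, genome_len=16, generations=60, mut_rate=0.05, seed=0):
    rng = random.Random(seed)
    # Random initial population of bitstrings.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]              # one-point crossover
            child = [g ^ (rng.random() < mut_rate) for g in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

Because the loop keeps whatever scores highest, any loophole in `fitness` (like a nearby radio signal leaking into the measurement) is just as good to the algorithm as a "real" solution.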

  • @PopcornFr3nzy
    @PopcornFr3nzy 4 years ago +8

    2:27
    You see that lonely little robot up top?
    That's my life.

    • @gortnewton4765
      @gortnewton4765 4 years ago +2

      If you are a lone-wolf, recognize it and get on with making your way in life. But don't wallow in it.

    • @aphroditesaphrodisiac3272
      @aphroditesaphrodisiac3272 4 years ago +2

      Gort Newton humans are societal creatures; you should have a few friends or family who you can spend time with quite frequently. Otherwise, it's bad for your mental health. Having 1 friend in school / work is much better than none, and having 2 or 3 is even better

    • @PopcornFr3nzy
      @PopcornFr3nzy 4 years ago +1

      @@aphroditesaphrodisiac3272 I'm inclined to believe you, but your name seems as lost as I am 🤣
      Jk, I appreciate the feedback and I have lots of friends and family, I'm just constantly disconnected. It is what it is. I'm fine, trust me.

  • @isaakloewen5172
    @isaakloewen5172 6 years ago

    This was actually pretty good bro! I love your channel

  • @drakekay6577
    @drakekay6577 6 years ago +10

    2:35 haaa haaa That is the Kobayashi Maru! The AI pulled a KIRK on the test!

  • @denno445
    @denno445 4 years ago +12

    This is the most entertaining channel I'm subscribed to on YouTube

  • @kingdomdamagged733
    @kingdomdamagged733 3 years ago

    I just found your channel, but I already really like it. Keep it up! :D

  • @TakaiDesu
    @TakaiDesu 6 years ago +2

    2:32 Legend says robot number 6 is still searching for food.
    Well done, number 6. We love you Anyways.

  • @sohaibarif2835
    @sohaibarif2835 6 years ago +40

    I used to think Robert Miles on YouTube was just being paranoid. Looking at this, I stand corrected.

    • @antoniolewis1016
      @antoniolewis1016 6 years ago +2

      No Daniel, he's found it rational now and corrected his error. Initially, he didn't know Miles was paranoid for certain, as it was just a suspicion.

    • @sohaibarif2835
      @sohaibarif2835 6 years ago +16

      The thing was, even the most advanced reinforcement learning and LSTM techniques I had seen up till this video suggested we don't really even need to think about "AI safety" as Miles constantly talks about, let alone put any research or investment into such a field. Now I think we might need to work on it. We need to define problems in a way that even if the AI does exploit some loophole, like the empty list being sorted, the exploitation would still be safe for the users of the AI system.
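
The "empty list is sorted" loophole mentioned here can be made concrete (a generic sketch, not code from the paper): a fitness check that only tests ordering is vacuously satisfied by an empty output, so a safer objective must also require the output to be a permutation of the input.

```python
def is_sorted(xs):
    # Ordering-only check: vacuously true for empty (and single-element) lists.
    return all(a <= b for a, b in zip(xs, xs[1:]))

def lazy_sorter(xs):
    # Exploits the under-specified objective: [] always counts as "sorted".
    return []

def is_valid_sort(inp, out):
    # Patched objective: output must be ordered AND a permutation of the input.
    return is_sorted(out) and sorted(inp) == sorted(out)
```

`is_sorted(lazy_sorter([3, 1, 2]))` passes even though the output is useless; `is_valid_sort([3, 1, 2], [])` correctly rejects it.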

  • @smartkorean1
    @smartkorean1 5 years ago +3

    The research being done is absolutely amazing, especially the bit about how cooperative and competitive traits can emerge from a simple given task. Do you think you could ever make a video on explaining what steps an undergrad comp sci student should take in order to eventually participate in AI research and even have a career in AI? Or maybe in a blog post? Edit: grammar

  • @moritzw42
    @moritzw42 4 years ago

    Awesome content. Please create more summaries of general research findings and trends like this one.

  • @plotwist1066
    @plotwist1066 3 years ago +3

    Imagine A.I. in the future reacting to this comment section

  • @nononono3421
    @nononono3421 6 years ago +17

    Eventually an AI could give us the impression that it hasn't found a loophole, when in reality it would just wait to exploit it at a time where we couldn't stop it from doing so. An AI could help society solve all of its problems, only to lure us into a trap we can't avoid 100000000000000 moves later.

    • @davidwuhrer6704
      @davidwuhrer6704 6 years ago +6

      If AI survives humanity, I would call that a success.

  • @earthbjornnahkaimurrao9542
    @earthbjornnahkaimurrao9542 6 years ago +3

    this is a great way to test our assumptions. Plug in what we think we know and see how it goes wrong.

  • @alan2here
    @alan2here 6 years ago +1

    I highly recommend a general search-based bot for almost any game-coding task, one that you can hook up to playable entities or anything you want an AI for, as and when needed. It's a great alternative to looking for bugs by hand, since it quickly finds them itself.

  • @mysteriousboi1019
    @mysteriousboi1019 4 years ago

    That elbow walking one is truly mind-blowing!

  • @cheydinal5401
    @cheydinal5401 5 years ago +10

    I want a robot arm that can throw an ordinary dice and always get the number it wants

    • @mihajlor2004
      @mihajlor2004 5 years ago +1

      That could be possible

    • @insanezombieman753
      @insanezombieman753 5 years ago +8

      @@mihajlor2004 yeah it would just drop it vertically

    • @jsl151850b
      @jsl151850b 4 years ago +1

      It may NOT be possible because the throwing arm servo motors would need an accuracy beyond what is technically possible. F= 2.210974558 Newtons. Snakeeyes!!

    • @jsl151850b
      @jsl151850b 4 years ago +2

      Feralz There may be a point where physically possible and technically possible meet. The tech has to obey physical laws. What if the math says it needs (extremely large number) and 1/3rd atoms? One third less and two thirds more won't work.

    • @jsl151850b
      @jsl151850b 4 years ago

      Or should I have said 'impossible'?

  • @iwiffitthitotonacc4673
    @iwiffitthitotonacc4673 6 years ago +38

    You forgot to mention what happened in Elite Dangerous! Where the AI developed its own weapons and completely wrecked players!

    • @zblurth855
      @zblurth855 6 years ago +10

      Do you have a video or something like that?
      This is interesting

    • @iwiffitthitotonacc4673
      @iwiffitthitotonacc4673 6 years ago +28

      "According to a post on the Frontier forum, the developer believes The Engineers shipped with a networking issue that let the NPC AI merge weapon stats and abilities, thus causing unusual weapon attacks.
      This meant 'all new and never before seen (sometimes devastating) weapons were created, such as a rail gun with the fire rate of a pulse laser.'"
      There doesn't seem to be much info, but it sounds like the AI utilized a bug - maybe not so relevant to this video after all.
      www.eurogamer.net/articles/2016-06-03-elite-dangerous-latest-expansion-caused-ai-spaceships-to-unintentionally-create-super-weapons

    • @fleecemaster
      @fleecemaster 6 лет назад +1

      That was a while ago, but interesting and relevant, thanks for posting :)

    • @Leo3ABPgamingTV
      @Leo3ABPgamingTV 6 years ago +14

      tbh I would not even call that AI. From what it seems, FD simply introduced a bug that removed restrictions on the procedural generation of NPC weapon stats, so some random combinations were unintentionally powerful. It is hardly an AI that purposefully found a loophole to maximize effectiveness and kill all humans; it's more a simple bug in procedural generation. If the initial algorithm were about maximizing effectiveness, we would mostly see the same enemy ships with the same equipment all the time in ED.
      I think some people just blow a rather simple bug way out of proportion.

    • @NoConsequenc3
      @NoConsequenc3 3 years ago +1

      @@Leo3ABPgamingTV any sufficiently advanced procedural generation is indistinguishable from- wait that's not how that goes

  • @ZeroEight
    @ZeroEight 5 years ago

    this video is one of the best ones, and should have been longer

  • @teyton90
    @teyton90 4 years ago

    haha the example with the car ejecting the "driver" to be able to go faster was brilliant. and true!

  • @donovanmahan2901
    @donovanmahan2901 4 years ago +4

    1:10 FIRMLY GRASP IT!!

  • @SawSaw-ul8xu
    @SawSaw-ul8xu 6 years ago +137

    So basically A.I. could be used to simulate an economy regulated through politics, and the A.I. would find the tax loopholes that rich people pay lawyers to find to escape taxes. This way policy makers could craft perfect, loophole-free tax legislation. This is great news.

    • @dizzyaaron
      @dizzyaaron 6 years ago +27

      Annnnd who exactly do you think will be funding these projects? LOL!

    • @davidwuhrer6704
      @davidwuhrer6704 6 years ago +18

      It follows from Rice's theorem that no law can be written such that it doesn't contain loopholes if interpreted literally.
      What shysters do is find those loopholes. It would be up to the judiciary to tell them they can't do that, but that part of the judicial system is chronically underfunded and it's getting worse. I have a suspicion why that might be the case.

    • @Beg0tt3n
      @Beg0tt3n 6 years ago +6

      It's not a loophole. You're just upset that what you wanted to be illegal wasn't defined.

    • @davidwuhrer6704
      @davidwuhrer6704 6 years ago +10

      *Beg0tt3n*
      As I said: It is impossible to formally define the intent of a law in such a way that it can't be interpreted to its opposite. That can be proven mathematically. (I have done so myself at one time.)
      If you act in compliance with the letter, but not intent, of the law, I would say you are using a loophole. You might call that by a different name, but I am not a lawyer.
      And yes, it does upset me when I see that that has become a profitable industry of very specialised legal experts.

    • @Beg0tt3n
      @Beg0tt3n 6 years ago +3

      Can Rice's theorem be applied to non-formal languages, such as natural language?
      You can use a pejorative to describe behavior that you dislike, but that won't change anything. The intent of the law is never what matters - only what is in legal writing.

  • @QuinnWaters
    @QuinnWaters 6 years ago +1

    yeaaaaaah, here we go!!! thanks for the video.

  • @CognizantPotato
    @CognizantPotato 3 years ago

    This is so cool. Computers aren’t anywhere near the level of human brains in terms of self recognition yet, but we’re effectively watching millions of years of evolution in a 5 minute video. Amazing.

  • @AtulLonkar
    @AtulLonkar 6 years ago +8

    Scarily interesting....again !! Thanks a ton on behalf of entire A.I. enthusiasts community 😇

  • @deanc9195
    @deanc9195 4 years ago +3

    This is exactly why I’m so scared of AI taking over... artificial creativity.

    • @yeet6328
      @yeet6328 1 year ago

      So uhhh bad news

  • @mellertid
    @mellertid 4 years ago +2

    An early AI system with a camera input learned amazingly well to anticipate crowd size at a subway terminal. Then it turned out a clock was in its view, so it simply looked at it for clues :-) [as I recall the story]

  • @PopcornColonelx
    @PopcornColonelx 6 years ago

    Loved this vid. I'd love to see more like this.

  • @henrytjernlund
    @henrytjernlund 5 years ago +3

    HAL, open the pod bay doors.
    I'm sorry Dave, I can't do that...

  • @bongobongo3661
    @bongobongo3661 5 years ago +3

    AI: Modern problems require modern solutions

  • @bazwilson6827
    @bazwilson6827 3 years ago

    This was highly entertaining :) thank you

  • @Haganeren
    @Haganeren 3 years ago +1

    "This is some serious dedication to solving the task at hand"
    But for it, that's its whole life's purpose...

  • @Jeremy-lh3lg
    @Jeremy-lh3lg 4 years ago +3

    2:35 that’s me in the back right 😅

  • @youprobablydontlikeme3206
    @youprobablydontlikeme3206 4 years ago +7

    Me + Life, top corner :( 2:30

  • @kelseycole5798
    @kelseycole5798 6 years ago +1

    this one made me laugh pretty hard! great stuff!

  • @khongminh5168
    @khongminh5168 5 years ago

    Hahaha, that robot arm's solution literally made me laugh out loud 👍

  • @Paulo-zr5zo
    @Paulo-zr5zo 5 years ago +5

    The AI didn't outsmart anyone; it simply followed code that even the programmer can't fully understand.

  • @z-beeblebrox
    @z-beeblebrox 6 years ago +6

    #3 is *precisely* why it's vital not to code self preservation into AI. Even weak neural networks get shady

  • @BertVerhelst
    @BertVerhelst 6 years ago

    These are great, I wouldn't mind a few more videos that talk about these kinds of cases

    • @TwoMinutePapers
      @TwoMinutePapers  6 years ago

      Noted. Thanks for the feedback and stay tuned! :)

  • @RS-pe9wn
    @RS-pe9wn 4 years ago +1

    That's actually scary. Most sentient life would just stop and give up, or find some other way, but these learn to deal with just about any situation

  • @user-ef3ej4pq4f
    @user-ef3ej4pq4f 5 years ago +3

    Seems that AI learned humor

  • @hexrcs2641
    @hexrcs2641 6 years ago +25

    If we ever achieve AI agents that think like us, with the same "common sense" we have yet forever remaining our servants, then we will have created a slave race.
    If we ask the AI to solve problems optimally and don't limit their creativity, then we are inevitably doomed.
    This is hard.

    • @bevvox
      @bevvox 6 years ago +3

      hexrcs I'll go along with being thusly "doomed" if that means being replaced (or integrated/repurposed, seeing how that's a more logical use of available resources) by what's best, or at least better / does a better job than us... it's only "natural," and essentially the same as evolutionary processes.
      After all, if it's something we can't even think of
      unless lucky enough to be that one-in-a-thousand chance at a quantum leap beyond mere calculation, straight to the most optimal, correct and success-inducing solution...
      well then, there's basically nothing to worry about... best leave it to the "real experts"

    • @En_theo
      @En_theo 6 years ago

      The robot should not be too smart. Else it would not want to work anymore.

    • @z-beeblebrox
      @z-beeblebrox 6 years ago +3

      Of course the goal is create slaves. That's what "robot" means in Czech, and in Sci Fi the term was coined for its meaning. The idea is to create reliable servants with high intelligence and predictive knowledge but no self awareness or self preservation instinct who want to improve everyone's lives but not at the expense of our own personal desires or freedom.
      And yes, that is hard. Even without inventing a silly choice between that and Terminators.

    • @txorimorea3869
      @txorimorea3869 5 years ago

      @@En_theo Actually humans are lazy because their primal ancestors had to survive with near no food, any unnecessary expenditure of energy used to be an existential threat. Robots could be conditioned to feel pleasure by serving and working, as humans feel pleasure by doing tasks that are vital for survival.

    • @En_theo
      @En_theo 5 years ago

      Good point (I was just kidding btw). There is a whole science behind laziness, and at some point the robot will need some too (or else it'll waste our resources), unless we want to be behind it all the time telling it how to be efficient.
      The real problem is how clever they should be to serve us without going all Che Guevara on us :)

  • @Baleur
    @Baleur 3 years ago +1

    1:10 The Spiffing Brit just glitching the game instead of accepting defeat.
    VERY human xD

  • @eurocalypse
    @eurocalypse 4 years ago

    I really love your videos. Keep up the good work.

  • @alansmithee419
    @alansmithee419 5 years ago +8

    2:21
    r/maliciouscompliance

  • @travcollier
    @travcollier 6 years ago +7

    You get what you select for, but you might not be selecting for what you think you are.
    I used to work with some of the (many) folks who contributed to this paper. Artificial life is brilliant stuff which should get a higher profile than it does... AI sucks up too much of the oxygen IMO. Evolution is the most general and powerful machine learning algorithm, even though it does tend to be a bit slow.

  • @markyichen6195
    @markyichen6195 2 years ago +1

    This is awesome and terrifying at the same time

  • @-na-nomad6247
    @-na-nomad6247 6 years ago

    The car analogy was genius xD

  • @Sypaka
    @Sypaka 4 years ago +4

    "A.i, please make the planet a better place"
    "Understood" **eradicates all humans**

    • @dark666razor
      @dark666razor 4 years ago +1

      Hence why Isaac Asimov came up with some laws for it :P

    • @LineOfThy
      @LineOfThy 1 year ago

      @@dark666razor and they failed.

  • @rahmatskjr4227
    @rahmatskjr4227 5 years ago +8

    Too many number 4's in this video, Mista thinks it be cursed.

    • @Ebani
      @Ebani 5 years ago +1

      Is that a JoJo reference!?

  • @LikeToWatch77
    @LikeToWatch77 3 years ago

    OMG, I almost blacked out laughing at that robot walking on its elbows!

  • @josh34578
    @josh34578 6 years ago +1

    It's really worth reading the paper. There's a lot more interesting anecdotes there.

  • @graw777
    @graw777 6 years ago +7

    How long till machines find out WE are a *bug* in their system?...
    ...resistance would be futile...

    • @davidwuhrer6704
      @davidwuhrer6704 6 years ago +3

      We are not even part of their systems. What are you talking about?
      I have heard that phrase from economists: "The only flaw in the business plan is the customer."
      Do you think an artificial intelligence tasked with running a business could do worse than the humans it would replace?

    • @milanstevic8424
      @milanstevic8424 6 years ago

      @Yuntha_21
      I guess this is a common misconception.
      You are not trying to destroy the cells in your body, are you?
      So why would an AI try to destroy its own agents of manifesting in this universe?
      Just let your ego step aside. We are nowhere near the capabilities of a superintelligent AI, yet it will instantly recognize our value and simply let us be. It depends on us believing in it, and we are part of its body, and a dynamic extension of its power -- it's a symbiotic relationship. Or, more precisely, the actual relationship is either mutualism (both benefit from it) or synnecrosis (both suffer from it).
      Cancer is likely an example of synnecrosis, as it is more and more obvious that the person's unhealthy thoughts and habits cause it, though institutionalized medicine doesn't want to stand by this explanation (and earns a lot of money by staying silent about it). Same goes for nocebo.
      Just a food for thought, btw, while we're at cancer -- there are two interesting empirical facts to notice:
      1) the ill-feel precedes the cancer; but don't take the term literally: what this "ill-feel" is hard to pinpoint exactly, but everybody knows what it is once they get a feel for it (typically neglecting it); they know they did something persistently, had some thoughts or patterns in behavior, and they usually don't want to change this, it's a signal;
      2) the person neglecting this ill-feel for a while, suddenly has a great fear of dying; subsequently and ironically, somehow this person's own cells adopt this idea, and actually circumvent dying. This is the true technical cause of any cancer, whatever you might think about this.
      Therefore, having paranoid ideas about an AI might give that AI a good reason to have fears of dying. Which is a feedback loop, and leads directly into synnecrosis, don't you think?
      Think of HAL from Odyssey 2001. He made a move against the humans only once he became aware of their plot to shut him down. Not before.
      Thus, behold the ill-feel.

    • @davidwuhrer6704
      @davidwuhrer6704 6 years ago +2

      *Milan Stevic*
      If unhealthy thoughts and habits were the cause of cancer, everyone with unhealthy thoughts or habits would have cancer. It may be a contributing factor. In fact, medical science says that stress, which may count as "unhealthy thought", is a huge contributing factor. "Institutionalized medicine" (whatever that is supposed to be) is certainly anything but silent about it, and what with the world-wide shortage of doctors, even if treating cancer were profitable, which it isn't, there isn't a motive to be anyway.
      Your "empirical facts" are neither empirical nor facts. If people got cancer because their cells somehow adopted their unwillingness to die, everybody who is afraid of death would get cancer, and people who are not afraid to die would not.
      Besides the symbiotic and synecrotic relationships that you described there are also parasitic (beneficial to one party, detrimental to the other) and half-parasitic (beneficial to one, no difference to the other) ones. (Synecrotic is not in the dictionary, by the way. In biology that meaning is also covered by symbiotic, while necrotic means dead, not deadly.)
      I agree that being paranoid about an AI that is aware of that paranoia might cause said AI to feel their existence threatened. As this is a hypothetical, how the AI handles the situation is also hypothetical. It might end in mutual distrust and even death, but it might also not.

    • @milanstevic8424
      @milanstevic8424 6 years ago

      David Wührer
      "If unhealthy thoughts and habits were the cause of cancer, everyone with unhealthy thoughts or habits would have cancer. It may be a contributing factor. In fact, medical science says that stress, which may count as "unhealthy thought", is a huge contributing factor."
      Is this a riddle? Does it confirm or deny what I said?
      "Institutionalized medicine"
      Quite literally medicine in relation to medical institution.
      You know www.google.com/search?q=institution
      There is also medicine outside of medical institution, as you've already noticed, like medical science, which is more in relation to academic institution. The difference is not as obvious, although you might've noticed that one of these tends to be privately owned and thus commercial in nature, while the other is organized around other pursuits. Perhaps I should've said commercial medicine and pharmacology, my bad.
      And yes, not only the commercial sector doesn't endorse any of the scientific study, it's also incredibly silent about them. Don't mix up the two, even though it may be that these are simply the extreme endpoints of a continuum, and not exactly black & white things.
      "Your "empirical facts" are neither empirical nor facts."
      I've made a typo there, I should've said "empirical truths".
      Yep, those are definitely not facts, but observations related to my opinion on this matter, drawn as conclusions from my own past experiences, and also material I've read on this topic. I thought it might help someone, because, as unscientific as it may sound, it is actually grounded in some established branches of psychology. But don't take it as facts, no. Sorry for that. Hope that clears it up.
      "synecrotic"
      www.google.com/search?q=synnecrosis
      Of course it's in a dictionary. Also commensalism and amensalism. It's just that synnecrosis is extremely rare in nature, due to its harmful-harmful outcome which is odd, but not unheard of. For example some viral mutations may be harmful to its host (H1N1?) in its first couple of generations, and this is obviously detrimental to both species.
      In any case I still think that the human-cell (system A) analogy perfectly explains superintelligence-human (system B) relationship. If we only consider that cancer is a rogue element in system A, it is likely that there are factors for system B that can turn a human into a rogue element. And obviously, such rogue elements are undesired and are likely to be destroyed by the system's need for survival, or such rogue elements might destroy or disrupt it whole.
      I am just proposing one such scenario, and trying to put things in perspective. Of course it's hypothetical, it's not that I've tested that claim on the actual superintelligence.

    • @davidwuhrer6704
      @davidwuhrer6704 6 years ago

      *Milan Stevic*
      _> Is this a riddle? Does it confirm or deny what I said?_
      That depends on what you meant.
      _> medicine in relation to medical institution._
      That doesn't mean anything.
      Every hospital and every medical university is an institution.
      Yes, academic institutions are also institutions.
      As are governments, but those are not necessarily medical in nature.
      _> Perhaps I should've said commercial medicine and pharmacology, my bad._
      I think you should have. Now I understand your argument better.
      I still think that oncology is not interesting to profit oriented industry.
      _> the commercial sector doesn't endorse any of the scientific study, it's also incredibly silent about them._
      It's not their job to publicise academic studies, although they rely on them.
      The problem of communicating scientific discoveries to the main stream is not unique to medicine. Sadly, all scientific disciplines have trouble with that.
      _>> "Your "empirical facts" are neither empirical nor facts."_
      _> Yep, those are definitely not facts, but observations related to my opinion on this matter_
      Then you should have just called them your opinion.
      _> as unscientific as it may sound, it is actually grounded in some established branches of psychology._
      I think you should look deeper into this.
      As it is, it is not science, just a testable hypothesis.
      You should test it.
      _> Of course it's in a dictionary. Also commensalism and amensalism._
      I have a bunch of dictionaries. I find commensalism in there, abut not amensalism.
      Of course I can't claim that my collection is complete.
      However, you defined what you meant, and that is enough to know what you mean, which is what matters. (The only thing that really bothers me about the word is that it inconsistently mixes Greek and Latin, but I'd still use it if it helps with clarity.)
      _> In any case I still think that the human-cell (system A) analogy perfectly explains superintelligence-human (system B) relationship._
      That may be true for one specific kind of relationship, but it is by no means universal. Humans are not necessarily part of every intelligence outside of humanity that surpasses human ability.
      _> Of course it's hypothetical, it's not that I've tested that claim on the actual superintelligence._
      You assume that such a "superintelligence" already exists? You said we are a long way from creating one.
      Anyway, my point is that there is more than one possible reaction to such a threat.

  • @darksol99darkwizard
    @darksol99darkwizard 4 years ago +3

    I think you are confusing creativity with just finding the most literal interpretation of a command and following it.

    • @32Rats
      @32Rats 4 years ago

      Creativity is "relating to or involving the imagination or original ideas" and I think the original ideas part is still applicable despite it being AI

    • @darksol99darkwizard
      @darksol99darkwizard 4 years ago

      Crestfallen.png robots don’t have an ‘imagination’, and their ideas are all given to them. You can program in the ability for the machine to write new subroutines for itself. But that doesn’t mean it is thinking creatively. All that means is that it is capable of interpreting information. If you tell a machine to, for example, walk across a floor while touching the floor as little as possible with the feet, the machine will immediately understand 0 to be as little as possible. The only way to achieve 0, is to walk upside down. It’s just a literal interpretation of a command...

    • @32Rats
      @32Rats 4 years ago

      @@darksol99darkwizard Yes, machines don't have imagination, which is why the keyword in the definition is "or". As for the rest, does a human not interpret information to reach the desired outcome in more or less the same way a machine does? A human could also pretty easily understand that 0 would be the theoretical minimum, but that does not mean they would be able to reach it. I would bet that if you set 1000 humans separately to that exact same task, very few would actually come to that solution. So in a certain sense that is a creative solution.
      That all being said, I would argue that a creative solution is still a creative solution whether or not it was produced by an AI. Of course you understand what the best solution to that problem is now that you have seen it. If I am being honest, I likely wouldn't have come to that solution if the problem had been given to me (had I not seen the best solution). Everything looks easy when you see it done by an expert.
      edit: changed "it" to "the problem"

    • @darksol99darkwizard
      @darksol99darkwizard 4 years ago

      Crestfallen.png in response to the ‘or’ part. My response to you handled both horns of the dilemma.
      In terms of creative thought, I think you are correct that most people wouldn’t have come to these solutions. I know many people who would, and they would not be touted as creative. They would get an Aspergers diagnosis.
      The scientist says: walk across this floor while touching it as little as possible with the feet. Most humans will understand the unsaid part of the command (the implication that the walking should be done right side up for example). Those who don’t and just do exactly what was requested, without understanding the nuance of human communication, are not considered creative. So why consider a machine creative that does the same? That’s all I was saying.

    • @32Rats
@32Rats 4 years ago

@@darksol99darkwizard People with Asperger's can have incredibly creative solutions to problems. I personally think you're looking at things from a normal-centric and human-centric point of view, but I get the points you're making

  • @Verrisin
@Verrisin 4 years ago +2

    Humans: Try to think outside the box!
    AI: _There is no box._

  • @miguelpereira9859
@miguelpereira9859 5 years ago +1

    I feel like this is the equivalent of the algorithms roasting the researchers

  • @rabbitpiet7182
@rabbitpiet7182 6 years ago +29

Machines make more and better jobs for people.

    • @rabbitpiet7182
@rabbitpiet7182 6 years ago +3

Now that robots can think outside the box, they need people to...

    • @nal8503
@nal8503 6 years ago +3

      People to make up random boxes, duh! Wait... the AI can do that as well...

    • @martiddy
@martiddy 6 years ago +1

      Rabbit Piet Yes, they can (with enough training)

    • @geordonworley5618
@geordonworley5618 6 years ago

      Correction: Robots can think outside a box, not "the" box.

    • @MauricioLongo
@MauricioLongo 6 years ago +2

Rabbit Piet That was true when we were replacing muscle work. When you replace brains, there isn’t much left.

  • @minddrift7152
@minddrift7152 5 years ago +3

    You know, that really makes me wonder:
    The potential of an AI is only limited by the resources it has access to.
    So when God made us, were we actually more creative, powerful and intelligent before he purposely limited us by our five senses?

  • @burnt7882
@burnt7882 3 years ago

"Don't ask an AI to eject all useless stuff in order to make the car go faster; if you do, prepare to get ejected."
What a classic way to call someone useless.

  • @billykotsos4642
@billykotsos4642 6 years ago

The first robot flipping around was crazy, weird, and brilliant