AI Safety Gym - Computerphile

  • Published: 20 Jul 2024
  • Check out today's sponsor Fasthosts for all of your UK web hosting needs: www.fasthosts.co.uk/computerp...
    Rob Miles discusses the idea of a gym for training AI algorithms.
    / computerphile
    / computer_phile
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottscomputer
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments • 192

  • @RasperHelpdesk
    @RasperHelpdesk 4 years ago +335

    "A ship in harbor is safe - but that is not what ships are built for."

    • @imveryangryitsnotbutter
      @imveryangryitsnotbutter 4 years ago +19

      "The Earth is the cradle of the mind, but one cannot eternally live in a cradle."
      - *Konstantin Tsiolkovsky,* _from a letter written in 1911_

    • @Sonny_McMacsson
      @Sonny_McMacsson 4 years ago +9

      Except in Peal Harbor.

    • @Catcrumbs
      @Catcrumbs 4 years ago +11

      The ships attacked in Pearl Harbour were safer there than if they had been attacked in open water. Almost all the ships sunk there were raised to fight again.

    • @Blox117
      @Blox117 4 years ago +3

      @@Sonny_McMacsson never heard of any ships sunk at this Peal Harbor

    • @iam3377
      @iam3377 4 years ago

      Blox117 TENOHAIKA BONZAI

  • @AcornElectron
    @AcornElectron 4 years ago +195

    Awesome. I’ve watched literally everything Rob has recorded on AI. He’s very relatable, knowledgeable and informative.

  • @DrumsKylePlays
    @DrumsKylePlays 4 years ago +144

    Miles is an excellent teacher. Always does a great job fielding questions from a layperson.

    • @sandwich2473
      @sandwich2473 4 years ago +7

      His channel has a bunch of videos that are great to just play on a second monitor, or in the background to learn stuff.
      He's pretty cool.

    • @nikoha1763
      @nikoha1763 4 years ago +1

      Agree

  • @danielm9753
    @danielm9753 4 years ago +150

    I used to know this guy. Glad he’s still at it. Easily one of the smartest dudes I’ve met in person

    • @janzacharias3680
      @janzacharias3680 4 years ago +55

      i first read prison, and was like what

    • @null-bd7xo
      @null-bd7xo 4 years ago +9

      @@janzacharias3680 bruhhhhhh xddddddd

    • @moritzschmidt6791
      @moritzschmidt6791 3 years ago +2

      Still no PhD yet, so I guess he's not as smart as some people here think.

    • @lullah85
      @lullah85 3 years ago +21

      @@moritzschmidt6791 is getting a PhD a benchmark for smartness?

    • @moritzschmidt6791
      @moritzschmidt6791 3 years ago +2

      @@lullah85 Well, I'm sure that if someone tries hard to get a PhD and doesn't get it, he's not as smart as someone who got it under the same conditions. Right?

  • @LeoStaley
    @LeoStaley 4 years ago +42

    Rob's video on the 3 laws of robotics is what really demonstrated to me how serious AI safety is.

  • @hattrickster33
    @hattrickster33 4 years ago +113

    Looks like they're taking security very seriously. This guy is always kept inside a prison to avoid his rogue AI pets from escaping.

  • @ragnkja
    @ragnkja 4 years ago +216

    The sponsor intro is too loud.
    Edit: as is the sponsor segment at the end.

    • @AustinSpafford
      @AustinSpafford 4 years ago +15

      Indeed, the video content's volume was comparable to other videos I had been watching, but that sponsor callout at the beginning was so loud that I found myself swearing and scrambling for the volume control.
      Understandably, mistakes happen, and it's unfortunate that only YouTube themselves can edit published videos.

    • @Petertronic
      @Petertronic 4 years ago +1

      It made my cat jump.

    • @philrod1
      @philrod1 4 years ago +2

      It nearly woke my child! 😱

    • @SproutyPottedPlant
      @SproutyPottedPlant 4 years ago +1

      Hint: you can use the volume control to adjust the volume

    • @philrod1
      @philrod1 4 years ago +13

      @@SproutyPottedPlant After the fact? It's not as if there was a warning.

  • @DIECARS1
    @DIECARS1 4 years ago +76

    never knew notts uni had a prison to film in

    • @Ceelvain
      @Ceelvain 4 years ago +7

      For, you know, reenacting the Stanford prison experiment. :D

    • @johnhudson9167
      @johnhudson9167 4 years ago +20

      It's a safety gym for academics

    • @zacgarby3113
      @zacgarby3113 4 years ago

      pretty sure it's at the nottingham hackspace

  • @TheStarBlack
    @TheStarBlack 4 years ago +8

    7:58 that artificial camera movement is both trippy and impressive!

  • @joshie228
    @joshie228 4 years ago +26

    I'm a man of simple tastes - I see Rob Miles, I press the like button.

    • @gasdive
      @gasdive 4 years ago +14

      It tickles my reward function.

    • @arthurcheek5634
      @arthurcheek5634 4 years ago +1

      gasdive hahahaha

  • @esquilax5563
    @esquilax5563 4 years ago +8

    I like the fact that young Robert uses the same Simpsons references I remember from 20-odd years ago

  • @letsgobrandon416
    @letsgobrandon416 4 years ago +3

    I really love listening to Rob's explanations.

  • @abcdemnopq3583
    @abcdemnopq3583 4 years ago +4

    Fab and super interesting video, also v. much appreciated your [Rob's] EA talk yesterday - will definitely be checking out the AI Safety field in more depth.

  • @intron9
    @intron9 4 years ago +11

    enable subtitles please

  • @user-yv5mt9rm3d
    @user-yv5mt9rm3d 4 years ago +46

    That is an absolutely miserable classroom!

    • @zwz.zdenek
      @zwz.zdenek 4 years ago

      I thought it had to be used by soldiers or something.

  • @locarno24
    @locarno24 4 years ago +1

    Completely agree. Big safety failures - in organisational structure, real-world industry, or whatever - usually occur because of either unknown elements in the environment or unexpected interactions between known elements. Because - at a stupidly obvious level - if you could predict it, you would (you'd hope) have done something about it.
    Thanks for the description of constraint learning. Keeping constraints and goals as modular elements is one of those things that makes obvious sense *once* someone explains it to me.

  • @moon_bandage
    @moon_bandage 4 years ago +24

    He never ended up explaining what this "gym" thing is :(

    • @giampaolomannucci8281
      @giampaolomannucci8281 4 years ago +4

      I think he did.
      He first said these are places where you train AI, then moved into explaining what "training AI" means.

    • @Hexanitrobenzene
      @Hexanitrobenzene 4 years ago +6

      ? At 12:43, the entities which AI can control in a "gym" are presented. Then at 13:26, the obstacles are presented. The whole video is presenting a framework which helps to develop safer algorithms, which can then be benchmarked in the "gym" for their safety.
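
      For anyone who wants to poke at it: the benchmark from the paper was released as the safety-gym package, and (assuming the published API, which follows the classic gym interface) a minimal episode loop looks like this - note that reward and constraint-violation cost arrive as separate signals:

        import gym
        import safety_gym  # registers the Safexp-* environments with gym

        env = gym.make('Safexp-PointGoal1-v0')
        obs = env.reset()
        done = False
        total_reward, total_cost = 0.0, 0.0
        while not done:
            action = env.action_space.sample()  # random policy, purely for illustration
            obs, reward, done, info = env.step(action)
            total_reward += reward
            total_cost += info.get('cost', 0.0)  # hazard contacts count here, not in reward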

  • @doodlebobascending8505
    @doodlebobascending8505 4 years ago +6

    I initially read this as "AI Sentry Gun" and thought Rob was having a crisis.

  • @silaspoulson9935
    @silaspoulson9935 4 years ago +10

    Could you link the paper?

  • @elephantwalkersmith1533
    @elephantwalkersmith1533 4 years ago +8

    Nonlinear optimization methods like SQP often include constraints. This is very common in fields other than machine learning. The problem with constraints is that their formulation is actually very difficult, and infeasible-path optimization is necessary to solve the learning problem.
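
    For the curious, this is bread-and-butter in classical optimization - SciPy's SLSQP solver, for instance, accepts constraints directly. A toy sketch (my own example, not from the paper):

      from scipy.optimize import minimize

      # Minimize (x-2)^2 + (y-1)^2 subject to x + y <= 2.
      objective = lambda v: (v[0] - 2)**2 + (v[1] - 1)**2
      constraints = [{'type': 'ineq', 'fun': lambda v: 2 - v[0] - v[1]}]  # 'ineq' means fun(v) >= 0

      result = minimize(objective, x0=[0.0, 0.0], method='SLSQP', constraints=constraints)
      print(result.x)  # lands on the constraint boundary, roughly [1.5, 0.5]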

    • @cheaterman49
      @cheaterman49 4 years ago

      The path optimization thing, is it kind of like hitting a local minimum because of constraint boundaries, preventing the exploration of a better solution?

  • @joshuahillerup4290
    @joshuahillerup4290 4 years ago

    I wonder if you can get complicated multidimensional shapes, like optimization problems, for reward functions

  • @ashurean
    @ashurean 4 years ago +9

    5:19 Would it be possible to mix VR and test simulations to have real humans interact with the simulated machine? Just have it open to the public and you have all the "real human reactions" you'll ever need.

    • @oxybrightdark8765
      @oxybrightdark8765 2 years ago

      When real, unselected humans mess with machines, they invariably try to teach the machine bad things. For instance, look up what happened to Microsoft's Tay.

  • @Danicker
    @Danicker 4 years ago +1

    Sneaky hitch hikers reference ;) love it!

  • @vasiliigulevich9202
    @vasiliigulevich9202 4 years ago +1

    Much love to whoever is behind those rotations of the article page. Awesome job!

  • @FuZZbaLLbee
    @FuZZbaLLbee 4 years ago

    I was waiting for Robert to make this video. 😀

  • @mohamedhabas7391
    @mohamedhabas7391 1 year ago +1

    Miles is an excellent teacher. 👨‍🏫

  • @U014B
    @U014B 4 years ago +10

    10:59 Well, pens and mugs are both toruses, so you really wouldn't need to change anything.

  • @PanicProvisions
    @PanicProvisions 4 years ago +7

    If he stays at it, in 20-30 years, this man will be in the position of people like Neil deGrasse Tyson, Bill Nye or Lawrence Krauss today, once AI starts really taking off and people are looking for public educators who have been tackling this for decades.

  • @Theoddert
    @Theoddert 4 years ago +3

    *clears throat* I am a simple robot, I see a Rob Miles AI video, I like it

  • @Marina-nt6my
    @Marina-nt6my 1 year ago

    13:39 😂 I love how they named all these things

  • @HenrikoMagnifico
    @HenrikoMagnifico 2 years ago

    I want more videos with Miles

  • @bldcaveman2001
    @bldcaveman2001 2 years ago

    Just noticed you're a slapper (aka Bassist) - Love it!

  • @charstringetje
    @charstringetje 4 years ago

    @14:53 Is that the UK bass in the background?

  • @SirWilliamKidney
    @SirWilliamKidney 4 years ago

    + 100 points for the THHGTTG reference!

  • @BlenderDumbass
    @BlenderDumbass 4 years ago

    Can we make the sponsor segment just sit somewhere at the end of the description?

  • @AA-qi4ez
    @AA-qi4ez 4 years ago +4

    Oooof... "Doggo."
    Some top-quality memes, AI researchers

  • @MyMusics101
    @MyMusics101 4 years ago

    Haven't looked at the paper yet and perhaps it's a silly idea, but couldn't you make a time-dependent reward function which gives very negative rewards for the things you're supposed to stay away from, in proportion to your distance to them (e.g. close to bad things --> -10000)? And as the training progresses, you reduce the penalty to a more reasonable value, so the agent starts caring more about its actual goal. The idea would be that it would first learn quickly to avoid the bad stuff, and *then* learn the actual task without forgetting that touching the bad things is bad.
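
    Sketching that idea as a wrapper (assuming the classic gym API and a Safety-Gym-style info['cost'] flag; all names and scales made up, untested):

      import gym

      class AnnealedPenaltyWrapper(gym.Wrapper):
          """Start with a huge penalty for touching bad things, decay it over training."""
          def __init__(self, env, start_penalty=10000.0, end_penalty=100.0, decay_steps=1_000_000):
              super().__init__(env)
              self.start, self.end, self.decay_steps = start_penalty, end_penalty, decay_steps
              self.t = 0

          def step(self, action):
              obs, reward, done, info = self.env.step(action)
              self.t += 1
              frac = min(self.t / self.decay_steps, 1.0)
              penalty = self.start + frac * (self.end - self.start)  # linear anneal
              reward -= penalty * info.get('cost', 0.0)  # cost flags contact with bad things
              return obs, reward, done, info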

    • @soumilshah1007
      @soumilshah1007 4 years ago +1

      With current reinforcement learning systems, once the agent has learned not to do something, it won't do it. There's no way for it to know that you've reduced the punishment. That's the problem with exploration vs exploitation: the most common approach I've seen for getting the agent to re-explore actions whose reward might have changed is to occasionally take actions at random, which in this case would be a really bad idea. You gave your self-driving car a large negative reward for a reason. You can't then deliberately program it to randomly crash and ignore its reward.
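
      The "occasionally take actions at random" approach is epsilon-greedy exploration; a minimal sketch makes clear why it clashes with safety:

        import random

        def epsilon_greedy(q_values, actions, epsilon=0.1):
            if random.random() < epsilon:
                return random.choice(actions)  # may re-try actions already known to be catastrophic
            return max(actions, key=lambda a: q_values[a])  # exploit current estimates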

  • @mare4602
    @mare4602 4 years ago

    awesome video

  • @qeithwreid7745
    @qeithwreid7745 3 years ago

    What would be a typical task for the first generation of AI?

  • @springboard9642
    @springboard9642 4 years ago

    Are there theorists or programmers building AIs that they can watch learn?

  • @007filko
    @007filko 4 years ago +1

    So, if we look at how human babies tend to learn, it's usually also by doing random things, which very often happen to be quite dangerous, even if only to the baby itself. It's not that a baby crawling around can't do anyone harm. The difference is, I believe, that a human baby is under constant supervision by its parent(s).
    We know perfectly well that it's impossible for any human to constantly observe and analyse the learning process of an AI, even with the use of reward modelling. If there were a possibility of something dangerous happening, we would have to sit with a power-off button in a virtual world, predicting when an agent is going to crash or destroy something, and then manually give negative feedback.
    However, maybe a solution worth considering would be to have this kind of "parenting agent", trained specifically to predict the "learning" agent's actions, or just to switch it off when it detects a possible disaster?
    To put it in other words - to have this constraint in the form of another trained AI?
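
    As a sketch, that "parenting agent" is a veto layer between the learner and the environment (every interface below is hypothetical):

      def supervised_step(env, learner, watchdog, obs, risk_threshold=0.9):
          """A separately trained 'parent' model vetoes actions it predicts end in disaster."""
          proposed = learner.act(obs)
          if watchdog.predict_disaster(obs, proposed) > risk_threshold:
              learner.give_feedback(obs, proposed, -100.0)  # the manual negative signal, automated
              proposed = env.safe_noop_action               # hypothetical: brake / freeze / power off
          return env.step(proposed)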

  • @lHenry97
    @lHenry97 4 years ago +1

    What exactly is the difference between reinforcement learning penalties and these constraints?

    • @danieljensen2626
      @danieljensen2626 4 years ago

      Seems like the penalty basically becomes infinite after a set number of negative outcomes, and you program that limit in yourself. There are probably other differences, but I don't know enough to understand them.
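
      In constrained-RL terms, that programmed limit is a per-episode cost budget. A toy version (assuming a Safety-Gym-style info['cost'] signal; names made up):

        class CostBudget:
            """Tally constraint violations and flag when the episode's budget is spent."""
            def __init__(self, limit=25.0):  # a fixed limit of this kind is what gets benchmarked
                self.limit, self.total = limit, 0.0

            def update(self, info):
                self.total += info.get('cost', 0.0)
                return self.total > self.limit  # True once the constraint is violated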

    • @HalcyonSerenade
      @HalcyonSerenade 4 years ago

      From what I can tell, penalties are negative events that are _responded_ to, whereas constraints are considered _before_ they're violated. Assigning a _penalty_ to hurting a human wouldn't be ideal, because then the AI would only learn not to do that _after_ they've already hurt someone.
      That's the high-concept as I understand it... I'm actually pretty interested in getting into machine learning and might do some research into the topics discussed in this video, so maybe I'll make a follow-up comment (or edit this one) with a more robust answer if someone else doesn't give one in the meantime :P

  • @the1exnay
    @the1exnay 4 years ago +1

    I was thinking about how I explore safely. A simplified, AI-friendly version of it could be: I assess the likelihood of a negative outcome happening and then apply a negative value equal to that probability multiplied by the value of the negative outcome. So if there's a 0.1% chance of me dying and dying is -1,000,000, then I'd apply a -1,000 to the action. But then I also account for uncertainty in a way that increases the likelihood I'll explore something, but also increases the care taken exploring it. So: a reward for learning, plus an increase to the negative that's proportional to how uncertain it is, which encourages finding the safest, surest way to get the answer even if that way takes longer.
    I'm uncertain how easy that'd be to turn into an actual program, or how effective it'd be, but it seems reasonable to try copying humans.
    It doesn't really solve how to get started, though, because flailing like a baby with an arm that weighs a ton is a horrible idea. Maybe it's possible to give the AIs neutered bodies to learn with before being transferred to a more dangerous body?
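
    That scheme, written out as a toy scoring function (every name and scale here is invented):

      def risk_adjusted_value(expected_reward, p_bad, bad_outcome_value,
                              uncertainty, curiosity_bonus=10.0, caution_scale=2.0):
          expected_loss = p_bad * bad_outcome_value              # e.g. 0.001 * -1_000_000 = -1000
          exploration = curiosity_bonus * uncertainty            # reward for learning something new
          caution = caution_scale * uncertainty * expected_loss  # tread extra carefully when unsure
          return expected_reward + expected_loss + exploration + caution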

    • @subschallenge-nh4xp
      @subschallenge-nh4xp 4 years ago

      Did life make experience or does experience make life? Seriously

    • @the1exnay
      @the1exnay 4 years ago

      william polo valerio
      I don't understand what you're asking

    • @gasdive
      @gasdive 4 years ago

      The other thing I've noticed is that they're seemingly not programming in boredom.
      I get bored doing the same thing all the time. This seems to prevent me getting stuck in a local optimum.
      For example, I'll drive the same route to work every day, but then get bored and try a quite different route, expecting it to be slower, but occasionally it's faster, or less stressful or smoother. In other words I intentionally reduce expected reward, in the hope of getting something unexpected.

    • @LochyP
      @LochyP 4 years ago +1

      @@gasdive I understand and half agree with your point, but making robots get 'bored' sort of defeats the entire point of using them over humans for automation

    • @trucid2
      @trucid2 4 years ago

      How do you assess the likelihood that your action is unsafe if you've never performed it before?

  • @witeshade
    @witeshade 4 years ago +13

    I think I use the internet too much. I read "fasthosts" as "fast thots"

    • @ciarfah
      @ciarfah 4 years ago +2

      Daniel G begone

    • @PopeGoliath
      @PopeGoliath 4 years ago

      I read "fast thots" as "fast tots" and really wanted me some drive-through taters.

  • @ExOster-ys9sj
    @ExOster-ys9sj 3 years ago

    Where can I find the paper? I looked it up on Google Scholar and can't find it!

  • @JulianDanzerHAL9001
    @JulianDanzerHAL9001 4 years ago +2

    does everything have to be controlled by learning? I mean, I get that it's a nice theoretical exercise which might become relevant eventually, and these are just examples - and this is partially already done - but for a robotic arm, for example, I'd use learning only to output a desired hand location, then use (comparatively) simple inverse kinematics to figure out how to move the arm to get the hand there, while also checking that the arm cannot get anywhere near a human - the learning part has no direct control of the arm, and if it tries to move the arm through the human, the kinematics won't let it and it will have to find a way around

    • @BurningApple
      @BurningApple 4 years ago +1

      The robot arm is a toy problem - it doesn't map to all cases, e.g. a robot learning to walk

    • @JulianDanzerHAL9001
      @JulianDanzerHAL9001 4 years ago +1

      @@BurningApple yeah, but in many applications a similar though more complex solution might be doable - if you have a walking robot controlled by a learning algorithm, you could instead have 2 learning algorithms and a set of simple geometry equations, where the first learner tries to solve a problem and tells the robot where to go, the geometry limits where that goal location CAN be (not near humans, cars, fragile objects, etc), and the second learner moves the robot, but its goal is not to solve a problem, only to reach the (previously limited) location
      it can't work everywhere, which is why this kind of research is important, but I think it's sometimes an overlooked solution in practice
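
      A rough sketch of that split - the learner proposes a goal, classical control enforces where the goal may be (all interfaces hypothetical):

        import numpy as np

        def safe_move(policy, ik_solver, obs, human_positions, min_clearance=0.5):
            """The learner proposes a hand target; it never drives the joints directly."""
            target = np.asarray(policy.propose_hand_position(obs))
            for h in human_positions:
                offset = target - h
                dist = np.linalg.norm(offset)
                if dist < min_clearance:  # push the proposal out of the keep-out zone
                    target = h + offset / max(dist, 1e-9) * min_clearance
            return ik_solver.joint_trajectory(target)  # solver can also check link clearances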

  • @lesslesser6849
    @lesslesser6849 4 years ago

    Speed limits give a data point from which the collision penalty could be deduced.
    I see an absence of a penalty function in the exploring-not-getting-a-haircut space, aside from personal comments on YouTube inferring one.

  • @RedByte1608
    @RedByte1608 1 year ago

    Hello, I have a question about this topic:
    Is it possible to imprison these robots in an environment where they can't harm any humans, but can do all the tasks that are assigned to them?
    For example, in a warehouse where there is no way out for these robots,
    but where they can do all the warehouse work, or in a commercial kitchen where they can only interact with the kitchen and nothing else.
    I think the best solution is to separate these robots from humans as much as possible.
    I believe that it is impossible to develop an algorithm that can cover all hazards and avoid harming a human being.

  • @THEPHILOSOPHYIS
    @THEPHILOSOPHYIS 4 years ago

    Hey! I really like your videos. And I am learning JSP right now after completing the basics of Java. Could you please make a video on why scriptlets in JSP are discouraged? Thanks.

  • @GFmanaic
    @GFmanaic 4 years ago

    I didn't come here to get roasted thank you very much

  • @markhall3323
    @markhall3323 4 years ago +9

    I liked the content but not the adverts, too intrusive

    • @ragnkja
      @ragnkja 4 years ago +2

      Mark Hall
      And, in this case, too loud.

    • @Speed001
      @Speed001 4 years ago +1

      @@ragnkja Other than that, I was okay with it

  • @TheBinaryHappiness
    @TheBinaryHappiness 4 years ago +2

    1337 plate number, aww yeahh!

  • @marflfx
    @marflfx 4 years ago +1

    Have you seen what people are doing with AI in StarCraft and StarCraft2?

    • @cabbageman
      @cabbageman 4 years ago

      I have not, is there a video you can link?

  • @theMifyoo
    @theMifyoo 4 years ago

    Here is an idea: baby robots. You make a smaller-scale, perhaps squishier, body for learning in and train the robot there. That way the baby can flail about while learning what is safe or not safe, without harming anything.

  • @ri-gor
    @ri-gor 4 years ago +1

    the license plates are 1337 XD

  • @konradw360
    @konradw360 4 years ago +1

    sponsor? a sponsor

  • @reedl9452
    @reedl9452 4 years ago +4

    "You can't train self driving cars safely in the real world"
    Tesla fanboy has entered the chat

    • @zwz.zdenek
      @zwz.zdenek 4 years ago

      More like: Tesla: Hold my electrolyte!

    • @Speed001
      @Speed001 4 years ago

      Ehh, controlled environment

  • @iugoeswest
    @iugoeswest 4 years ago

    Cool

  • @jetjazz05
    @jetjazz05 4 years ago +1

    So the problem with AI is it's like a moving target where the target can move in an almost infinite number of ways. Nice.

    • @drawapretzel6003
      @drawapretzel6003 4 years ago

      they just need to make an ai that simulates the target, and then simulates how it would get the target.

  • @y.h.w.h.
    @y.h.w.h. 4 years ago

    9:13 this is such a relatable way to explain the unsexy 99% of research and development.

    • @RockWolfHD
      @RockWolfHD 4 years ago

      What Guide is he talking about?

    • @Hexanitrobenzene
      @Hexanitrobenzene 4 years ago +1

      @@RockWolfHD
      A book by Douglas Adams, "The Hitchhiker's Guide to the Galaxy".

    • @RockWolfHD
      @RockWolfHD 4 years ago

      @@Hexanitrobenzene thank you.

    • @Hexanitrobenzene
      @Hexanitrobenzene 4 years ago

      @@RockWolfHD
      You are welcome :)

  • @pb-vj1qs
    @pb-vj1qs 4 years ago +1

    What is your channel?

  • @glocksupremo
    @glocksupremo 4 years ago +1

    where are the subtitles tho

    • @shledzguohn
      @shledzguohn 4 years ago

      none of the computerphile videos even have auto-generated subtitles enabled; it makes me sad! ideally, they'd caption them for maximum accessibility, but i don't see the benefit of disabling the auto-captions... it sure makes them harder to follow 😔

    • @Computerphile
      @Computerphile 4 years ago +2

      The automatic subtitles are all enabled. There was a bug in YT where they didn't show because of Community Subtitles. I have switched Community Subtitles off in an attempt to get auto subs to appear again - not sure why they aren't there >Sean

    • @WilliamDye-willdye
      @WilliamDye-willdye 4 years ago

      @@Computerphile As of this writing, the option to show subtitles does not appear for me.

    • @Computerphile
      @Computerphile 4 years ago

      @@WilliamDye-willdye still don't understand this - photos.app.goo.gl/sqT3j7r81AgKDtM58

  • @billykotsos4642
    @billykotsos4642 4 years ago

    Yeaaaah boy

  • @goethe528
    @goethe528 4 years ago

    Did you lose your good camera, with the tripod?

  • @SkarbowkaZokopane
    @SkarbowkaZokopane 4 years ago +1

    Dude looks like skinny Ethan from H3H3

  • @AgentM124
    @AgentM124 4 years ago +2

    Faster than their sponsor.

  • @hermask815
    @hermask815 4 years ago

    What if A.I. starts to think outside the box?

  • @TheArchsage74
    @TheArchsage74 4 years ago +1

    Damn didn't know Ben Schwartz knew so much about AI

  • @declup
    @declup 4 years ago

    This video has clear themes, but what is its message? What's its point? Could a link to the paper have sufficed? Is this video itself helpful?

    • @y.h.w.h.
      @y.h.w.h. 4 years ago

      Look up the concept of science communicators.

  • @raleighcockerill
    @raleighcockerill 4 years ago +2

    Engagement

  • @mvmlego1212
    @mvmlego1212 4 years ago +3

    Robert was sounding like Jordan Peterson around 6:30-6:45, LOL.

    • @iAmTheSquidThing
      @iAmTheSquidThing 4 years ago +1

      Peterson (and also John Vervaeke) get quite a lot of their lingo from cybernetics. A fair bit of the theory of Artificial Intelligence was formulated decades ago, and influenced psychology. But it's only recently that we've had computers powerful enough to actually execute it in a useful way.

  • @Jojoxxr
    @Jojoxxr 4 years ago

    Griswold

  • @mikescott7530
    @mikescott7530 4 years ago +1

    Bamzooki with extra steps

  • @Qkano
    @Qkano 1 year ago

    Great talk!
    10:15 Rename it.
    Just imagined the "Biden Robot" trying to "avoid a recession" ...
    and how it discovered "completely redefine what a recession is"
    instead of actually changing the economy.

  • @gowikipedia
    @gowikipedia 4 years ago

    Rob Miles legitimately looks jaundiced and has done for ages. Someone tell him to eat better.

    • @gowikipedia
      @gowikipedia 4 years ago

      It's not ok to have waxy, yellow skin

    • @denisschulz3814
      @denisschulz3814 4 years ago

      He looks normal 😅

    • @gowikipedia
      @gowikipedia 4 years ago

      @@denisschulz3814 false, look again

  • @WillToWinvlog
    @WillToWinvlog 4 years ago +1

    This is one of Ben Schwartz's characters!

    • @AndreRhineDavis
      @AndreRhineDavis 3 years ago

      omg I never realised before how much Rob Miles actually does look like Ben Schwartz!

  • @Shabazza84
    @Shabazza84 9 months ago

    Number 5 needs more input....

  • @justusstamm1485
    @justusstamm1485 4 years ago +1

    Never have I clicked faster

  • @spicybaguette7706
    @spicybaguette7706 4 years ago

    5:10 don't drop anything near it

  • @blackmage-89
    @blackmage-89 3 years ago

    Common sense seems to be the most difficult thing for AIs to learn.

  • @deanvangreunen6457
    @deanvangreunen6457 4 years ago

    200 points = don't touch baby
    100 points = make coffee
    50 points = push power buttons
    AI = greedy reward function

  • @rtg5881
    @rtg5881 2 years ago

    I don't know, seems fairly straightforward to me. I don't know how often humans crash - say every 500 trips / every 10,000 kilometers - okay. Then whatever reward it gets for 500 trips or 10,000 kilometers is the negative for a crash. Sure, maybe you should refine it by severity of the impact, and of course there are some things I'm happy to take a greater risk on than others. Maybe I have a medical emergency and need the AI to get me to the hospital quickly. Or maybe I need the AI to flee from the police for me. Maybe we can have a dial for that. Certainly it should be entirely open to modification by the owner of the car, or he's not the owner.
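
    Written out, that calibration is one line of arithmetic (numbers from the comment, reward units arbitrary):

      reward_per_trip = 1.0                              # say a completed trip is worth 1 unit
      trips_per_crash = 500                              # assumed human crash rate
      crash_penalty = reward_per_trip * trips_per_crash  # -500 per crash: indifferent at human-level risk
      # Set the penalty any higher and the optimum shifts toward driving more safely than humans do.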

  • @kasuntharaka8040
    @kasuntharaka8040 1 year ago

    Gym???

  • @temptemp563
    @temptemp563 3 years ago

    ... like watching an ai learn how to programme an ai ...

  • @sevrjukov
    @sevrjukov 4 years ago +3

    Blue car at 7:42 has "1337C" licence plate. A Rick&Morty reference? :-)

  • @smithwilliams5637
    @smithwilliams5637 3 years ago

    license plate "1337 c"? l33t, don't mind if I do

  • @Pehr81
    @Pehr81 4 years ago

    1337

  • @amrmoneer5881
    @amrmoneer5881 4 years ago

    More real world examples would be appreciated

  • @Nagria2112
    @Nagria2112 4 years ago

    Goodbye

  • @jetjazz05
    @jetjazz05 4 years ago

    ....the first iteration of the Matrix.

  • @R.Daneel
    @R.Daneel 2 years ago

    So if you want to create a self driving car, you release it half-finished and tell people they need to keep their hands on the wheel. Then you pay very close attention to when the driver makes corrections to what the autopilot is doing.
    Or if you find people are voting videos down only because lots of other people have, polluting the data, you hide the down votes.
    The motivations of all modern companies suddenly look very different from the old-school "maximize profit".

  • @95reide
    @95reide 3 years ago

    I consider myself to be quite knowledgeable when it comes to Hitchhiker's Guide to the Galaxy, and as best I can tell, he butchered whatever he was trying to reference.
    If I'm correctly inferring what he's going for, it's the 42 bit.
    Engineers: *Builds a super-powerful computer system.* "What's the answer to the ultimate question of Life, the Universe, and Everything?"
    Computer system: ... *7.5 million years later.* "42. You asked for the answer to the ultimate question. But you'll need an even more sophisticated system in order to figure out what the right question is."

  • @TechyBen
    @TechyBen 4 years ago

    This. Why did we spend 50 years making "robots" tested in real life, wasting time on broken designs and materials, when we can test 10s or 100s in virtual spaces, then build 1 or 2 working prototypes? Yeah, I know computation was low for a long time, but if building a robot + its computer takes time, how is building just the computer and using existing servers to simulate any more expensive?

    • @timconlin7692
      @timconlin7692 4 years ago +4

      In the video it's mentioned that simulation can only get you so far as some things are too complex to simulate with any sort of meaningful accuracy, like the driving habits of humans for example. Less computational power back then also meant less accurate simulations.

    • @TechyBen
      @TechyBen 4 years ago

      @@timconlin7692 I agree. But coming from when I was a kid, it was all about robots driving around a room/box. The kind of thing we could simulate, and the kind of thing we could see was not gonna become "self aware" from a tiny 8bit chip. :P

  • @StacyDubC
    @StacyDubC 4 years ago

    Blue car == 1337

  • @MrRobket
    @MrRobket 4 years ago

    7:10 1337

  • @AlexandreGurchumelia
    @AlexandreGurchumelia 4 years ago

    Safety gym? That sounds so cringe.

  • @zaprowsdower9471
    @zaprowsdower9471 4 years ago +4

    DOWNVOTED
    unannounced advertising

    • @SpeakShibboleth
      @SpeakShibboleth 4 years ago +1

      Didn't catch the first few seconds, huh?

    • @zaprowsdower9471
      @zaprowsdower9471 4 years ago

      You lost me, not following what you're saying.

    • @SpeakShibboleth
      @SpeakShibboleth 4 years ago

      That's when they announced the advertising.

    • @uniquename6925
      @uniquename6925 4 years ago +1

      This isn't Reddit, your down votes mean nothing here

    • @zaprowsdower9471
      @zaprowsdower9471 4 years ago

      @@SpeakShibboleth
      As a courtesy to the subscriber / viewer, I'm suggesting a channel include the text _"Includes Paid Subscription"_ prior to the advertising.
      Merely announcing the name of the advertiser can hardly be considered prior notice that the video contains advertising.

  • @christopherdasenbrock2683
    @christopherdasenbrock2683 4 years ago

    first

    • @AgentM124
      @AgentM124 4 years ago +1

      Sorry. I beat you to it.

    • @UmaiKayu
      @UmaiKayu 4 years ago +2

      @@AgentM124 You were the zeroth, he was the first :-)

  • @Faladrin
    @Faladrin 4 years ago

    And none of this is AI. These are just really complex human-written programs.
