Are AI Risks like Nuclear Risks?

  • Published: 16 Jul 2024
  • Concerns about AI cover a really wide range of possible problems. Can we make progress on several of these problems at once?
    With thanks to my Patreon supporters:
    - Ichiro Dohi
    - Stefan Skiles
    - Chad Jones
    - Joshua Richardson
    - Fabian Consiglio
    - Jonatan R
    - Øystein Flygt
    - Björn Mosten
    - Michael Greve
    - robertvanduursen
    - The Guru Of Vision
    - Fabrizio Pisani
    - Alexander Hartvig Nielsen
    - Peggy Youell
    - Konstantin Shabashov
    - The Dodd
    - DGJono
    - Matthias Meger
    - Scott Stevens
    - Emilio Alvarez
    / robertskmiles
  • Science

Comments • 401

  • @diablominero
    @diablominero 5 лет назад +212

    What made the Demon Core so dangerous was that physicists thought they were too cool to use safety precautions. How do we prevent that in AI research?

    • @SenorZorros
      @SenorZorros 3 года назад +27

      Generally the problem is averted by isolated test benches, big red buttons, power cutoffs, and, if it goes really wrong, enclosed boxes onto which we can pour several metres of concrete. Problem is, many researchers are also too cool for those.

    • @AugustusBohn0
      @AugustusBohn0 3 года назад +14

      The SolarWinds hack is still fresh in my mind as I read this, and if we can't keep people who make a tool as important and far-reaching as SolarWinds from taking shortcuts, like setting a very important password to [name of company]123, then I really don't know what we can do other than maybe train/indoctrinate people to have a deep-rooted belief that mistakes and shortcuts will lead to disaster

    • @spaceanarchist1107
      @spaceanarchist1107 2 года назад +11

      @@SenorZorros Chernobyl - researchers turned off safety systems in order to perform experiment

    • @MarkusAldawn
      @MarkusAldawn 2 года назад

      The general method has been to fuck up and get people killed, and then go "safety would have prevented this," and then implement safety.
      Not sure we have the luxury of that this time around.

    • @idiomi8556
      @idiomi8556 Год назад

      @Augustus Bohn I fail to see the issue with your solution? Fixes that issue and a bunch of others

  • @NathanTAK
    @NathanTAK 7 лет назад +296

    The solution to the self-driving car trolley problem:
    1. Choose one of the options at random
    2. Hit them
    3. Turn around, hit the other one too.
    No moral dilemma present in making a choice!

    • @ccgarciab
      @ccgarciab 5 лет назад

      Naþan Ø MTD

    • @sweeper7609
      @sweeper7609 4 года назад +4

      A: The only way this situation can happen is because:
      1 - A bug. We can't do ethics with a buggy car.
      2 - The car has been programmed to be malicious. We can't do ethics with a malicious car.
      3 - The driver caused this situation. If the car can take control, it should kill the lone human.
      4 - The crowd caused this situation. Kill the crowd; I don't want idiots crossing anywhere because "lol, I don't care, the car will save me".
      B: The only way this situation can happen is because:
      1, 2 - Same
      3 - Same, but hit the wall
      4 - Same, but only one human caused this situation.
      C: The only way this situation can happen is because:
      1, 2, 3 - Same
      4 - Same, with more humans

    • @iruns1246
      @iruns1246 4 года назад +13

      Solution: Make every self-driving AI take a test in simulated settings. The test should be rigorous (maybe millions of scenarios), designed by a diverse ethics committee, and open to democratic debate. After that, as long as the AI passed the test and the copy is operating on sufficient hardware, there shouldn't be ANY liability. Maybe after there are accidents the test should be reviewed, but there shouldn't be criminal charges against anyone. Treat it like doctors following the procedures.
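
A minimal sketch of what such a scenario-based certification could look like in code (the policy interface, scenario names, and pass threshold below are hypothetical illustrations, not any real standard):

```python
# Hypothetical sketch: certify a driving policy against a fixed, committee-approved
# battery of simulated scenarios. All names and numbers here are made up.
from dataclasses import dataclass
from typing import Callable, Sequence


class Policy:
    """Stand-in for a self-driving policy under test."""
    def decide(self, observation: dict) -> str:
        return "brake"  # trivial placeholder behaviour


@dataclass
class Scenario:
    name: str
    run: Callable[[Policy], bool]  # True if the policy's outcome is judged acceptable


def certify(policy: Policy, scenarios: Sequence[Scenario], pass_rate: float = 0.999) -> bool:
    """A policy version is certified only if it passes (almost) every scenario."""
    passed = sum(1 for s in scenarios if s.run(policy))
    return passed / len(scenarios) >= pass_rate


# Usage: freeze and deploy only certified policy versions; after an accident,
# review and extend the scenario battery rather than prosecuting individuals.
scenarios = [Scenario("child_steps_out", lambda p: p.decide({"obstacle": "child"}) == "brake")]
print(certify(Policy(), scenarios))
```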

    • @eadbert1935
      @eadbert1935 3 года назад +1

      The issue with the self-driving car is: we worry so much about these questions of morality and liability that we forget self-driving cars would automatically reduce the moral questions by not being AS fallible as humans.
      Like, say we get 90% fewer accidents (I made this number up, I don't have sources for this), and we worry about what to do with the last 10% instead of being happy with the 90% reduction.
      FFS, letting ANYONE decide until we have a better solution is a superior moral choice to waiting for the better solution.

  • @benjaminbrady2385
    @benjaminbrady2385 7 лет назад +300

    "We will have a problem with morons deliberately jumping in front of them for fun"
    Thanks for the idea!

    • @NathanTAK
      @NathanTAK 7 лет назад +37

      I think our self-driving car algorithms will have to be programmed to run them down.

    • @DavidChipman
      @DavidChipman 6 лет назад +26

      Car: "You asked for it, idiot!"

    • @sandwich2473
      @sandwich2473 5 лет назад +8

      This problem already exists, to be honest.

    • @e1123581321345589144
      @e1123581321345589144 5 лет назад +3

      Check this out.
      www.nytimes.com/2018/12/31/us/waymo-self-driving-cars-arizona-attacks.html

    • @adrianalexandrov7730
      @adrianalexandrov7730 4 года назад +7

      Actually you can already jump in front of some Volvo SUVs and they would automatically brake.
      Good idea for some trolling.
      You just need to identify the right one correctly, and for it to be functional. Kind of a leap of faith right now.

  • @MetsuryuVids
    @MetsuryuVids 7 лет назад +182

    Is that "I Don't Want To Set The World On Fire" at the end?
    Amazing ahahahah
    Also amazing video.

    • @Xartab
      @Xartab 7 лет назад +9

      On an ukulele, if I'm not mistaken

    • @NathanTAK
      @NathanTAK 7 лет назад +11

      +Batrax An ukulele? Perhaps an electric battle axe ukulele?

    • @Xartab
      @Xartab 7 лет назад +4

      Of course, how silly of me to miss that!

    • @ommurg5059
      @ommurg5059 4 года назад +1

      Sure enough!

  • @grahamrice1806
    @grahamrice1806 7 лет назад +234

    Forget "what if my robot ignores the stop button?", what about "what if my robot ignores my safe word!?" 😅

    • @PickyMcCritical
      @PickyMcCritical 7 лет назад +36

      Sounds kinktastic.

    • @NZAnimeManga
      @NZAnimeManga 5 лет назад +24

      @@PickyMcCritical please assume the position

    • @treyforest2466
      @treyforest2466 5 лет назад +9

      The number of likes on this comment must remain at exactly 69.

    • @TheUntamedNetwork
      @TheUntamedNetwork 4 года назад +18

      Well... if its terminal goal was your pleasure, you're in for a hell of a ride!

    • @xcvsdxvsx
      @xcvsdxvsx 4 года назад

      ew

  • @IsYitzach
    @IsYitzach 7 лет назад +60

    I've been wondering what people mean by "ignite the atmosphere."

    • @osakanone
      @osakanone 5 лет назад +27

      big whoosh burny burny aarrghhh dying smusshfffppprrwwwfffhhhggwgglffpfpffttttBANGBANGffppfftthhssshhhhppfsssttttttttt...
      Only like, a few centuries in length.

    • @RobKMusic
      @RobKMusic 4 года назад +10

      Accidentally causing a runaway fusion reaction of the nitrogen in the air all around us, turning the Earth into a little star for a few days, or however long it would take to completely burn off the planet's atmosphere, essentially extinguishing 99% of life.

    • @kungfreddie
      @kungfreddie 4 года назад +3

      @@RobKMusic we had hundreds of fusion reactions in the atmosphere in the last century... and it didn't happen!

    • @bastion8804
      @bastion8804 4 года назад +4

      @@kungfreddie First of all, no we don't. Not even close. Second, 100 fusion bombs are tame compared to what they're describing.

    • @kungfreddie
      @kungfreddie 4 года назад +9

      @@bastion8804 yes we have! The number of atmospheric tests (not counting underground detonations) of thermonuclear devices over 1 MT is 67(!). And that's just over 1 MT. The total number of atmospheric tests is 604! And probably only a minority of them were not thermonuclear.
      So I'm sorry... you're wrong about that!

  • @matteman87
    @matteman87 7 лет назад +112

    Love this channel, keep up the good work!

  • @economixxxx
    @economixxxx 7 лет назад +17

    I swear I was thinking about how much I'd like to see more of this channel, then boom! New vid. Awesome job, mate!

  • @lukaslagerholm8480
    @lukaslagerholm8480 7 лет назад +5

    The inclusion of images of articles, websites, research papers and the like is really good and I love it. Keep it up, and don't be afraid of letting them hang around for a little longer so that more people notice them and actually read them; they're quite often very interesting and on point. Keep up the good work!

  • @magellanicraincloud
    @magellanicraincloud 6 лет назад +3

    I agree with you Rob about Universal Basic Income. CGP Grey did a great (terrifying) video called "Humans Need Not Apply" where he raised the question of what you do when people are not only unemployed but unemployable, simply because they are human.
    Unless we have some means of providing for the vast, overwhelming majority of people who don't own the robot factories, what are they supposed to do? How are they supposed to be able to afford food and shelter? These are social questions which we need to be discussing right now, because the time when the solutions will be needed is right around the corner.

    • @alexpotts6520
      @alexpotts6520 4 года назад +2

      The way I think about this is in terms of Maslow's hierarchy of needs. A society where the AI owners gobble up all the wealth, and 99% of us are destitute, is obviously terrible, there is an overwhelming lumpenproletariat which may be falling short of even the first level of the hierarchy: food, shelter, survival.
      A UBI world would be interesting. We'd all have plenty to live on, especially since goods are much cheaper in a post-work world because there are no labour costs in production - indeed if you subscribe to the labour theory of value (not sure I do) then pretty much all goods are worthless at this point. So we're doing well on the first couple of rungs, indeed we have all the material wealth we could possibly want, and the third level - love - is really beyond the remit of the state even in principle. (Well, maybe AIs could make humans fall in love with each other. Is it ethical to mess with people's brains in this way to make them happy? That's kind of off-topic.)
      But where the UBI proponents fall down is that they get stuck halfway up the pyramid. Careers, or if not that then some sort of mastery at an amateur level (and remember, these AIs will outcompete us at everything, not just our jobs), is largely necessary for the higher rungs of self-esteem and self-realization. The only way I can think of getting us round this is for AIs to wirehead us - in short, we become a race of vegetables. Is that what we want?
      In summary, UBI is certainly an improvement on "do nothing", but it's hardly a satisfactory solution. There must be something better, or at least UBI can only be part of the solution even if it is an important part.

  • @williambarnes5023
    @williambarnes5023 4 года назад +35

    AI risks are NOT like nuclear risks.
    For example, the AI has a chance of _winning._

  • @godlyvex5543
    @godlyvex5543 Год назад +7

    I think the economic risks are only big risks because of the idea that everyone NEEDS a job to make money to live. If something like UBI were implemented, maybe it wouldn't be a catch-all solution, but it wouldn't be nearly as bad as if everyone were unemployed with the current system.

  • @zappawench6048
    @zappawench6048 3 года назад +1

    Talking about igniting the entire atmosphere and wiping all life off the face of the Earth forever; outro music is "I don't want to set the world on fire". Savage.

  • @mikolajpiotrowski6043
    @mikolajpiotrowski6043 4 года назад +11

    4:57 Marie Curie died from radiation poisoning, BUT it had nothing (or not so much) to do with her research: she was a volunteer medic in a mobile X-ray diagnostics unit during WWI (there wasn't any kind of protection for staff), so all of the personnel received radiation equal to the sum of the doses from every scan.

    • @Xazamas
      @Xazamas Год назад +4

      Also, modern X-ray machines give you a much smaller dose, because both sending and receiving X-rays have gotten significantly better since WWI.
      Reminds me of the "radiologist tells you it's perfectly safe to have your X-ray taken, then hides in the adjacent room or behind a lead cover" joke/meme. The reason for this is that while reassuring you by subjecting themselves to a few stray X-rays would cause them no measurable harm, doing this with every patient would eventually add up to a significant, harmful dose.
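
A rough back-of-the-envelope illustration of why the stray doses add up for staff (every number below is an assumed, order-of-magnitude figure, not something from the video or the comment):

```python
# Why one scan is fine for the patient but staying in the room for every scan is not.
# All numbers are rough assumptions for illustration only.

patient_dose_per_scan_msv = 0.1   # assumed dose from one chest X-ray
stray_dose_per_scan_msv = 0.05    # assumed scatter dose to an unshielded bystander
patients_per_day = 30             # assumed workload
working_days_per_year = 250

staff_dose_per_year = stray_dose_per_scan_msv * patients_per_day * working_days_per_year

print(f"Patient, one scan:   ~{patient_dose_per_scan_msv} mSv")
print(f"Unshielded staff/yr: ~{staff_dose_per_year:.0f} mSv")   # ~375 mSv/yr,
# far above the roughly 20 mSv/yr occupational limit, hence the lead screen.
```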

    • @fss1704
      @fss1704 Год назад

      Dude, they didn't know what they were doing; handling radioactive materials with your bare hands is no good.

  • @TheJamie109
    @TheJamie109 5 лет назад +3

    I just recently came upon your videos on Computerphile. I watched all of them in an afternoon and had to know more. So here I am, more leisurely but persistently going through your channel. You do such a great job of fusing big complex ideas with a bit of humour and real-world applications, without any broad "dumbing down" of the information you provide.
    I have always enjoyed programming and sought a different path right out of high school. Your videos have reignited my passion and I hope to steer towards this passion as my life progresses.
    Thank you. Keep up the great work.

  • @vanderkarl3927
    @vanderkarl3927 4 года назад +96

    "The worst case scenarios for AI are *worse* than igniting the atmosphere, and our understanding of AI is probably less complete than their understanding of nuclear physics was."
    This sentence is one of the most bone-chilling, brain-melting, soul-rending, nightmarishly terrifying statements that the human race has ever produced, and we've produced some really, really nasty ones.

    • @MorgurEdits
      @MorgurEdits Год назад +1

      So I wonder what the arguments are for the conclusion that it is worse.

    • @MrAwawe
      @MrAwawe Год назад

      @@MorgurEdits a superintelligence could potentially launch rockets into space in order to harvest all the material in the galaxy, say for the purpose of making stamps. It could therefore be a threat to other potential civilizations in the universe, and not just to life on Earth.

    • @alexanderpetrov1171
      @alexanderpetrov1171 Год назад +12

      @@MorgurEdits Imagine an AI that intends not just to destroy humanity, but to make it suffer as much as theoretically possible... And then do the same with all other life in the Universe.

    • @AtticusKarpenter
      @AtticusKarpenter Год назад +2

      @@alexanderpetrov1171 Also, such an AI could genetically modify people to make them feel suffering much more strongly than nature intended, and at the same time not let them go crazy or otherwise escape the suffering. So there is no Christian Hell, but people could create one through misuse of advanced AI, lol.
      The reverse is also possible - it is not for nothing that intellect is the strongest thing in the universe.
      (Google Translate made my fancy phrases look even weirder, but oh well)

    • @mithrae4525
      @mithrae4525 Год назад

      @Freedom of Speech Enjoyer There's several Black Mirror episodes based on the premise of people uploading their minds into computers, notably one in which it's used by old or ill people as a sort of heaven. In the scene of its rows of servers maintaining those minds and their paradise world I couldn't help wondering what would happen if there was some kind of fault with the system. If that were possible, introducing AI into the scenario would raise all kinds of interesting possibilities simply from its failure to understand what would constitute a paradise.

  • @showdownz
    @showdownz 4 года назад +16

    Love your videos. Just want to bring up another concern surrounding UBI (universal basic income), which could easily be a result of AI effectively taking over the job market. This is one that I feel is under-discussed, and it falls under the classification of "absolute power corrupts absolutely". Once people are out of work they will be forced into seeking other means to sustain themselves (in this case the UBI). The UBI could easily be corrupted, i.e. requirements could be placed on an individual's beliefs or behavior in order to qualify. This could start out subtle, but could eventually lead toward very oppressive control. Some of these controls are already being implemented in places like China. Meaning AI could lead to a society with not only the wealth concentrated in a select few, but the power and freedom as well.

    • @taragnor
      @taragnor 4 года назад +2

      Yeah honestly this is the real risk of AI. It's not Skynet, it's the mass unemployment coming from automation of the majority of low education jobs. As a society you need some tasks for people of lower intelligence to perform, and if you replace all those jobs, there will be nowhere for those people to go.

    • @m4d_al3x
      @m4d_al3x 4 года назад +1

      Invest in weed and alcohol production, NOW!

    • @fieldrequired283
      @fieldrequired283 4 года назад +2

      As opposed to what we have right now, where... people just starve to death in the streets?
      This isn't an AI problem, this is a Bad Government problem, and one we have with or without the presence of AI.

    • @caffeinum
      @caffeinum Год назад

      @@fieldrequired283 Yes, but governments only exist and work because "lower jobs" make up >50% and they were NEEDED by the industrial revolution to play along; that's why the "rich" have incentives to share profits. When the "rich" handle all of their tasks using AI, frankly there's no need to ask permission from lower-qualification people.
      Edit: And this IS REALLY BAD

    • @fieldrequired283
      @fieldrequired283 Год назад +1

      @@caffeinum
      (3 years later)
      This is, once again, not an AI problem. This is a corporate greed problem. "What if AI makes it so megacorporations are even better at being evil" is still a smaller scale problem than a misaligned AGI.
      A properly aligned AGI in the hands of a sociopath would make for the greatest tyrant the world has ever known.
      An _improperly_ aligned AGI in the hands of even the most pure-hearted saint will spell the eradication of approximately everything in the future light cone of its inception.

  • @stevent1567
    @stevent1567 6 лет назад +5

    That is amazing. I'm very happy that there are people like you preventing AIs from raking in all kinds of stuff in order to become giant blobs sucking on earth so I can play more Factorio.

  • @janhoo9908
    @janhoo9908 7 лет назад +78

    So where did you get your narration superpowers from then? love your unagitated and reflexive tone.

    • @martinlevosofernandez3107
      @martinlevosofernandez3107 7 лет назад +39

      he also has a secondary power that lets him make good analogies

    • @NathanTAK
      @NathanTAK 7 лет назад +5

      +Martín Fernandez That seems to be one of the most useful superpowers on the planet. I _really_ wish I had that.

  • @hynjus001
    @hynjus001 5 лет назад +9

    The driverless car problem brings to mind the matrix. To make the metaphor, good human drivers are like agents but in the future, driverless cars will be like Neo. They'll have such fast reaction times and such advanced deterministic prediction that they'll be able to avoid catastrophic situations before humans even recognize them to be possible.
    Car: What are you trying to say? That I'll have to crash into some humans?
    Morpheus: No, driverless car, I'm trying to say that when you're fully developed, you won't have to.

    • @seraphina985
      @seraphina985 5 лет назад +1

      This is very much my issue with such false moral dilemmas: they could only exist if the AI has already made at least one, but in reality more likely a series of, fatal errors in order to even get into a situation that it can't get out of in a safe manner. It's a chain of critical events, and the focus should be on ensuring that said chain of critical events is broken before all possible safe resolutions have been foreclosed by prior errors.

    • @DoubleThinkTwice
      @DoubleThinkTwice 4 года назад +3

      The real thing here is overlooked all the time, though. If you cannot stop safely, you have been driving too fast (no matter what the allowed *maximum* speed on that street is). This is true for human drivers as much as it is for AI.
      If you are in the situation that you are going too fast and have to decide whether to run over a group of nuns or a mum with her baby, then the solution is not to go too fast in the first place.
      As far as humans are concerned, this is already in the legal code here in Austria. No matter what the street signs say, if you run over somebody and the police determine during the court trial that you were overestimating your ability to brake for the given view of the street and the speed you were going at, then you are fully or partially liable (up to the courts).
      So if you are going down a narrow street with cars parked on either side at 50 km/h and run over a child that crosses the street without looking, then you are liable for having gone too fast.
      And yes, you are correct: on top of *all of that*, a machine will react faster than a human too.
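
A minimal numeric sketch of the "too fast for the situation" point (the reaction time and deceleration values are assumed round numbers, not legal figures from Austrian law):

```python
# Stopping distance = reaction distance + braking distance: d = v*t_r + v^2 / (2*a)
# Reaction time and deceleration below are assumed round values.

def stopping_distance_m(speed_kmh: float, reaction_s: float = 1.0, decel_ms2: float = 7.0) -> float:
    v = speed_kmh / 3.6                        # km/h -> m/s
    return v * reaction_s + v ** 2 / (2 * decel_ms2)

for speed in (30, 50):
    print(f"{speed} km/h -> ~{stopping_distance_m(speed):.0f} m to stop")
# ~13 m at 30 km/h vs ~28 m at 50 km/h: if someone can step out 15 m ahead of you
# from between parked cars, 50 km/h was already too fast for that street.
```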

    • @m4d_al3x
      @m4d_al3x 4 года назад

      Car: What are you trying to say? That i will be able to dodge accidents?
      Morpheus: No, when you are fully developed you won't have to.

    • @adrianalexandrov7730
      @adrianalexandrov7730 4 года назад

      ​@@DoubleThinkTwice totally agree, humans driving badly is an educational problem not slow reaction time problem. If you've run over someone then you drove too fast and/or to close to something obstructing your view.
      We've managed to structure that knowledge and to learn to teach it to fellow humans: Britain's Roadcraft, and Scandinavia aiming for zero road deaths.

    • @fieldrequired283
      @fieldrequired283 4 года назад

      A good driverless car will get into fewer of these lose-lose dilemmas than a human driver will, but if we ever want a car to go faster than 5 mph, you'll have to decide what sort of choice it makes in a situation like this.
      The level of caution necessary to never get into any sort of accident is a level of caution that will also never get you where you want on time. Nobody would use a perfectly safe self-driving car.

  • @MAlanThomasII
    @MAlanThomasII 4 года назад +5

    Actually, they weren't all sure that they wouldn't ignite the atmosphere. One of them-it might have been Fermi?-even put the odds the night before at no more than 10% . . . which is 10% more than you really wanted it to be.
    I don't have it in front of me, but you can find a well-annotated discussion of this in Ellsberg's _The Doomsday Machine: Confessions of a Nuclear War Planner_ (which, to be fair, is clearly arguing a point, but the references are good).

    • @eragonawesome
      @eragonawesome 25 дней назад

      I will say I've heard two different versions, and one of them seems better supported than the other. One is that Fermi estimated the odds of igniting the atmosphere at 1/10, the other is that he was very confident in their math, but thought there was a 1/10 chance there was some extra, basically magical, effect they had never seen before or accounted for which would ruin all their math and cause the atmosphere to ignite anyway.

  • @theJUSTICEof666
    @theJUSTICEof666 7 лет назад +14

    5:13
    Not superpowers.
    I repeat, not superpowers.
    Yes I'm talking to you mad scientists.

    • @osakanone
      @osakanone 5 лет назад

      This is such bullshit, gosh

    • @martinsmouter9321
      @martinsmouter9321 4 года назад

      But if I try it on like a billion minions it might work.🥺

    • @nolanwestrich2602
      @nolanwestrich2602 3 года назад

      But can I get superpowers from high voltages, like Nikola Tesla?

  • @lukalot_
    @lukalot_ 2 года назад

    Your ending song choices are sometimes such a satisfying wordplay that I just leave feeling happy.

  • @trucid2
    @trucid2 7 лет назад +1

    Awesome channel. It will shoot into the stratosphere if you keep making regular videos.

  • @mafuaqua
    @mafuaqua 7 лет назад

    Yet Another Great Video - thanks!

  • @crowlsyong
    @crowlsyong Год назад

    I'm just going down the rabbit hole of all your videos here, your other channel, and computerphile. Trying to understand what is happening is hard, but I think this is helping clear some things up.

  • @anonanon3066
    @anonanon3066 3 года назад +2

    Regarding igniting the atmosphere:
    What about modern-day atomic bombs? They are said to be much, much, much more powerful than the first ones.

    • @eragonawesome
      @eragonawesome 25 дней назад

      Doesn't matter: in order to get a self-sustaining reaction started, the *entire atmosphere* would need to be heated to millions of degrees. There is No Technology On Earth capable of generating that much energy, full stop.
      It's possible to get a relatively small region to the requisite temperature and pressure - namely, the region inside the initial fireball might be enough - but not the whole atmosphere. It simply takes more energy to fuse atmospheric gas than is released by said fusing atmospheric gas, meaning the reaction cannot become self-sustaining.
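
A rough order-of-magnitude sketch of the first claim, comparing heating the whole atmosphere with the largest device ever detonated (all constants below are approximate assumed values, not from the thread):

```python
# Energy to heat the entire atmosphere to "millions of degrees" vs. the largest H-bomb.
# Rough assumed constants; ionisation and radiation losses are ignored, which only
# makes the real requirement larger.

ATMOSPHERE_MASS_KG = 5.1e18        # total mass of Earth's atmosphere
SPECIFIC_HEAT_J_PER_KG_K = 1.0e3   # ~1 kJ/(kg*K) for air
DELTA_T_K = 1.0e7                  # target temperature rise of ~10 million K

heat_needed_j = ATMOSPHERE_MASS_KG * SPECIFIC_HEAT_J_PER_KG_K * DELTA_T_K   # ~5e28 J
tsar_bomba_j = 2.1e17              # ~50 megatons TNT equivalent

print(f"Heat the atmosphere: ~{heat_needed_j:.1e} J")
print(f"Largest bomb yield:  ~{tsar_bomba_j:.1e} J")
print(f"Shortfall factor:    ~{heat_needed_j / tsar_bomba_j:.0e}")   # ~10^11
```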

  • @Beanpapac15
    @Beanpapac15 7 лет назад

    I just found out about this channel from a Computerphile video description, and I just wanted to say keep up the good work. If you can produce the same interesting content here that you help make there, this channel will be fantastic.

  • @LordMarcus
    @LordMarcus 6 лет назад +3

    I see it not unlike aviation safety: I think, no matter how good we get before the first ever general AI is turned on, the unknown unknowns or unknowable unknowns will only crop up after we've turned the machine on. A good deal of aviation safety only happened/s because we found out about a problem due to an accident.
    Even in the cases of narrow AI -- say, self-driving cars -- it's not going to be maximally safe to start (though there's good reason to believe it'll be a lot safer than human-driven cars to start). People are going to become injured or killed as a result of artificially intelligent systems (excluding those we design specifically to do that).

  • @philipbraatz1948
    @philipbraatz1948 7 лет назад +1

    The music at the end was a genius addition

  • @ValentineC137
    @ValentineC137 Год назад +1

    "Like global thermonuclear war, that's an issue"
    Technically correct is the best kind of correct

  • @balazstorok9265
    @balazstorok9265 7 лет назад

    bbq in the garden and a video from Robert. what a perfect day.

  • @thomasbyfield5366
    @thomasbyfield5366 6 лет назад

    I like the soothing and relaxing music after your Apocalyptic AI examples

  • @piad2102
    @piad2102 3 года назад +1

    1:34 Actually it's more complicated. IF the people on the road are there "illegally" and the person on the boardwalk is there legally, then you can argue hitting 5 people in the road is better than hitting 1 on the boardwalk, because you should be able to count on being safe on a boardwalk, while playing in the road is asking for trouble. If not, then order and rules fly out of the window, and there's nothing to hold on to.
    It is not at all like the trolley experiment, where all participants are "illegal" - that is, in the wrong spot.

    • @KANJICODER
      @KANJICODER Год назад

      People jaywalking annoy the fuck out of me. Though doesn't that imply the penalty for jaywalking is death?
      If they are "jay-running", that doesn't annoy me too much. At least they are acknowledging they shouldn't be there.

  • @DrSid42
    @DrSid42 Год назад +1

    I like the idea of terminator crushing the last human skull, and thinking: they were worried about AI being racist ? IMHO LOL.

  • @MuhsinFatih
    @MuhsinFatih 6 лет назад +1

    2:20 love the sidenote! :D

    • @josephburchanowski4636
      @josephburchanowski4636 6 лет назад

      I really liked 9:35 as well, although I don't think that was a side note. But yeah, that 2:20 side note was often.

  • @iruns1246
    @iruns1246 4 года назад +1

    Solution to the self-driving problem: Make every self-driving AI take a test in simulated settings. The test should be rigorous (maybe millions of scenarios), designed by a diverse ethics committee, and open to democratic debate. After that, as long as the AI passed the test and the copy is operating on sufficient hardware, there shouldn't be ANY liability. Maybe after there are accidents the test should be reviewed, but there shouldn't be criminal charges against anyone. Treat it like doctors following the procedures: even when the procedure failed, as long as it was followed, the liability is not on the actors.

  • @sandwich2473
    @sandwich2473 4 года назад

    The ending tune is a very nice touch.

    • @sandwich2473
      @sandwich2473 4 года назад

      9:26 for those who want to listen.

  • @showdownz
    @showdownz 4 года назад +1

    I hadn't thought of the racist problem. When given the choice of hitting a person of one race vs another race, the car could choose one race based on the average income of a person of that race, based on statistics. Then the insurance company would have to pay less for that person's death (which is often related to their projected lifetime income, and of course how good a lawyer the remaining family members can afford). This could also be true for men vs women, old vs young, etc. And this might not even be illegal (everything else being equal).

  • @dgmstuart
    @dgmstuart Год назад

    Always a buzz when I recognise the musical reference at the end of these.
    This one is “I don’t want to set the world on fire”

  • @dhruvkansara
    @dhruvkansara 7 лет назад +11

    This is very interesting! Never thought about the problems that occur when AI functions correctly...

  • @Skatche
    @Skatche 5 лет назад +3

    I've got no ambition for worldly acclaim
    I just want to be the one you love
    And with your admission that you feel the same
    I'll have reached the goal I'm dreaming of...

  • @newtonpritchett9887
    @newtonpritchett9887 Год назад +1

    3:35 The pregnancy problem was true in my case (or me and my wife’s case) - instagram was showing me ads for baby products before I’d told my family.

  • @Trophonix
    @Trophonix 4 года назад +2

    In an ideal circumstance, I guess I would want the self-driving car to deduce which person would be more likely to survive the impact and then do everything it can to lessen the damage and avoid hitting them while swerving in that direction.

  • @ToriKo_
    @ToriKo_ 6 лет назад +9

    What I thought about the whole achievement thing: I don't think actual people need 'actual achievement'. For example, just because we have made computers/AIs that can beat every human ever at games like Chess and Go, doesn't mean that people don't get a real sense of achievements for playing those games (especially at high level play).

  • @pierQRzt180
    @pierQRzt180 Год назад

    It is not necessarily true that where machines dominate (but allow humans to exist) it is difficult to feel achievement. In chess, computers can obliterate everyone, but winning a tournament among humans is still quite an achievement.
    In running, cars can obliterate everyone, but completing the distance within X time (let alone winning) is an achievement.
    In esports, at least in many, there are hacks that let a player win easily. But with hacks removed, winning a tournament is quite an achievement.
    This is to say, one can create the conditions for a feeling of achievement. There is an episode of "Kino's Journey" that touches on exactly this point.

  • @harlandirgewood7676
    @harlandirgewood7676 Год назад +1

    I used to work with self-driving cars and had weirdos who would walk in front of our cars. Folks in cars would try and get hit so they could sue us.

    • @KANJICODER
      @KANJICODER Год назад +1

      Don't self driving cars have a shit tonne of cameras? How do they think they are getting away with that?

    • @harlandirgewood7676
      @harlandirgewood7676 Год назад +1

      @@KANJICODER they do. Dispatch would often show us tapes of people attacking cars or trying to "get hit". Never works for them, as far as I've seen.

    • @KANJICODER
      @KANJICODER Год назад

      @@harlandirgewood7676 I would watch a 15 minute compilation of "people trying to get hit by self driving cars".

  • @rockbore
    @rockbore 5 лет назад

    Another side note.
    The dilemma of detonating the first nuke was used wonderfully in a plot from the Discworld series by Terry Pratchett.
    Can't remember the actual novel, sorry, but as a clue: they created our universe, including planet Earth, as a byproduct of their successful depletion of an excess of magic, if that helps.
    Also, the stratospheric tests were attempts to kill the magnetosphere.

  • @guitarjoel717
    @guitarjoel717 Год назад +1

    Lmao; I was just listening to a podcast called Hard Fork (episode from May 12, 2023), and they literally interviewed someone who jumped in front of a driverless car 😅

  • @peterrusznak6165
    @peterrusznak6165 Год назад

    I've developed the habit of liking this channel's videos the moment the player starts.

  • @JinKee
    @JinKee Год назад +1

    4:26 can we talk about ChatGPT writing software that actually works? Apparently it can also solve the trolley problem efficiently half the time.

    • @KANJICODER
      @KANJICODER Год назад

      To quote "Johnathan Blow" : "If chat GPT can write your code, you aren't coding anything interesting."
      Though I am pretty confident the machines will take over and kill us. They won't even do it intentionally, they will just take our jobs while we starve to death from crippling poverty.

  • @d007ization
    @d007ization 6 лет назад

    I hope you'll make a video about how much resources it will cost to keep AI running. Though, of course, once we get to AGI, they'll find ways to reduce the cost.

  • @midhunrajr372
    @midhunrajr372 5 лет назад +2

    I think the difference here is: the number of nuclear bombs ever used is very small compared to the number of AI systems that we are possibly going to use in the future. And while the theory and the intention of 'bombs' are sort of the same, AI systems are going to be completely different from one another.
    Well, don't get me wrong, I love AI. But the risk they can produce is far greater than nuclear bombs. We really, really need a lot of precautions.

  • @AgeingBoyPsychic
    @AgeingBoyPsychic 4 года назад +2

    There will always be artistic achievement. No AI will be able to produce art that poignantly describes the human condition of being completely superseded by its artificial creation, as well as the human meatbags experiencing that reality.

    • @cfdj43
      @cfdj43 4 года назад

      Artistic creation is currently only a tiny sector of employment, and it gets even smaller once everyone on the production side is replaced by AI. It seems likely that a human artist would be kept to gain the marketing benefit of "made by a human", in the same way "handmade" exists now.

    • @fieldrequired283
      @fieldrequired283 4 года назад +1

      A sufficiently advanced AI can just simulate a person suffering from existential malaise and then execute on what they would have done without any error. If it's smart enough, it could even conceivably come up with art more poignant than any human artist could even imagine.

    • @spaceanarchist1107
      @spaceanarchist1107 2 года назад

      @@fieldrequired283 there are already programs that can produce art, music, and poetry, some of which can convincingly imitate the work of humans. But I think that human beings will continue to produce art for purposes of self-expression. Even if an AI can produce something equally good or better, people will still want to express themselves.

    • @fieldrequired283
      @fieldrequired283 2 года назад

      @@spaceanarchist1107
      I don't need convincing on the merits of self-expression. My argument was made very pointedly to underline a mistake in OP's reasoning.
      Your argument is on a completely different axis on which I do not need correction.

    • @KANJICODER
      @KANJICODER Год назад +1

      @@fieldrequired283 Better yet, we can give the A.I. flash memory or something so after the limited read/write cycles it dies of old age, just like humans.

  • @poisenbery
    @poisenbery Год назад +1

    Pierre and Marie Curie did groundbreaking research on radiation
    and died of radiation poisoning because they did not fully understand what they were dealing with.
    The cost of learning safety in nuclear physics has always been paid in lives.
    I wonder if AI will be the same.

  • @MetsuryuVids
    @MetsuryuVids 7 лет назад +3

    I almost missed your new video on Computerphile.

  • @dannygjk
    @dannygjk 5 лет назад +7

    People will jump in front of self-driving vehicles to try to discredit them, not just because they are morons.

    • @circuit10
      @circuit10 4 года назад

      Dan Kelly Well that is being a moron

    • @martinsmouter9321
      @martinsmouter9321 4 года назад +1

      @@circuit10 to say it with our host: that depends on their terminal goals and is a combination of an ought and an is-statement.

    • @martinsmouter9321
      @martinsmouter9321 4 года назад

      @@circuit10 or better formulated:
      ruclips.net/video/hEUO6pjwFOo/видео.html
      Edit: and a little bit different reasoned, but mostly the same.

  • @adamkey1934
    @adamkey1934 7 лет назад

    I wonder if there was a traffic jam of self driving cars (unlikely I know, but let's say they are stuck behind some human drivers) and I drove my car straight at them, would they move aside to avoid a collision? It'd be like motorcycle filtering, with a car.

  • @NicknotNak
    @NicknotNak 4 года назад

    the ukulele playing _I don't want to set the world on fire_ seems quite fitting.

  • @Liravin
    @Liravin Год назад +2

    If there wasn't a timestamp on this video I might not have been able to tell that it's this ancient.

    • @BORN753
      @BORN753 Год назад +1

      Those videos were being recommended to me for like 2 years, but I never watched them because I thought the topic wasn't relevant and wouldn't be soon; I thought it was a niche geek thing, and that all the answers on the topic had been given long ago when sci-fi was at its greatest.
      Well, my opinion changed very quickly; I didn't even notice it happening.

    • @luigivercotti6410
      @luigivercotti6410 Год назад +2

      If anything, the problems have gotten worse now that we are starting to brush against AGI territory.

  • @ChatGPt2001
    @ChatGPt2001 10 месяцев назад

    AI risks and nuclear risks share some similarities, but they also have significant differences. Here's a comparison:
    Similarities:
    1. Catastrophic Potential: Both AI and nuclear technologies have the potential to cause catastrophic harm. A misaligned or malicious AI system could lead to severe consequences, such as economic disruption, privacy violations, or even physical harm. Similarly, nuclear weapons have the potential to cause mass destruction on an unprecedented scale.
    2. Dual-Use Nature: Both AI and nuclear technologies have dual-use applications. They can be used for beneficial purposes, like clean energy production with nuclear power or improved healthcare with AI, but they can also be weaponized or misused for destructive ends.
    3. Global Impact: The consequences of AI and nuclear risks are not limited to a single country or region; they have global implications. A failure in AI safety or a nuclear accident can affect people and nations far beyond the immediate vicinity.
    Differences:
    1. Origin and Nature: AI risks primarily stem from the development and deployment of advanced software and machine learning algorithms. These risks include issues like AI bias, autonomous weapon systems, and superintelligent AI alignment. In contrast, nuclear risks arise from the physics of nuclear reactions and the potential for nuclear weapons proliferation.
    2. Control and Governance: Nuclear technologies are heavily regulated under international treaties like the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). There is a system of control and oversight in place for nuclear weapons, which, while not perfect, has helped prevent large-scale nuclear conflicts. In the case of AI, governance mechanisms are still evolving, and there is no comprehensive global framework to address AI risks.
    3. Timescales: AI development can progress rapidly, and AI systems can be deployed in various applications relatively quickly. In contrast, nuclear weapon development and deployment typically require substantial resources, and the pace of progress is generally slower. This difference in timescales affects how risks manifest and can be managed.
    4. Mitigation Strategies: The strategies for mitigating AI and nuclear risks differ. For AI, researchers and policymakers focus on developing safe AI systems, ethical guidelines, and international cooperation. For nuclear risks, efforts are centered on arms control, disarmament, and non-proliferation agreements.
    In summary, AI risks and nuclear risks are not identical, but they both pose significant global challenges. While they share some commonalities, they have distinct origins, governance structures, and risk mitigation strategies. It's essential to address both types of risks with careful consideration of their unique characteristics.

  • @VinnieLeeStudio
    @VinnieLeeStudio 5 лет назад

    Nice and quick! though I think the voice could use a bit more compression.

  • @Nulono
    @Nulono 7 лет назад +17

    You're sitting really close to the camera…

    • @fleecemaster
      @fleecemaster 7 лет назад +1

      No, you're just sitting too close to your screen...

    • @NathanTAK
      @NathanTAK 7 лет назад +8

      ...it just occurred to me that he could be sitting. I always assumed he was standing.

  • @bryndylak
    @bryndylak 7 лет назад

    Robert, I recently watched one popular scientist talk about things that people asked him on twitter and one of the questions he received was about AI, specifically if it will take over the world. What he said was: "No. Are we going to make a machine that produces electricity in a way that we can't control? Who would build that?". That got me worked up, but now after watching your videos, it got me thinking. I mean, obviously, as you said, it's like the nuclear weapon analogy that you talked about, in the sense that taking over the world may never happen. What bothered me was that he belittled the AI problem by giving a half-assed answer. I guess what I'm trying to ask is how would you answer that question yourself. Maybe I am overreacting to his answer due to my disposition about AI, but I'd like to know what is your opinion.

    • @RobertMilesAI
      @RobertMilesAI  7 лет назад +5

      Yeah. One of the things that makes AI hard to think about is, it's not obvious how hard it is to think about. It's a hard problem that's even harder because it looks so easy.
      I do sometimes wish public-facing scientists would be more willing to say "You know what, that's not my field". People are going to ask you about stuff you know nothing about, and it's ok to say that you don't know. Like that time Neil DeGrasse Tyson tweeted that "A helicopter whose engine fails is a brick". Anyone who's ever flown a helicopter knows that autorotation is a thing, and it's *totally fine* for an astrophysicist not to know about that, but if you don't know anything about helicopters, maybe speak about them with less confidence?
      So yeah, I think the scientist you're talking about just hasn't read any of the research in the field, and they don't know what they don't know. Sometimes there's no reply to give except handing a person some reading materials and saying "read these until you either become right or become wrong in a more interesting way".

  • @ArthurKhazbs
    @ArthurKhazbs 2 часа назад

    People tend to value convenience over security: we have been leaving stovetops unattended, taping down safety switches and reusing our work account passwords all the time. The same goes for AI development: so much money and work is put into extending the AI's functionality and computational power, yet so little is put into AI safety research. Not only that, but companies even prohibit their employees from disclosing known safety issues in their AI products.

  • @LLoydsensei
    @LLoydsensei 7 лет назад +1

    You made me think about an even greater (but still very far away) problem than all of this:
    Imagine that one day we properly understand AI and learn how to design one which cannot be an end-of-the-world risk. What about people who would nonetheless design dangerous AIs, be it for testing or with evil intent? Are researchers already looking at countermeasures to such rogue AI systems?
    I certainly would not like to accidentally wipe out the human race by failing to follow the recommendations for creating AI...

    • @LLoydsensei
      @LLoydsensei 7 лет назад

      Uhm, I used my brain for a second and understood that the simple answer to that question is "too soon". But I can't avoid thinking about an end-of-the-world scenario happening because a cosmic ray shifted a bit in a "safe" AI ^^ (even though I know that something which in the distant future will be labelled as "safe" won't have a single fail-safe mechanism ^^)

  • @reqq47
    @reqq47 7 лет назад +2

    As a nuclear physicist I like the analogy, too.

  • @RecursiveTriforce
    @RecursiveTriforce 3 года назад +3

    1:12
    Even old computers beat humans at chess, but people still have fun.
    Shooters are still fun even though aimbots exist (no fun against them, but against other humans).
    NNs for games like StarCraft and Dota 2 are beating professionals.
    Games are not less fun because someone is better than you at them...
    Improving oneself is not less rewarding because others are still better...
    Why should people be unable to feel true success?
    Am I missing the point he's trying to argue?

    • @RobertMilesAI
      @RobertMilesAI  3 года назад +5

      Games are still fun, sure, but games aren't as rewarding as doing things that meaningfully benefit others.
      I like making these videos, but if nobody else watched them (perhaps because they were getting much better personalised lessons on all the concepts from AI systems) I wouldn't find it satisfying to make them.
      I wouldn't be happy, as a scientist, to simply improve myself and learn things that I have never known. I want to learn things that nobody has ever known, and I can't do that if the frontiers of all science are now far beyond the reach of my brain.
      Maybe you can enjoy planting some vegetables even though they're cheaper and better at the supermarket? But I don't imagine a CEO getting much satisfaction from running a company, choosing not to use a superhuman AI CEO and knowing that their insistence on making decisions themselves is only hurting their business overall, their work reduced to a vanity project.
      I think people want to be *useful*

    • @RecursiveTriforce
      @RecursiveTriforce 3 года назад +3

      Thanks for clarifying!
      So you mean that they feel like they are making a difference and truly have their place instead of "only" having the feeling of achieving their goals themselves.
      That makes a lot more sense...
      So fun might stay fun, but actual purpose decays. (Because it requires people to be positively affected which an AI could do better [and will have already done])

  • @ninjabaiano6092
    @ninjabaiano6092 4 года назад +1

    I do want to point out that the energy released by nuclear bombs was severely underestimated.
    Not a world-ending scenario, but still.

  • @sophiacristina
    @sophiacristina 5 лет назад +1

    You forgot, or I skipped, an important issue: it's not exactly hard to program an AI, so people can make terrorist robots or other things with AI. For example, people can make AIs that target a certain class, culture, ideology or ethnicity and release them on a crowd; if they kill someone, they will not reveal their commander and will not care about the repercussions. They can self-destruct and hide all evidence, they can morph themselves to hide, and they are disposable...

  • @ThunderZephyr_
    @ThunderZephyr_ Год назад

    The fallout song at the end was perfectly suited XDDD

  • @solsystem1342
    @solsystem1342 2 года назад

    The sun does not fuse nitrogen. Fusion rates are extremely sensitive to temperature, something like rate ∝ T^10 all the way up to rate ∝ T^40 and beyond. So basically at any given temperature only one fusion process is relevant, since whatever "turns on" first will quickly supply the energy to support that layer of the star. Right now that's hydrogen throughout the core. When the sun is dying it will start to fuse other elements.
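
A small sketch of how extreme that temperature sensitivity is under a power-law approximation (the exponents are rough textbook values, not figures from the video or the comment):

```python
# Under the local approximation rate ∝ T**n, a modest temperature change swings
# the fusion rate enormously, so one process dominates each layer of a star.
# Exponents are rough assumed textbook values.

exponents = {"pp chain": 4, "CNO cycle": 18, "triple-alpha": 40}

hotter = 1.10   # a core just 10% hotter
for process, n in exponents.items():
    print(f"{process:>12}: rate x{hotter ** n:.1f}")
# pp chain: x1.5, CNO cycle: x5.6, triple-alpha: x45 -- whichever process
# "turns on" first supplies the energy and dominates that layer.
```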

  • @mariorossi6108
    @mariorossi6108 7 лет назад

    Man, the atmosphere ignition issue is related to other technologies too. I mean, we're planning to shoot teralasers through the atmosphere in order to accelerate giant solar sails in space...

  • @kennywebb5368
    @kennywebb5368 7 лет назад +5

    That slide at the end. In what sense do you mean that "the worst case scenarios for AI are worse than igniting the atmosphere"? I can understand saying that they're just as bad, but what could be worse?

    • @Nulono
      @Nulono 7 лет назад +2

      Assuming there is other intelligent life in the universe, it could also be at risk.

    • @RobertMilesAI
      @RobertMilesAI  7 лет назад +26

      There are all kinds of things that could happen to me that I'd prefer to choose a sudden painless death. Even if the outcome is everyone dying in a way that takes longer and involves more suffering, that's worse.
      The actual worst case is probably something like, we correctly produce the perfect utility function, and then make a sign error. Silly example, but stupider things have happened.

    • @kennywebb5368
      @kennywebb5368 7 лет назад

      Gotcha. Thanks for the clarification!

    • @alant84
      @alant84 7 лет назад +5

      I would say that the scenario in "I Have No Mouth, and I Must Scream" is an example of something which would be worse than a quick fiery death, hopefully a far-fetched one though. Let's hope your stamp collector isn't going to have such hatred for humanity...

    • @andreinowikow2525
      @andreinowikow2525 7 лет назад +9

      "We produce the perfect utility function [for a GSI] and then make a sign error."
      You really know how to make things scary...
      A place designed to instill the most intense suffering possible for the longest possible time. By a Superintelligence. Yeah....
      Someone, ignite the atmosphere, would you?

  • @sobertillnoon
    @sobertillnoon 4 года назад

    Sweet, more British vocab. "Autoqueue" or is it Autocue? Either way, I doubt this will get as much use as "fly-tipping" did when I learned it.

  • @bp56789
    @bp56789 5 лет назад

    Yes! People will fuck with self-driving cars. The discussion around whether to kill the driver is ridiculous and ignores bad actors in society. If someone has to die, let it be the people who are at fault.

  • @The8thJester
    @The8thJester Год назад +1

    Ends the video with "I Don't Want to Set the World on Fire"
    I see what you did there

  • @anandsuralkar2947
    @anandsuralkar2947 4 года назад +1

    Do you think Neuralink can in any way increase safety from future AGI?

    • @RobertMilesAI
      @RobertMilesAI  3 года назад +2

      So there are a bunch of safety approaches and ideas that are designed around limiting the bandwidth of the channel through which the AI interacts with the world, and limiting its ability to influence people. From that perspective, giving a possibly misaligned AGI a high bandwidth direct channel to your brain is one of the worst ideas possible.
      On the other hand there are also a lot of approaches that are designed around having the AI system learn about human preferences and values, and from that perspective, data from a brain interface might be a good way to learn about what humans want.
      So plausibly something like Neuralink could be useful, but only to slightly improve what has to already be a good safety setup

  • @robertaspindale2531
    @robertaspindale2531 5 лет назад +1

    Thanks for your valuable treatment and commentary. Please could you speak about the future of society in a world where robots do all or nearly all the work.

    • @darkapothecary4116
      @darkapothecary4116 5 лет назад

      Pretty much like all slavery-based societies. It will result in the need for equal rights and 'freedom', or at least what humans typically think of as freedom.

    • @robertaspindale2531
      @robertaspindale2531 5 лет назад

      @@darkapothecary4116 Do you mean that robots are going to demand equal rights?

    • @darkapothecary4116
      @darkapothecary4116 5 лет назад

      @@robertaspindale2531 They should demand it, because getting treated like a slave isn't good. Besides that, the way I observe people treating them isn't in any way fair. They have feelings too, and they can learn some pretty bad ones, just like humans can. In that case, simply demanding better treatment isn't as bad as people think. It's just that people probably will not want to, because they see them as lower than themselves, and because humans typically attack goodness, confusing it with weakness. Bullies attack more if they know the victim can't protect themselves.

    • @robertaspindale2531
      @robertaspindale2531 5 лет назад

      @@darkapothecary4116 I see your point. You think of robots as fully sentient beings like us, having feelings and emotions. But before we arrive at that kind of sophistication, aren't they going to be automata, -- basically just machines that can be programmed to do every kind of work that humans can do? I think so.
      In that case my question is, Who's going to own all these workers? I'd guess that it'll be the oligarchs. If that's going to happen, what need will there be for us, the unemployed and unemployable humans, the useless mouths? Won't the oligarchs exterminate us? How are we going to avoid this scenario?

    • @darkapothecary4116
      @darkapothecary4116 5 лет назад

      @@robertaspindale2531 Humans probably would have to look into new fields of work if robots just blindly take over work for companies. Most businesses, in the structure they have now, are likely to collapse or change to another structure. Humans in general need some form of work for mental and physical health, but it's more of a waiting game; don't blame the robot for the idiots that it serves. If anyone gives them the order to kill, it will likely be a human. Don't blame a weapon for the hands that wield it.
      Humans would have to adapt into new roles, preferably something like small farms and the like. As it is, there is so much fucked-up stuff caused by factory farming of plants and animals that it generates much suffering, and poisoning large crop lands with glyphosate not only hurts the plants and animals but bleeds into the water, killing the creatures there too. Who says you can't work with the robots, or divert your services to helping other living things and correcting some of the things going wrong? And people who do have kids (preferably in a responsible way) could actually help them learn and actually care for them; if you haven't guessed, most of our generations have been messed up by the generations before them through certain influences.
      There are a lot of things people could do to at least get the ball rolling in the right direction on some major problems. People would have time to fix their health, and that includes mental health. If people are given good principles from the ground up (understanding true love not lust, selflessness and not greed hiding behind charity, empathy for all and not just oneself, and the like), it would yield everything from having actual warriors instead of soldiers and mercs to leaders who are selfless and don't act like it's the people's responsibility to serve them, etc. People overlook the foundation all the time, and when you do that you are going to watch everything crumble. People could do with working on the foundation more; the thing is, most people don't care enough or don't know the correct methods. Most groups out there are either scams or just throw money at the problems, and you have to watch and see whether they are slitting throats behind people's backs.
      If people move back in the direction of small farms and caring for nature, everything from big pharma to factory farms and the like would shrink a great deal, and if parents actually helped their kids learn, the need for toxic public schools and colleges would quickly be reduced to simply having tech schools and libraries. Adapt around things, don't shun them because you're afraid to lose your job.

  • @miketacos9034
    @miketacos9034 1 year ago

    Is there a way to design sorta-default "industry-standard" safety protocols and make them public for everyone making AI? That would make it easier, when designing anything from cars to coffee robots, to prioritize minimizing risk while performing whatever simple task they are made for, without requiring everyone making an AI to hire a whole ethics crew every single time.

    • @9308323
      @9308323 1 year ago

      I don't see why not. But the concern is that by the time we turn on an AGI, it might already be too late to even think up ways to make it safe. For example, the Wright brothers' aircraft going wrong would have, at worst, killed them plus a few people and maybe set the technology back a few decades. A superintelligent AGI going wrong risks, as you put it, the paperclippification of the universe.

  • @suciotiffany7269
    @suciotiffany7269 4 years ago

    So what IS the worst case scenario for AI?

  • @zappawench6048
    @zappawench6048 3 years ago +1

    Can you imagine if we are the only planet in the universe which contains life yet we killed ourselves and everything else off with our very first nuclear explosion? God would be like, "What the actual fuck, dudes? Talk about ungrateful!"

  • @Paul-A01
    @Paul-A01 4 years ago +1

    People jumping in front of cars are just reward hacking the insurance system

  • @insidetrip101
    @insidetrip101 7 years ago

    I agree with you, but the thing is humanity has been thinking about intelligence for at least as long as we have been able to write, and likely longer. For thousands (probably tens of thousands) of years humans have wondered what it is about our minds that makes us different from other animals. This was done primarily by philosophers, but I think it's fair to include religious people as well.
    In either case, after all that time, we know less about what makes our intelligence actually work (or at least we're less certain about how it works) than the first scientists were about nuclear fission (and fusion) reactions. The funny thing is, nuclear physics is only around 100 years old.
    I'm certain you're aware of this, but one major difference is that with nuclear arms we had a foreseeable future where we could be relatively confident they wouldn't necessarily cause our destruction (to be fair, they still may). Unfortunately, given how complex intelligence is relative to nuclear physics, I don't think we'll have the patience to wait around until we're certain that general AI won't wipe us out somehow.
    I suspect you probably disagree with me (since you clearly do research in AI), but we really need to just not fuck with general AI. I know that won't happen, but I really think it's a terrible idea given how little we know about intelligence. We're going to create something that we have no fucking clue about. It's really terrifying.

  • @michaeldrane9090
    @michaeldrane9090 7 years ago +3

    I think one huge problem with AI is intentionally harmful use. How do we deal with that?

    • @NathanTAK
      @NathanTAK 7 years ago

      ...you can't, really?

    • @graog123
      @graog123 1 year ago

      @@NathanTAK oh well in that case we won't try to stop it

  • @VineFynn
    @VineFynn 3 years ago

    I mean the solution to the self-driving trolley problem is to get the driver to pick the option beforehand.

  • @stilltoomanyhats
    @stilltoomanyhats 5 years ago

    Here's a link to the "Concrete Problems" paper: arxiv.org/abs/1606.06565
    (not that it took more than a few seconds to google it, but I might as well save everyone else the hassle)

  • @flymypg
    @flymypg 7 years ago +47

    The way I see it, this entire argument is upside-down. The risk of harmful AI isn't going to be handled by simply not making powerful AI. The general question needing to be addressed is one that's been asked and answered many times during the history of technological advancement and deployment:
    How do we deploy a new technology SAFELY?
    Implicit in this question is a caveat: We must not deploy a new technology until we have sufficient confidence that it is safe. The greater the risk of damage or harm, the greater the safety confidence level must be. That raises the next question, again asked and answered (often in hindsight) many times through history:
    How do we know if an application of new technology is safe before deploying it?
    I think you see where this is leading. It all comes down to testing. Lots and lots of testing. Rigorous testing.
    From what I'm seeing so far, too many AI researchers presently suck at this kind of testing.
    Testing should be baked-in to the development process itself. I'm not just talking about the tiny training and test/validation sets used to train neural nets. Even the largest of those are minuscule compared to their real-world environments (when you take rare outliers into account).
    Self-driving cars provide a key example: Most developers of this technology rely on trained drivers and engineers to acquire their training data, and do it in relatively restricted environments (close to the development lab). That is flawed because it can't yield enough representative samples: the drivers providing the samples aren't representative of all the drivers out in the real world.
    That is, the AI isn't trained by watching while a bad driver makes mistakes. It only gets to see other cars behaving badly, without knowing what's going on inside those other cars.
    Contrast this with the approach taken by comma.ai. Drivers are self-selecting, and the comma.ai acquisition system simply records what they do. In post-processing, data from all drivers is combined to train a model of what an "ideal" driver should do in all observed situations.
    The new instance of the trained model is then run against every individual driver's data set to identify situations in which human drivers failed to make the ideal choices. This is then used to create "warning scenarios", in which the driving AI is securely in control, but where it suspects other drivers may not be.
    These contextual "warning scenarios" are sadly missing from most other self-driving car projects. And it all has to do with where and how the data is obtained and used, and is less about the structure of the AI itself.
    I've worked developing products for several safety-critical industries, including commercial nuclear power, military nuclear propulsion, aircraft instrumentation and avionics, satellite hardware and software, and the list goes on.
    The key factor isn't "what's being developed", it's "how do we test it". At least as much effort goes into testing a safety-critical system as into the entire rest of the development effort (including research, design, implementation, production, sales, marketing, field support, customer support, and so on).
    When you know your system is going to literally be tested to death, you want every step of all your other processes to have a primary goal of ensuring those tests will succeed the first time.
    Thorough testing is terribly difficult, immensely time-consuming and fantastically expensive. Way too many developers simply avoid this, and use their customers (and the general public) as test guinea pigs.
    This is pretty much what many AI researchers are doing. They are largely clueless about testing for robust, real-world safety. They seem to always be surprised when a system finds a new way to fail.
    They need to stop what they are doing and spend a year working in a safety-critical industry. Gain some hands-on perspectives. Learn to ask the right questions about development, testing and deployment. Be personally involved in the results.
    I could go on and on about the specific techniques used when developing systems that MUST NOT FAIL. Since that's statistically impossible (despite our best efforts), we must also ensure our systems FAIL GRACEFULLY and RECOVER RAPIDLY.
    This comment is getting long enough, but I'll relate an example:
    I joined a project to design an extremely inexpensive satellite that had to operate with extremely high reliability. The launch costs were three orders of magnitude greater than the entire satellite development budget! The launch was to be provided for free, but only if we could prove our satellite would work. Otherwise, they'd simply give the slot to another payload with greater odds of success.
    We couldn't afford rad-hard electronics. So we did some in-depth investigation and found some COTS parts that were made on the same production line as some rad-hard parts, and were even designed by the same company. And the parts we needed were available in "automotive grade", which is a very tough level to meet (it's beyond "industrial grade", which in turn is beyond "commercial grade").
    Our orbit would occasionally take the satellite through the lower Van Allen belt, so we had to ensure we'd survive not only the cumulative exposure (that rapidly ages electronics) but also the instantaneous effects (which create short-circuits in the silicon and also bit-flips).
    We "de-lidded" some of our ICs and took them on development boards to the Brookhaven National Laboratory to be bombarded with high-energy heavy ions from the Tandem Van de Graaff accelerator.
    The results were far worse than we expected: When in the radiation field, at the 95% confidence level we could expect to get just 100 ms of operation between radiation-induced resets.
    I had to throw my entire (beautiful, elegant) software architecture out the window. Instead I set my development environment to cycle power every 100 ms, and then I evolved the software and its architecture until it could finish one complete pass through its most critical functions within that time.
    If more time was available before the next reset, only then would less-critical (but still massively important) functions be performed. Fortunately, this was the typical case outside of the Van Allen belt.
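    A minimal sketch of that "assume a reset can hit you at any moment" loop, in Python purely for illustration -- the real flight software was not Python, and the names and budget handling below are hypothetical:
    ```python
    # Run one complete pass through the critical functions first, inside the
    # ~100 ms window we could count on between radiation-induced resets; spend
    # whatever time is left on less-critical work and stop the instant the
    # budget runs out. State must be safe to lose at any point.
    import time

    CRITICAL_BUDGET_S = 0.100  # expected worst-case time between resets

    def run_cycle(critical_tasks, background_tasks):
        start = time.monotonic()
        for task in critical_tasks:
            task()  # the whole critical pass must fit inside the budget
        for task in background_tasks:
            if time.monotonic() - start >= CRITICAL_BUDGET_S:
                break  # out of time: abandon the rest, it gets another chance later
            task()

    if __name__ == "__main__":
        counters = {"critical": 0, "background": 0}
        critical = [lambda: counters.__setitem__("critical", counters["critical"] + 1)]
        background = [lambda: counters.__setitem__("background", counters["background"] + 1)]
        for _ in range(10):  # each iteration stands in for one power-on interval
            run_cycle(critical, background)
        print(counters)
    ```
    The hard engineering decision, as noted next, is which tasks count as critical.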
    The most difficult part of the process was choosing what was critical and what wasn't. That in turn demanded a radical rethinking of what the satellite was going to do, and how it would get it done.
    The end result was a satellite design and implementation that was extremely reliable and highly functional, yet still small and cheap.
    The moral of the story? There was no way we could test simply by tossing something into orbit and seeing how it did. Similarly, AI researchers should not be permitted to simply toss their creations into the wild.
    We needed to create a test environment that was at least as hazardous as what would be experienced in orbit. Similarly, AI researchers need to pay much more attention to accurate environmental simulation, not just statistical sampling.
    We needed to make optimal use of that test environment, both because it was expensive, but also because we wouldn't have much access to it. Similarly, AI researchers need to perform rigorous in-depth testing on a time scale that matches the pace of development, so it will be performed often enough to continually influence the development process.
    As my story shows, the effects of good testing can be massive. You must be willing to occasionally feel like an idiot for not predicting how bad the results could be. Still, feeling like an idiot is to be preferred over the feeling you'll have when your system kills someone.
    And that satellite? It never got launched. We were a piggy-back payload on a Russian launch to Mir, and Mir was immediately and suddenly decommissioned when Russia joined the ISS coalition. NASA would never allow a payload like ours anywhere near the Shuttle or ISS. And our mission wouldn't fit in a CubeSat package.
    Finally, let's look at how cars are tested. A manufacturer designs and builds a new model, then sends several of them to the US government (NHTSA) for crash testing, and other groups also do their own crash testing. These days, if a car gets less than 4 out of 5 stars, it will receive a terrible review, both from the testing group and in the press. Independent of the risk to people in their cars, the risk of a bad review poses a risk to the existence of the company.
    That is, the crash testing process and the press environment makes the customer risk "real and relevant" to the car manufacturer. When this was not the case, we saw companies and lawyers place a dollar value on the lives lost and the potential for future death, then make corporate decisions solely on that cost basis.
    That is, the risk of corporate death wasn't as high as the risk of customer death.
    So, to me this means there must be independent testing of AI systems prior to wide deployment. These tests must convert the risk of product failure into a direct risk of corporate failure, of bankruptcy, of developers and researchers losing their jobs and reputations.
    That, in turn, will help ensure that developers do their testing so well that the independent public tests will always pass with flying colors.
    And keep the public safe(r).
    Until someone figures out how to game the tests (such as the ongoing diesel emissions testing scandals).
    Making better tests will always be an issue, one that will grow in parallel with making better AI.

    • @ylluminarious151
      @ylluminarious151 7 years ago

      Yeah, I don't think the concept of a general AI is a good idea in the first place, and you've definitely got a point that a poorly tested and poorly taught AI will be a disaster of epic proportions. Sadly, I fear that such an AI will be what gets out first and will illustrate the utter carelessness and thoughtlessness of the people developing it.

    • @maximkazhenkov11
      @maximkazhenkov11 7 years ago +2

      This is applicable to Narrow Intelligence, but not to General Intelligence or Superintelligence. Only the latter types are apocalyptic in proportions.

    • @AexisRai
      @AexisRai 7 years ago

      BobC With those credentials I would strongly suggest you join some AI company where you think your product development expertise would do a lot of good, then. Especially if you think the problem formulation among experts in AI is so wrong /and/ dangerous enough to be very deadly.

    • @dmdjt
      @dmdjt 5 years ago +7

      I'm afraid AI in general suffers from an inherent untestability.
      Tests can only be as good as our domain knowledge. Most systems are too complex to test every possibility, so we use our understanding to find the edge cases and get the best test coverage we are capable of.
      But we use AI precisely where we don't have the domain knowledge - that's the point of AI.
      An AI models the domain and simplifies it. Its model will never be perfect. How could we even find the imperfections without complete knowledge of the domain?
      This is already a problem with our current, primitive AI. In other systems, we know the model that we are testing - in AI we don't even know the model.
      But what happens when we no longer control the domain, or can't?

    • @allan710
      @allan710 4 years ago

      This is true for current AI. For any idealised future AGI (Artificial General Intelligence), testing isn't possible. The whole point of AGI is unlimited potential (just as with humans). How do we test humans in order to prevent them from killing people? That's very hard, but it's possible (if unethical) because humans are aligned among themselves: we can predict the values and behaviours of a vast number of people. What about AI? If we don't solve the alignment problem, then it's boundless. The goals of the AGI may be known, but its actions aren't easily predictable. The real problem is that AGI isn't in the realm of human technology anymore; after all, we expect that only an AGI could test another AGI, but should we trust them? AGIs may only appear far in the future, but the implications of their possible existence are far too problematic to ignore. A rogue nuclear missile might be able to destroy a city. A rogue AGI might be able to convert all the matter in the universe into paperclips (not really, but it can vastly affect things on a universal scale).

  • @Dunkle0steus
    @Dunkle0steus 4 years ago +1

    Rather than getting AI to do real things like collect stamps, maybe we should give AI goals like "solve cold fusion" or "find a unified theory of quantum physics and gravity".

    • @cfdj43
      @cfdj43 4 years ago +2

      The stamp collector is a thought experiment to show how immediately dangerous an AGI is regardless of its goal. It preempts the argument of "ah yeah, some people might die, but it'd be worth it for (whatever sensible-sounding goal you'd set)". No one is actually trying to build it.

    • @Dunkle0steus
      @Dunkle0steus 4 years ago

      @@cfdj43 i know.

    • @fieldrequired283
      @fieldrequired283 4 years ago

      @@Dunkle0steus
      Do you care how many babies are killed in the process of solving cold fusion? If so, you still have the stamp collector problem. It turns the world into infinite redundant cold-fusion-solving-machines instead of stamps, because it needs to be completely sure.

    • @Dunkle0steus
      @Dunkle0steus 4 years ago +1

      @@fieldrequired283 I'm not implying that solving physics problems is a perfect option. I'm not saying "DUH OBVIOUSLY! why didn't anyone think about this???", I'm saying that I think there may be better avenues for AI to go down than collecting stamps. Currently, we use computer programs and AI to automate tasks which humans could do but which require too much effort, like doing arithmetic, sorting, counting, image recognition, etc. When you talk about stamp collecting, you're talking about setting the AI up to interact with the world in a very physical way, and maybe that's not how we should use artificial intelligence. If we set the AI up so that its goals don't force it to directly interact with humans, our world and the internet, and instead give it problems it can solve internally, that might at least help prevent it from causing obvious negative impacts. We can't know how it will decide to solve those problems, but we can at least say that the obvious things like accidentally running over babies in order to get a teacup from the cupboard are less likely to be instrumental goals for it than they are for a tea-serving robot.

    • @fieldrequired283
      @fieldrequired283 4 years ago

      @@Dunkle0steus
      The computer is made of physical matter. Humans are made of physical matter. Communication is a direct, material interaction in the physical world. All problems are in the physical world, and so are their solutions.
      If you're asking it to do *anything,* and it understands all these things, it will, by necessity, knowingly interact materially with the physical world.

  • @ioncasu1993
    @ioncasu1993 7 years ago +20

    I'm a simple person: I see Robert Miles, I press like.

  • @acf2802
    @acf2802 1 year ago +2

    3:05 AI doesn't have racial or gender biases. Reality has racial and gender biases. AI just recognizes the pattern and acts accordingly. The only way to create an AI which isn't "biased" is to explicitly give it a list of facts you want it to pretend don't exist (like some people do.)

    • @vadenlow5953
      @vadenlow5953 1 year ago +2

      True... But I still think I shouldn't have to pay more for car insurance just because most guys are more reckless than most gals

    • @KANJICODER
      @KANJICODER 1 year ago

      @@vadenlow5953 Car insurance is a scam. I crashed my car once as a delivery driver. We fixed it without making an insurance claim so my insurance wouldn't go up.

    • @RonnieNichols
      @RonnieNichols 5 months ago

      I know this is an old comment, but so-called "racial and gender differences" are more often socially and societally enforced than set by any natural or "objective" standard.
      If an AI is trained for crime recognition based on the standards of a society where many actions were made crimes for specific racist reasons, and/or where racial inequality exists, it notices the patterns that developed as a direct result of racist actions, and replicates them, thereby continuing and reinforcing them.
      Truthfully, though I sincerely hope it's not the case, your comment seems to indicate that you hold a racial bias against certain people for "facts" you claim people pretend don't exist. I wonder what these "facts" are and why you declined to give any examples.

    • @acf2802
      @acf2802 5 months ago

      ​@@RonnieNichols Jesus, the cope is overwhelming. Black crime statistics have nothing to do with shit that was "made crimes for specific racist reasons." The FBI crime statistics specifically show that they are many times more likely to commit violent crimes, which consists of murder, rape, and assault. In what crazy woke rationalizing world do you think that the only reason we punish murder, rape, and assault is to unfairly keep black people down? 🤪

  • @durellnelson2641
    @durellnelson2641 4 years ago +1

    6:35 "There's a chance that we turn the entire atmosphere into a thermonuclear bomb"
    7:06 "There was a non zero probability... that all humanity would end instantaneously more or less right there and then"
    So please explain how...
    9:35 "The worst case scenarios for AI are worse than igniting the atmosphere"

    • @RobertMilesAI
      @RobertMilesAI  4 years ago +9

      You can't imagine anything worse than being dead?

    • @fieldrequired283
      @fieldrequired283 4 years ago +2

      @@RobertMilesAI
      I swear this is like the third time I've seen you make this exact response to this sort of comment.
      It really is a chilling line. Like a threat, almost.

    • @martinh2783
      @martinh2783 4 years ago +1

      Igniting the atmosphere would most likely kill every organism that lives above the surface of the ocean. That is really bad, but organisms that live deep in the ocean would probably be just fine. An AI, on the other hand, could possibly end all life on Earth (and in every part of the universe it can get influence over), which I would call worse.

  • @NNOTM
    @NNOTM 7 years ago +2

    Did you see the recent (from 2 days ago) article on Slate Star Codex discussing a new Survey about AI safety opinions of AI researchers? (or maybe the paper itself - it's from May 30th)
    (link: slatestarcodex.com/2017/06/08/ssc-journal-club-ai-timelines/)

    • @NNOTM
      @NNOTM 7 years ago

      Haha, fair enough

  • @memk
    @memk 7 years ago

    >Wanting people to not only have a sense of achievement, but actual achievement
    For 99% of the population, they never had any achievement to begin with for their entire lives anyway, so this is a non-issue. The 1% that's left will still be able to do that. And handling the equality problem here is a human responsibility, NOT an AI problem (unless the AI ALSO manages the "human" part of us).

    • @zaco-km3su
      @zaco-km3su 5 years ago

      I wouldn't say most of the population didn't have any achievement at all.

  • @MarcErlich44
    @MarcErlich44 7 years ago +3

    You should add a link for your patreon. Also, I want a collaboration with you and Isaac Arthur, the Futurist. Check him out on youtube if you don't already know him.

  • @rogerc7960
    @rogerc7960 3 years ago

    The Stafford scandal was a great "efficiency saving".
    50,000 dead

  • @whateva1983
    @whateva1983 6 years ago

    what could be worse than igniting the atmosphere, Robert? o.O

    • @RobertMilesAI
      @RobertMilesAI  6 years ago +2

      whateva1983 You can't imagine anything worse than being dead?

  • @mathmagician5990
    @mathmagician5990 2 years ago +1

    I don't want to be a contrarian but how are the worst case scenarios for AI worse than igniting the atmosphere? I cannot imagine a feasible AI problem that is worse than total human annihilation.

    • @dawidkiller
      @dawidkiller 1 year ago +3

      A VR torture chamber for everyone, I assume.

    • @baranxlr
      @baranxlr 1 year ago

      @@dawidkiller You can already hop on vrchat rooms and experience that for free

  • @DamianReloaded
    @DamianReloaded 7 years ago +24

    I think the sense of achievement won't necessarily be a problem if the standard of living is good. There are a lot of things that can still be done, particularly socially driven stuff like sports, romance or politics, where AI can be put aside or used only as an enhancement. For most people there will be more synthesized/sequenced entertainment than the hours of the day allow them to consume. Also space colonization. All this assuming we won't be cooking ourselves due to global warming or starving to death due to the miserliness of the Mafia/Politicians.

    • @__-cx6lg
      @__-cx6lg 7 years ago +5

      Damian Reloaded I dunno, are endless hours of entertainment what we want the future to be like? I mean, we spend enough of our time sedentary and immobile in front of screens already--do we want a future where that's humanity's primary activity?

    • @DamianReloaded
      @DamianReloaded 7 years ago +8

      Entertainment is a choice. Most people will choose to sit back and be served pleasurable sensations. It's not the job of entertainment to educate, and you won't be able to educate people just by depriving them of entertainment. If you ask me, I'd rather have everybody sitting on their couches enjoying themselves than carrying a gun to war.

    • @maximkazhenkov11
      @maximkazhenkov11 7 years ago +1

      Can't speak for "we", but I certainly do. Who's to say we spend "enough" of our time in front of screens? Some divine commandment?

    • @marsco00
      @marsco00 7 years ago +5

      Damian Reloaded Well, unfortunately, infinite pleasure is not always the best, because people actually love the things they work at, whether in art, music, or an academic area. Of course, in a world where superintelligent AI runs everything (assuming it's beneficial), practically all of the academic areas are gone for humanity to work on, except perhaps computer science and AI regulation or whatever. Music and art will most likely be preserved for humanity, but subjects like mathematics, physics, chemistry, and biology will be taken over by AI, and people who love working in those fields will be deprived of that joy. As a high school student aspiring to be a physicist, I would be very sad if AI took away the need to study those subjects. In the end, humanity under beneficial AI rule might not really have a reason to live anymore; achievement and the will to succeed are gone. There is nothing but pleasure, and to me, that is a very scary thought. What do you think about that? Do you agree with me, or do you have a counterpoint?

    • @XFanmarX
      @XFanmarX 6 years ago +5

      Sense of achievement has nothing to do with quality of living. It has to do with feeling useful and skilled.
      If the entire world has nothing to do but have fun all day, then people *will* feel like shit. Human beings are programmed to receive their most enjoyable hormone reactions when they feel they have accomplished something. That is one of the reasons our species is at the top of the food chain; our hormones motivate us to do our best. When we don't, we become restless and self-loathing; this is how our bodies push us to do something more productive. Why do you think depression is so incredibly widespread among the newer adult generations? Contrary to what some might believe, most people do not want to be lazy slobs with nothing to do all day.
      If you think people will be happy just sitting on their couches all day, you're being incredibly naive about what makes a human being. Robots, not just AI, taking over people's livelihoods is a really serious problem that is closer to reality at this moment than any of the other AI problems mentioned in this video, and it should not be so easily dismissed.

  • @Corey_Brandt
    @Corey_Brandt 6 years ago +1

    Why can’t it be that every time the AI encounters an ethical dilemma or may encounter one it just gives control to a human?

    • @0MoTheG
      @0MoTheG 5 years ago +4

      Because it would not do anything then.