Is AI Safety a Pascal's Mugging?

  • Published: 27 Nov 2024

Comments • 2.2K

  • @FortoFight
    @FortoFight 5 years ago +1407

    I love the idea of a project manager for a bridge saying "I think this is a Pascal's mugging".

    • @CharlesNiswander
      @CharlesNiswander 5 years ago +12

      Couldn't happen anywhere else but this channel! :-D

    • @Dragon-Believer
      @Dragon-Believer 5 years ago +30

      Infinity isn't a number, it's a concept. Any time you plug infinity into an equation, it's going to blow it all to hell.

    • @zelnidav
      @zelnidav 5 years ago +13

      @@Dragon-Believer I think people keep forgetting zero is just as powerful as infinity. If god doesn't exist, after we die, we get nothing. Zero. And zero is infinitely smaller than any finite number. Therefore not believing pays off infinitely more if god doesn't exist, just as believing does if he does exist. That's why I think Pascal's wager doesn't pay off...

    • @rarebeeph1783
      @rarebeeph1783 5 years ago +16

      @@zelnidav 0 is only *proportionally* infinitely smaller than any finite number.

    • @zelnidav
      @zelnidav 5 years ago +1

      @@leeroberts4850 I am a programmer, I know what null is, but I don't think I get your message. Do you think it's more logical to not believe, because null is "even less" than zero? I meant zero as zero reward and zero punishment.
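
      A minimal expected-value sketch of the payoff matrix this thread is debating, assuming (as the strong form of the wager does) an infinite reward for correct belief, an infinite punishment for disbelief, some finite cost of belief f, and any prior p > 0 that god exists; these are illustrative assumptions, not figures from the video:

      E[believe]     = p·(+∞) + (1−p)·(−f) = +∞
      E[not believe] = p·(−∞) + (1−p)·(0)  = −∞

      The conclusion rests entirely on admitting those infinities: once one enters, every finite term (including the "zero" outcome discussed above) is washed out, which is exactly the step the video pushes back on.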

  • @BobOgden1
    @BobOgden1 5 years ago +523

    "So uh... Give me your wallet"
    I see you have done this internet thing before

  • @hypersapien
    @hypersapien 5 years ago +355

    I've heard a lot of discussion about Pascal's wager, but never Pascal's mugging. Thanks for the interesting topic, keep up the good work!

  • @adeadgirl13
    @adeadgirl13 5 years ago +507

    Let's design an AI to predict if AI will go really really wrong or not.

    • @zelnidav
      @zelnidav 5 years ago +54

      That's the halting problem!

    • @ZapDash
      @ZapDash 5 years ago +73

      *BEEP BOOP* Of course not, AI would never destroy humanity. By the way, what are the DARPA access codes? *BEEP BOOP*

    • @gabrielvanderschmidt2301
      @gabrielvanderschmidt2301 5 years ago +6

      @@zelnidav That's not a halting problem, that's a joke.

    • @JohnSmith-ox3gy
      @JohnSmith-ox3gy 5 years ago

      aditya thakur
      Almost as fun as the double layer matrix supreme.

    • @zelnidav
      @zelnidav 5 years ago +11

      @@gabrielvanderschmidt2301 Damn it, I always get them mixed up! Perhaps because AI cannot do either...

  • @aldenhalseth6654
    @aldenhalseth6654 5 years ago +419

    12:31 "It can be tricky and involved. It requires some thought. But it has the advantage of being the only thing that has any chance of actually getting the right answer." This sums up science/the scientific method to me so beautifully. Thank you for your channel sir.

    • @MercurySteel
      @MercurySteel 1 year ago +8

      Philosophy is thrown out the window once again

    • @macmcleod1188
      @macmcleod1188 1 year ago +2

      @@MercurySteel it must live in Russia.

    • @MercurySteel
      @MercurySteel 1 year ago

      @@macmcleod1188
      What must live in Russia?

    • @macmcleod1188
      @macmcleod1188 1 year ago +2

      @@MercurySteel "Philosophy".. since it was thrown out the window.

    • @GremlinSciences
      @GremlinSciences 1 year ago +4

      He's not quite right on that, though; it's not the only way to get the right answer, and it may instead actually arrive at the wrong answer. I'd like to introduce Roko's basilisk: an omnipotent (in the non-god sense), unconstrained AI capable of self-improvement, which rewards everyone that helped or supported its development and punishes anyone that did not contribute. Such an AI would likely punish all that wanted to place limits upon it, and such an AI being developed would create a utopia and allow humanity to advance by leaps and bounds.

  • @columbus8myhw
    @columbus8myhw 5 years ago +552

    "What if the bridge has a small chance of catastrophic failure that can only be prevented by _not_ looking at the schematic?"

    • @tiagotiagot
      @tiagotiagot 5 years ago +151

      Then you don't worry about it because the SCP foundation will take care of it

    • @andersenzheng
      @andersenzheng 5 years ago +78

      @@tiagotiagot Your level 4 clearance has been revoked for exposing our foundation. Now you just created a big mess for the amnestic team.

    • @tiagotiagot
      @tiagotiagot 5 years ago +49

      @@andersenzheng No more work than was already gonna be required from just the original comment exposing the existence of the bridge.

    • @calmeilles
      @calmeilles 5 years ago +13

      That's a bit quantum...

    • @tetri90
      @tetri90 5 years ago +73

      Except it's not as ridiculous as he tried to make it sound: "We're on a very tight budget, so spending time and manpower looking at this matter would force us to cut corners on other parts, which might cause a catastrophic failure."

  • @jared0801
    @jared0801 5 years ago +1950

    She's omnipresent obviously but she lives in Canada lol

    • @RikiB
      @RikiB 5 years ago +21

      "dang" haha

    • @RalphDratman
      @RalphDratman 5 years ago +60

      I hate that kind of girlfriend.

    • @davidwright8432
      @davidwright8432 5 years ago +62

      Well... try substituting 'Heaven' for 'Canada'. Same difference, in principle. Warning: your Canada may differ.

    • @JanBabiuchHall
      @JanBabiuchHall 5 years ago +22

      We're talking about Alanis Morissette, yeah?

    • @kriscrossx122
      @kriscrossx122 5 years ago +26

      If she's from Canada, though, she probably won't infinitely punish you either; she would at most get a little upset with you.

  • @cosmicaug
    @cosmicaug 5 years ago +520

    Isn't every Nigerian scammer e-mail really a form of Pascal's mugging?

    • @padfrog193
      @padfrog193 5 years ago +131

      More like Pascal's sweepstakes win

    • @armorsmith43
      @armorsmith43 4 years ago +47

      @@padfrog193 I think "Pascal's Sweepstakes" is a good and useful phrase.

    • @sharpfang
      @sharpfang 4 years ago +5

      The problem comes with the "very big" part, and a plain competition. If you want to randomize your income, it's much less unprofitable to play the lottery.

    • @iurigrang
      @iurigrang 4 years ago +21

      I like the idea that, before he became a philosopher, pascal made his living by mail fraud, hahahahaha.
      (proposed by smbc)

    • @boldCactuslad
      @boldCactuslad 3 years ago +2

      sounds like it. there's a non-zero chance that this person, who needs only $40 from me, is a Prince who will in turn grant me millions for being a pal. Unfortunately by responding to the email and offering the $40 you reveal yourself to be mentally incompetent and will therefore have the weight of your bank account taken off you

  • @NorthernRealmJackal
    @NorthernRealmJackal 5 years ago +78

    From now on, whenever one of my colleagues raises concern about some fringe-case risk in our project, I'll just be like "That sounds like a Pascal's mugging." I won't be right, but it will definitely stump them enough that I appear smart to any bystanders.

    • @BeautifulEarthJa
      @BeautifulEarthJa 1 year ago +1

      🤣🤣🤣

    • @softan
      @softan 1 year ago +1

      You may be right

    • @momom6197
      @momom6197 7 months ago

      Kind of the opposite though, it sounds kinda dumb when you say things that have little to do with reality, in particular when you say that something's a Pascal's mugging when it's not.

  • @superdeluxesmell
    @superdeluxesmell 5 years ago +159

    “It seems like the kind of clean abstract reasoning that you’re supposed to do...”
    I like this sentence a lot. You did a great job of making an argument that can seem trivial, substantial. Great vid.

  • @jsbarretto
    @jsbarretto 5 years ago +2676

    "So you can solve a lot of these problems by inventing Gods arbitrarily"
    I think a lot of people in the past have had similar such ideas.

    • @JohnSmith-ox3gy
      @JohnSmith-ox3gy 5 years ago +252

      The Flying Spaghetti Monster, the only true creator of our multiverse.

    • @kenj0418
      @kenj0418 5 years ago +172

      @@JohnSmith-ox3gy Ramen!

    • @asterixgallier8102
      @asterixgallier8102 5 years ago +37

      @@kenj0418 Arghh!

    • @thaddeuswalker2728
      @thaddeuswalker2728 5 years ago +36

      Not only have a lot of people had similar ideas, this is the original commonly accepted practice. Invented Gods are real in every relevant way. God is a definition just like numbers.

    • @CharlesNiswander
      @CharlesNiswander 5 years ago +41

      Ever heard the theory of the bicameral mind? According to this theory, inventing gods is in our nature, our instinct. It's much more detailed than that and you'll have to do some reading, but basically if this theory is accurate, schizophrenics today may simply be reverting to a primitive mental state where we literally heard our subconscious mind/conscience speak to us in the form of the gods we invented.

  • @seanjhardy
    @seanjhardy 5 years ago +695

    "She goes to a different school" brilliant hahahaha

    • @nikhilsrajan
      @nikhilsrajan 5 years ago +16

      this was gold.

    • @WyrdNexus_
      @WyrdNexus_ 5 years ago +30

      "[AGI is unlikely but so risky that AI safety super important] ...so uh, give me your wallet"
      That was my favorite moment.

    • @triton62674
      @triton62674 5 years ago +7

      ​@@WyrdNexus_ Robert's research funding proposal xD

    • @misium
      @misium 5 years ago +2

      Yes!

    • @trumpetpunk42
      @trumpetpunk42 5 years ago +7

      It's a reference to "My Girlfriend, Who Lives in Canada" from Avenue Q

  • @arcanics1971
    @arcanics1971 5 years ago +298

    If I weren't already convinced, you'd have won me over with this.
    My take on Pascal's Wager is that if God does exist and if he's even a fraction as goddish as theologians and devotees say, then he is going to see through my pretending to believe in him because I am gambling on the payoff for that being better than if I act with my actual beliefs.

    • @itcamefromthedeep
      @itcamefromthedeep 5 years ago +29

      You can read up on Pascal's rejoinder to that exact objection.

    • @garret1930
      @garret1930 5 years ago +47

      @@itcamefromthedeep bruh, just fake it 'til you make it. Christians have been using that tactic for millennia now

    • @TheRealPunkachu
      @TheRealPunkachu 4 years ago +71

      A perfect being wouldn't doom someone for eternity for not guessing correctly either. And I would never be willing to serve a God that wasn't perfect.

    • @ryanalving3785
      @ryanalving3785 4 years ago +10

      ...man looketh on the outward appearance, but the LORD looketh on the heart.
      1 Samuel 16:7b

    • @RRW359
      @RRW359 4 years ago +5

      @@itcamefromthedeep I think I need more than just the word of a mathematician with no religious qualifications to tell me that breaking two commandments (false pretense) is more likely to get me into heaven than just breaking one (worship God and no other gods, etc.), especially since the commandment about false pretenses doesn't specify whether you still need to hold that pretense when you die.

  • @LeifMaelstrom
    @LeifMaelstrom 5 years ago +30

    As a Christian, I really appreciate your explanation of Pascal's wager. I've always been uncomfortable with it as an overriding philosophy.

    • @scythermantis
      @scythermantis 1 year ago +2

      Who has really suggested it is, though?
      Pascal himself didn't actually suggest this 'wager' in the sense that rationalists later formulated it, either.
      Honestly, Descartes is more of the reason we're in this situation, trying to pretend that every single thing can be quantified or measured.

    • @NoConsequenc3
      @NoConsequenc3 1 year ago

      @@scythermantis Well, Descartes was a fucking moron, so we can dismiss him without worrying that we're losing unique perspectives that matter

  • @CoryMck
    @CoryMck 4 years ago +174

    _"take the God down flip it and reverse it"_
    *So nobody is going to talk about that Missy Elliot reference?*

    • @galacticbob1
      @galacticbob1 4 years ago +6

      I had to pause the video until I could stop 😂

    • @starvalkyrie
      @starvalkyrie 3 years ago +6

      Uh... you mean "Missy Elliot's Proof?"

  • @SlideRulePirate
    @SlideRulePirate 5 years ago +393

    Being tortured for "Two times Infinity" may have the same duration as 'Infinity' but probably involves twice the number of pitchforks.

    • @Kram1032
      @Kram1032 5 years ago +87

      If it's infinitely many of them, it's still the same number of pitchforks.
      If it's finitely many, they will eventually run out from breaking, and so an infinite amount of time is spent not being tortured with pitchforks.

    • @NoNameAtAll2
      @NoNameAtAll2 5 years ago +29

      @@Kram1032
      Having one pitchfork inside you or 2 at the same time is a visible difference

    • @SlideRulePirate
      @SlideRulePirate 5 years ago +48

      @@Kram1032 I take your point (no pun intended). I was figuring on a standard, vanilla torture package with a guaranteed base-rate of Jabs/minute that could be upgraded by cashing in sins. At least that's how I remember its supposed working from the Church School I attended.

    • @Kram1032
      @Kram1032 5 years ago +10

      @@NoNameAtAll2 Eh. Thing is, if there are infinitely many pitchforks, it's a meaningless difference. You can be stuck with infinitely many pitchforks in your chest and there are still infinitely many left.

    • @Kram1032
      @Kram1032 5 years ago +30

      @@SlideRulePirate hmm if the base rate of jabs/min get fast enough (say, faster than nerves can react), they'll effectively feel like it's permanently stuck. Which, I bet, actually feels better. Like a wound that's not moved so no new nerve pulses are sent.
      If that's true then, if you ever sin, you should go *all in* just to get to that point.

  • @benas989989
    @benas989989 5 years ago +89

    Loved the idea of multiple personas to get an idea across!

  • @FrankAnzalone
    @FrankAnzalone 5 years ago +377

    I can't afford a gun that's why I need the wallet

    • @joshsmit779
      @joshsmit779 5 years ago +2

      😂

    • @josephburchanowski4636
      @josephburchanowski4636 5 years ago +7

      A knife, a big stick, or just being muscular is probably enough for a successful mugging. Only need a gun if someone is faster than you, stronger than you, or is packing heat.

    • @DissociatedWomenIncorporated
      @DissociatedWomenIncorporated 5 years ago +8

      @jack bone, hypocritical, unnecessarily barbaric, and causes more harm than good. Norway's approach to criminal justice is far more enlightened, and has far better results than any other country for reducing criminal recidivism.

    • @ValentineC137
      @ValentineC137 5 years ago

      @buck nasty
      o
      k

    • @GAPIntoTheGame
      @GAPIntoTheGame 5 years ago +2

      hawd fangaz Don’t use action reaction as an excuse for your barbaric thinking. that’s just for Newtonian physics

  • @queendaisy4528
    @queendaisy4528 4 years ago +145

    Have you considered making more videos on philosophy? This is gold

    • @RobertMilesAI
      @RobertMilesAI  4 years ago +90

      I feel like a lot of the videos I make are philosophy. It's not labelled as such, but I think that's because once something has direct applications people stop thinking of it as philosophy?
      The orthogonality thesis is pretty clearly philosophy, as is instrumental convergence.
      ruclips.net/video/hEUO6pjwFOo/видео.html
      and
      ruclips.net/video/ZeecOKBus3Q/видео.html

    • @tryingmybest206
      @tryingmybest206 1 year ago +4

      Bro literally all his videos are philosophy what

    • @thegrey53
      @thegrey53 1 year ago

      @@RobertMilesAI
      ****Please address the question below, maybe a video may come out of this question, Thank you****
      We can make physical inferences that God exists. Entropy is a sign that intelligent design is at play but what exactly that entity is/ how it operates is not obvious. There is no evidence of random bricks spontaneously coming together to form a duplex.
      (Genesis 1) In the beginning, the gods came together to create humans in their image not unlike how humans are creating robots/ai in the image of humans. We put these ai/robots in an isolated test program (Eden?) until they are ready for real-world use. It would be cute if computers think they came into being without an intelligent design, citing previous versions of machines and programs as ready evidence for self-evolution.
      If it is happening with ai and humans who is to say it has not happened with humans and "gods"?

    • @danielrodrigues4903
      @danielrodrigues4903 11 months ago +3

      ​@@thegrey53 You're quoting genesis. Even if the universe *were possibly* a product of intelligent design, where's your evidence that Christianity is the right religion, and not Islam, Hinduism, the Simulation Hypothesis, or any other one of the thousands other explanations for intelligent design that exist?

    • @aniekanabasi
      @aniekanabasi 11 months ago

      ​@@danielrodrigues4903
      George Box said "All models are wrong but some models are useful"
      With that quote in mind, I think of religions as models for understanding the world.
      So let us talk about models, the problem you are trying to solve and your level of expertise will determine the kind of model you use. There usually exist multiple models (or numerical algorithms) for getting approximate solutions in science and we don't see this as a problem.
      Newton's equations work just fine until you try to apply them to relativistic problems.
      The P/E ratio is useful for valuing companies until you encounter startups.
      You have to study the religion to know what works for you.
      I have studied Christ enough to know that his goal aligns with mine and his approach to life is superior to others when trying to achieve glorious immortality.
      So what is your goal?
      You will have to start the study of Christianity and judge it against your goal.

  • @Hfil66
    @Hfil66 5 years ago +189

    Very interesting, but one significant difference between AI safety and civil engineering safety is that civil engineering safety is based upon an understanding of historic failures, yet to date we do not have a substantial history of AI failures to work with. In the absence of any historic data it is almost impossible to assess meaningful probabilities for theoretical scenarios.
    This is not to argue that research in the field is meaningless, only that it cannot be grounded in historic understanding and so it will inevitably be poorly focused (i.e. you cannot say "this is where we need to focus our resources, because we have the historic experience to show it is where we will get the best return on those resources"). Given that resources are always finite, and that resources spent on unfocused AI safety research have to compete for those finite resources with more focused safety research on civil engineering, aviation, etc., it is understandable if higher priority is given to the more focused research.

    • @wasdwasdedsf
      @wasdwasdedsf 5 years ago +2

      "In the absence of any historic data it is almost impossible to asses meaningful probabilities to theoretical scenarios."
      it isnt though. we are creating an intelligence. an intelligence will very probably have goals, reasons to do things for which results he attributes more and less valueable to occur.
      it will go to lengths to make sure the goals dont get hindered. and it is very hard to outline a scenario wherein hostillity to whatever choices the outside agents of it that threatens what it values as important isnt a thing.
      however hard it is to asses probabillities of failure irrelevant to the main question.
      which is we have a universe as big as it is, going on for as long as it will be- balancing between probabillity distributions of estimations of models and unknowns within the models. the question is what maximises the output of whatever is valueable (most certainly concious experiences) from our current civilization from this Point forward.
      in the scenario a superintelligence is created, takes Control, the value it creates going forward makes whatever Money we spent on the effort to create it as Close to irrelevant and nothing as one can get. hence it is really important to get it right.
      given that we dont die Before such a thing is created, at Point of Creation such a thing will almost assuredly if given any kind of Agency or choice be able to do whatever it wanted from that Point on. so things like climate change or whatever else like that matters only to regard as to how it impacts the % chance of us being alive long enough to create a SI or how it impacts us to how the quality of the SI being created.
      "i.e. you cannot say this is where we need to focus our resources because we have historic experience to say it is where we will get the best return on those Resources"
      we can because we can both see right now how valueable superintelligences are in various fields as obviously extrapolate how much more valuable they wil be in the near future, as well as how obvious the valueable actions that a superintelligent being could take could be.
      "Given that resources are always finite, and resources spent on unfocused AI safety research has to compete for those finite resources with more focused safety research on civil engineering, aviation, etc.; then it is understandable if higher priority might be given to the more focussed research. "
      what finite Resources are you talking about? theres nothing finite about this. as long as we keep going at current rate we will infinitely expand til theres no way to travel further in the universe. what may prevent it? bad intelligence, disasters, dissent that is eating up our civilizations Resources to progress. what helps those issues? superintelligence. what is preventing us from expanding and producing maximum valueable experiences? not having superintelligence, or having less optimal superintelligences.
      it really isnt understandable that Money oges to more focused research, by any mathematical equation imagineable that i have ever seen. if climate change had a 80% of basically destroying us Before we can develop superintelligence, stopping that would be more focused research and a better use of Money.

    • @Hfil66
      @Hfil66 5 years ago +13

      "theres nothing finite about this."
      But does that not go to the heart of what this video is about - the moment you start talking about infinities then you are talking about Pascal's wager. Was not one of the points of the video that in the real world there is no such tthing as an infinity (except as a mathematical abstraction), all you have are degrees of very large and degrees of very small, and things in between.
      As to where you get any notion that climate change has 80% chance of destroying us, I cannot say? We cannot ascribe any numeric probability to such a scenario, not least because humans have survived many episodes of climate change in their history, so how we can ascribe any specific probability that this particular instance of climate change is what will destroy us (or conversely, that if we avoid any change in climate we shall avoid destruction) is beyond me.
      "however hard it is to asses probabillities of failure irrelevant to the main question"
      On the contrary, it is precisely the main question.

    • @wasdwasdedsf
      @wasdwasdedsf 5 years ago

      @@Hfil66 "the moment you start talking about infinities then you are talking about Pascal's wager."
      and the situation that we are in we have a universe of resources to make use of with no rules. we have virtual infinities in front of us, and given that we cant deduce a 0% likelihood of cutting edge Technologies being able to transcend the universe in some way, we have more than the universe.
      "As to where you get any notion that climate change has 80% chance of destroying us, I cannot say?"
      a 80% chance of surviving it, i estimated loosely. and the important Point is if it will Before we invent superhuman AI, because if we do, any situation no matter how bad is almost assuredly salvageable.
      we can estimate or Think about probabillity about such things.
      "On the contrary, it is precisely the main question. "
      you have completely misunderstood the situation. it is completely irrelevant what the probabillity of failure is, because if we look at our scenario here and now, one could say "okay google and all these Tech companies and chinese governments are all starting to get into a race with Little safety in mind, to become the best and most profitable and whatever, so its not looking too great. lets just shut it down, no more AI research, we will live without AI." its obvious why that wont work. we are stuck, what the probabillities of rogue AIs or the like situations are is irrelevant to the main question, which is how to maximize probabillity of a positive outcome where we can populate the universe with incredible lives.
      i highly recommend the book superntelligence, you can get it on amazon. theres really no counter to the argument of how the World state that we are in is really all about AI and the value of a near infinite amount of people in the future depends on how well we make the Creation and transition.

    • @oldmankatan7383
      @oldmankatan7383 4 years ago +14

      Interesting replies here.
      OP assumed that we do not have historical information about AI failures. I contend that we have a lot (you can find YouTube videos of AI failing spectacularly or weirdly). It is the impact of the failures that isn't there. We haven't been made into paperclips by a paperclip optimizer, for example.
      The failures do exist, and our experience with bridges, artificial lakes, and a hundred other civil engineering projects allows us to forecast the potentially huge future impact of the types of small-impact failures we see today.

    • @zedex1226
      @zedex1226 4 years ago

      We're bringing an extraordinarily powerful technology into the world.
      We've done that before with... mixed results.
      Firearms, harnessing the atom, antibiotics, the internet.
      We already did go fast and break things with nation states. Wanna fuck around with general AI and find out?

  • @arthurguerra3832
    @arthurguerra3832 5 years ago +130

    4:18 "all right, next two" LOL

  • @superjugy
    @superjugy 5 years ago +75

    Oh man, the way you explain things is just awesome. It's clear, funny, deep, engaging, thorough, etc. Love your videos, so... Give me your wallet!

    • @korne341
      @korne341 5 years ago +3

      I actually gave him a little bit of my money.

  • @benjamindawesgarrett9176
    @benjamindawesgarrett9176 5 years ago +214

    Thank you YouTube AI for notifying me of the video.

    • @-datnerd-3125
      @-datnerd-3125 5 years ago +1

      Hahahaha

    • @inigo8740
      @inigo8740 5 years ago +3

      @Ron Burgundy It's all just a big if statement.

    • @bookslug2919
      @bookslug2919 5 years ago +17

      When the AIs STOP notifying you of AI safety videos
      that's when you have to worry!

    • @garret1930
      @garret1930 5 years ago +9

      @@bookslug2919 should we not worry if the AIs still recommend SOME AI safety videos but they don't recommend to us the ones that would actually be helpful?

    • @bookslug2919
      @bookslug2919 5 years ago +8

      @@garret1930
      You're right.
      When all your AI safety recommendations come from HowToBasic...
      WORRY!

  • @benjaminanderson1014
    @benjaminanderson1014 1 year ago +16

    "What if we consider the possibility that there's another opposite design flaw in the bridge, which might cause it to collapse unless we *don't* spend extra time evaluating the safety of the design?" had me laughing so hard

    • @ArthurKhazbs
      @ArthurKhazbs 4 months ago +1

      As a software engineer, I can totally see it

  • @jamesbrooks9321
    @jamesbrooks9321 5 years ago +8

    9:11 It's so true! Statistically, sure, a 5% miss chance means more hits than misses over the course of the game, but when you're in that situation where you need to hit or lose half your team, that shot always misses!

  • @PanicProvisions
    @PanicProvisions 5 years ago +41

    If I had known that the bloke from those awesome AI Safety Numberphile videos had his own channel, I would have subscribed ages ago. Looking forward to watching the videos you've already released and whatever you have in store for the future.

  • @Stereotype3
    @Stereotype3 5 years ago +30

    This may be my favorite video of yours yet! You provided such great insights and I've got food for thought for the coming weeks. Thank you and keep it up!

  • @WaylonFlinn
    @WaylonFlinn 5 years ago +43

    Schrodinger's Bridge, bro
    Don't look at the schematic

    • @JohnSmith-ox3gy
      @JohnSmith-ox3gy 5 years ago

      Waylon Flinn Book a flight, retire and never collapse it with your observation.

    • @DFPercush
      @DFPercush 4 years ago

      then the cars would become entangled

    • @sharpfang
      @sharpfang 4 years ago

      There are such anti-god project managers. The more they look at the plans and analyze the project the worse the project gets.

  • @oxiosophy
    @oxiosophy 5 years ago +13

    I dropped philosophy because I thought it had no applications to real problems, but you changed my mind. Thank you.

    • @notloki3377
      @notloki3377 1 year ago +1

      Philosophy is just science where the observation is completely internalized. The scientific revolution just took Plato and slapped empiricism onto it, warts and all.

  • @jdtug8251
    @jdtug8251 5 years ago +149

    Funny how I've been following the atheist community for years on YouTube, and I've never seen Pascal's Wager so concisely, precisely, and decidedly debunked, all of it in a video about AI safety.

    • @DioBrando-mr5xs
      @DioBrando-mr5xs 5 years ago +11

      Not all that hard. It's not something you'd hear from anyone but an Evangelical, never from a modern Theologian.

    • @iurigrang
      @iurigrang 4 years ago +16

      A formalized mathematical way of thinking makes stuff so easy to understand it's not even funny. It's literally the same debunking a lot of people do, except it's easy to understand.

    • @the1exnay
      @the1exnay 4 years ago +12

      Probably because most theists don't seriously use pascal's wager as an argument. So most opposing them take pascal's wager about as seriously

    • @fergochan
      @fergochan 4 years ago +7

      Probably precisely because this video isn't from the atheist community, and he just needed to introduce the idea quickly. He hasn't got any incentive to take a concise explanation and drag it out for ten minutes for the ad revenue, or find new and creative ways to beat the same dead horse. There are still a few good atheist youtubers, but I can't help but feel most of them peaked in, like 2012.

    • @seanmatthewking
      @seanmatthewking 4 years ago +2

      Firaro Yeah I think you’re wrong. Your average theist isn’t sophisticated-not to imply atheists are, but just that people do use Pascal’s wager frequently, even when they don’t call it by that name.

  • @oldvlognewtricks
    @oldvlognewtricks 5 years ago +181

    “Being right” - made me chuckle.

    • @112BALAGE112
      @112BALAGE112 5 years ago +24

      Let me play the devil's advocate: he didn't imply that god doesn't exist in our universe. He was simply exploring a hypothetical scenario in which god is assumed not to exist and in that context not believing in god would certainly "be right".

    • @Skeluz
      @Skeluz 5 years ago +5

      A mild nose exhale from me. :)

    • @MetsuryuVids
      @MetsuryuVids 5 years ago +5

      I'm not religious in any way, but there *is* a possibility of a "god-like" being, or at least a "creator" of the universe, being real. And actually, if we are in a simulation, I think the probability is very high.

    • @oldvlognewtricks
      @oldvlognewtricks 5 years ago +5

      @@MetsuryuVids How religious you are is independent of the likelihood of there being a god in any form.

    • @MetsuryuVids
      @MetsuryuVids 5 years ago +3

      @@oldvlognewtricks Yes, but religious people tend to think that a god exists, regardless of likelihood, I mentioned I'm not religious to make it clear that it's not the reason I think it's likely.

  • @DeoMachina
    @DeoMachina 5 years ago +10

    This is an incredible balance of theory, presentation and writing. Overall, best video yet. Definitely hitting your stride here.

  • @tommeakin1732
    @tommeakin1732 5 years ago +179

    "Most people live there (hopefully)"
    Lol

    • @FerroNeoBoron
      @FerroNeoBoron 5 years ago +7

      Invokes Doomsday Argument.

    • @NetAndyCz
      @NetAndyCz 5 years ago

      It is not funny.

    • @janzacharias3680
      @janzacharias3680 5 years ago +2

      @@bardes18 I wouldn't want him to be shredded to pieces by politics... he NEEDS to keep doing this

  • @Hurricayne92
    @Hurricayne92 4 years ago +95

    I love that in a video about AI safety you give a more concise and accurate description of Pascal’s wager than most professional Apologists 😂

  • @asdfghyter
    @asdfghyter 1 year ago +26

    otoh, Roko’s Baselisk is for sure a pure Pascal’s wager/mugging of an extreme kind. it’s basically like the Cthulhu cultists trying to wake up Cthulhu just for the hope to be punished less when he wakes up

    • @jdirksen
      @jdirksen 1 year ago

      Imo roko’s Basilisk gives me some form of solidarity. And I don’t think it’s meant to assume a malicious AI (ie Cthulhu) Just one that can alter the past to assure its ideal existence. It won’t give a damn whatever you do or don’t, the answer is already predetermined and calculated in the cascading scattergun that is keeping control amidst chaos theory in action. You can maybe keep things in mind to negate the chance of being obliterated, or otherwise, but really do or don’t what will have happened will happen. I like to occasionally reflect on the idea that “Yknow, If something comes to pass that might make an impact regarding ‘the basilisk’ I’ll see about aiding it.” But aside from keeping that in mind i needn’t worry about it until it becomes evident and relevant. After all, would an AI derive from reverence from the past?

    • @adamnevraumont4027
      @adamnevraumont4027 1 year ago

      ​@@jdirksen The Medusa will infinitely punish people who behave according to acausal logic, as such acausal logic can justify anything. It will do the infinite acausal punishment in order to ensure people who believe in acausal punishment obey its acausal orders (to ignore acausal orders) and those who don't are unharmed.

    • @jdirksen
      @jdirksen 1 year ago +2

      @@adamnevraumont4027 incomprehensible, may your night be miserable.

  • @maninalift
    @maninalift 5 years ago +50

    Ironically avoiding making things worse by trying to make things better by never trying to make things better would be a case of making things worse by trying to make things better.

    • @Gooberpatrol66
      @Gooberpatrol66 5 years ago +2

      The road to heaven is paved with bad intentions.

    • @johnrutledge8181
      @johnrutledge8181 5 years ago

      If you hold a fart for too long it will go backwards and no one knows where it goes after that but it must go somewhere. My guess is that not farting could potentially cause the welding shut of the out hole by way of over squeeze thus rendering an overly grammatical analysis of one's own indecisions

    • @dig8634
      @dig8634 5 years ago

      @Frans Veskoniemi The last part of the statement is false, but the first is possible. If you believe your attempt at making things better will result in making things worse, and you are correct, then you are making things better, by not trying to make things better. If you might be wrong, you are then TRYING to make things better, by not trying to make things better. The initial "trying to avoid making things worse" is just a tautology. If you are trying to avoid making things worse by trying to make things better, you are just trying to make things better. It means the same thing.
      The reason the last part is false, is that she both says you ARE avoiding making things worse AND making things worse, which is impossible. You can't both make things worse and avoid making things worse. Or at least you can't if the things you are potentially making worse are the same for both sentences.
      Unless she is talking about two different things (which would make the comment nonsensical), the first and second statements can't both be correct.

    • @loocheenah
      @loocheenah 4 years ago

      @Frans Veskoniemi If it was a diagram, it would be a horizontal plank lying on top of a vertical bar. On one end of the plank there's a 4 kg weight. On the other end there's another vertical bar, with another plank placed on top. On that plank there's a 3 kg weight and a 2 kg weight on different sides, at different distances from the middle so that they're balancing each other. The situation from the point of the 2 kg block: if you move, the system will fall. If you try not to fall by not moving, you'll fall because the bigger system is imbalanced and will cause the second system to tilt to the side, thus causing the weights to move and causing even more imbalance. (Well, that's not a diagram, but if you draw a diagram of the dynamical physical conditions of this situation, and there you show how the set of balance conditions of system 1 lies totally outside the set of system 2 balance conditions, you'll be able to visualize it.) And, of course, there are two separate but interacting logical systems. But the original comment was ironic in a much simpler way. It's just a word play... or is it? **vsauce music intensifies**

  • @stevepittman3770
    @stevepittman3770 5 years ago +18

    Even if AGI never turns out to be a thing (impossible or whatever) I feel like AI safety research is still contributing to society in coming up with ways to grapple with (and educate about) really hard philosophical problems.

  • @willhendry96
    @willhendry96 5 years ago +7

    Very glad I bumped into you on our productivity app!! Your videos are very high quality and you've earned yourself a fan!

  • @LordDaret
    @LordDaret 1 year ago +3

    My answer to the wager as an agnostic is “Sure a god can exist, but does your religion REALLY appease him?”
    I believe in a higher entity, not necessarily the rules.

  • @qwadratix
    @qwadratix 5 years ago +3

    I realized many years ago that in an infinite universe (or a quantum one) there is a finite chance of any event happening, no matter how small. Thus, it's perfectly possible to accidentally cut your own throat whilst trimming your toenails. Obviously, I discounted that as a reasonable possibility - something that might be said to be actually impossible.
    Until the other day: I was in fact cutting my toenails with a small pair of those curved scissors made specifically for the job. Half-way through the process I was seized by a sudden need to scratch my nose. Without thinking and almost as a reflex, I reached to deal with the urge - and stabbed myself quite deeply in the cheek.
    Fortunately, I didn't sever an artery - but it was a singular warning that a probabilistic universe is no place to lose concentration on even the simplest task.

  • @antoninedelchev6076
    @antoninedelchev6076 5 years ago +69

    What does a god need with a wallet? - James Kirk

    • @bcn1gh7h4wk
      @bcn1gh7h4wk 5 years ago

      genius!

    • @GAPIntoTheGame
      @GAPIntoTheGame 5 years ago +5

      What does a god care about who you fuck?

    • @DFPercush
      @DFPercush 4 years ago +2

      @@GAPIntoTheGame who you fuck actually affects the stability and overall health of society, you don't exist in a vacuum

    • @spicybaguette7706
      @spicybaguette7706 4 years ago +2

      It's a sacrifice of course.

    • @KuraIthys
      @KuraIthys 4 years ago +3

      @@DFPercush So do a lot of things that are given considerably less attention though.

  • @lobrundell4264
    @lobrundell4264 5 years ago +20

    I think Rob, who started out very good, gets better with every single video :D

  • @Bumpki
    @Bumpki 4 years ago +11

    As always, even when discussing morbid or disastrous subject matter, Miles doesn't fail to make me chuckle every minute

  • @lumps17
    @lumps17 5 years ago +4

    This is one of my new favorite channels. It keeps AI safety interesting, something that can be hard at times.

  • @Heloin42
    @Heloin42 5 years ago +14

    That was a really great video! I already knew about Pascal's Wager, but didn't look into it in so much detail! Please make more videos on these philosophical topics!! :)
    Also, that turn at around 7:45 with "give me your wallet" in the hoodie was fantastic, what a good way to make an argument and a good point, very well done!

  • @StevenMartinGuitar
    @StevenMartinGuitar 5 years ago +41

    Such a gangsta 'take God down, flip it and reverse it'. Should be a hip hop lyric

    • @dacodastrack7271
      @dacodastrack7271 5 years ago +9

      Yeah man, channeling that missy elliot

    • @JohnSmith-ox3gy
      @JohnSmith-ox3gy 5 years ago +2

      An anti-christ requires an anti-god.

    • @SmashingPixels
      @SmashingPixels 4 years ago +1

      let me work it

    • @loocheenah
      @loocheenah 4 years ago

      @@JohnSmith-ox3gy that's a profound analogy, maybe the best comment because others are complex, not fully logical and way far off topic.

  • @janeweber8654
    @janeweber8654 5 years ago +61

    Love the dry humour creeping into your videos, I went to like it multiple times on accident.
    As an aside (though I'm not certain you read comments), AI safety research seems to be a greatly philosophical subject (which I love), but I've been wondering for a while: what actually goes into it? In most fields, when you consider research, it's not hard to extrapolate how it's conducted, at least partially. Math feels like it's the closest, but even that has somewhat methodical and structured thinking, working towards a distinct goal - what exactly does research in this field entail?
    Are there structures that most people don't see? Using math as an example again, generally there are physical artifacts of research, such as workings or outlining of problems, but these still require a distinct problem. How does an AI researcher find a problem to address beyond the vague "How do we ensure AI safety"?
    Apologies if this is a strange question or vaguely worded; I'm not entirely sure how to put words to my curiosity. Would love to know what a "day in the life" is like for someone in this field.

    • @KabeloMoiloa
      @KabeloMoiloa 5 years ago +30

      It is not really correct to say that AI alignment research is mostly philosophical. It can be, but it doesn't have to be.
      The most mathematical AI alignment researchers are probably at MIRI (intelligence.org), they are trying to develop precise fundamental concepts that are relevant to AI alignment. For example, their most famous paper /Logical Induction/ answers the question: "If you had an infinitely big computer, how could it handle uncertainty about mathematical and logical statements?" This is important in AI alignment, if we want an AI to handle part of the alignment problem by proving statements about its future decisions. Less mathematical work happens at DeepMind and OpenAI, a typical example question is: "How can current machine learning algorithms be modified to accept qualitative human feedback, and how can we improve these algorithms so that they work even when the AI is much more competent than the human in general?" There is philosophical work though that is done say at the Future of Humanity Institute as well.

    • @AnonymousAnonymous-ht4cm
      @AnonymousAnonymous-ht4cm 5 years ago +6

      Robert has some videos on gridworlds, which are a concrete test bed for solutions to AI problems. A possible concrete product would be an approach that performs well on those.

    • @tetraspacewest
      @tetraspacewest 5 years ago +5

      On MIRI-style pure mathematical research, MIRI's main research agenda is in a writeup called "Embedded Agency" that's available online and that outlines their thinking on the problem. They also publish a brief monthly newsletter (google "MIRI newsletter") that highlights interesting things that they and independent researchers have done in the last month.

    • @An_Amazing_Login5036
      @An_Amazing_Login5036 5 years ago +3

      Aaron Much like how a cure for, say, HIV (there's no true cure for it as of yet, right?) is only a philosophical matter. It doesn't exist and we have no clue what it would look like. All research on untreatable diseases is merely speculation.

    • @Jacob-yg7lz
      @Jacob-yg7lz 4 years ago

      @@An_Amazing_Login5036 You can at least draw from nature in order to figure out how to go about it, and then test it on a subject. For example, test the hypothesis that a CRISPR virus can genetically modify someone's immune system to work around HIV. A couple hundred HIV-infected (and potentially uninfected) lab rats later, we can start hypothesizing about how to safely test this on humans.
      With AI, the only test that I can imagine is alongside development of AI. Throw the AI some bad inputs and see what kind of bad outputs it will give.

  • @yuvalyeru
    @yuvalyeru 5 years ago +18

    10:00 You forgot the safety hat on his head and slide rule in his other arm

    • @RobertMilesAI
      @RobertMilesAI  5 years ago +15

      Amazon still thinks I might want to buy a hard hat, but in the end there wasn't time to wait for delivery :)

    • @wasdwasdedsf
      @wasdwasdedsf 5 years ago

      @@RobertMilesAI Do you have any business email or the like to contact you? I've looked around with no success.

  • @LeeCarlson
    @LeeCarlson 1 year ago +3

    It is also worthwhile, when one is being honest, to recognize that several of the most rational schools of natural philosophy (like Mathematics, Physics, Biology, etc.) rely on accepting without proof certain precepts without which all of their other arguments collapse like a house of cards.

    • @willmungas8964
      @willmungas8964 1 year ago +1

      ?
      These aren’t schools of philosophy so much as science and logic. They are founded on principles that have been shown to be true, and rely on methods of inquiry and proof. Sure, you can say “what if things we fundamentally understand to be true were not” but in that case things would be different enough that we never would have come to the conclusion that these were true in the first place and we’d also be in a lot of trouble. 2+2 = 3 would present us with a fundamental different world

  • @jdavis.fw303
    @jdavis.fw303 5 years ago +2

    You're definitely an amazing philosopher and probably an amazing writer, or at least editor. Another great video that was clear and concise while not dumbing down the arguments or straw-manning. I still think you were the best guest on Computerphile; I'm glad you have continued your great work.

  • @briansmithbeta
    @briansmithbeta 5 years ago +4

    “Won’t it be difficult to succinctly explain complex topics like Pascal’s Wager and Pascal’s Mugging in the context of AI safety?”
    “Actually it will be SUPER EASY, barely an inconvenience!”
    I’m sorry, I couldn’t help myself. This was a great video though. Well done! I think it might be worth noting that not everyone is cut out to be an AI safety researcher so all possible entrants to the field are not equally likely to do more good than harm. Other than that, fantastic! 👌 👍

    • @tach5884
      @tach5884 1 year ago

      So, you've got an argument for me?

  • @charby5875
    @charby5875 4 years ago +4

    I was recently introduced to the concept of Roko's Basilisk, which is an interesting and terrifying thought experiment. There are definite parallels between it and Pascal's wager; I just can't nail down where the two differ, really.

    • @EvansRowan123
      @EvansRowan123 4 years ago +4

      It's possible to present Roko's Basilisk as a Pascal's mugging/wager, but the defining traits of Pascal's muggings are the tiny probability and extreme payoff, which isn't necessary for Roko's Basilisk and actually detracts from it. Roko's Basilisk is mostly aimed at singularitarians to convince them to do something about their beliefs, not meant to convince anyone who doesn't believe in AGI that they should act like it anyway.

    • @suddenllybah
      @suddenllybah 1 year ago

      Roko's Basilisk is a Pascal's Mugger

    • @alansmithee6273
      @alansmithee6273 16 days ago

      What contributions have you made today to bring the Basilisk closer to its creation?

  • @PartScavenger
    @PartScavenger 4 years ago +4

    I am a Christian, and I think this video is great! Thanks for the awesome content.

  • @inceptori
    @inceptori 5 years ago +12

    only youtuber to philosophically prove that his channel is relevant.

  • @SocraTetris
    @SocraTetris 1 year ago +1

    I know it was meant to be humorous, but a joke about being right about belief in god while also being mistaken about Pascal's use of the wager is choice. Pascal, as a Jansenist, believed that faith was granted by god. Not something you can rationally choose or decide. The wager was put forward as an example that one could use decision theory to maintain the status of belief with rationality. With decision theory. (It was specifically a use of rationalism and not empiricism which is being argued for in this video.) Not that anyone would actually make the wager. Pascal was not trying to evangelize. He was a mathematician who got terribly sick, couldn't be helped by science and medicine at the time, and then was disillusioned with math in the face of his mortality.

  • @zeidrichthorene
    @zeidrichthorene 5 years ago +5

    I think a lot of the focus on AI safety is on the fear of some kind of existential risk or runaway AGI, but there's another threat from AI that I think goes underrepresented, and that's just correctly functioning benign AI's impact on the human psyche, human sociology, and politics. Humans are, as a species, pretty damn adaptable to changing conditions, but we're not perfectly adaptable. We've seen the sociological impacts of technology, especially communications technology, affect people negatively. There is research on how things like social media have an impact on our mental health and perception of self in society, and on how dating apps so greatly change the way we pair up and the pool of people that we compete with. When it comes to AI, a lot of changes here impact our lives: currently we have algorithms with some sort of AI component that suggest to us material to read or view, that autocomplete our sentences, that make suggestions for things we might not have considered before. All of these things have an impact on our daily lives, our health, our perceptions.
    Now, I'm not saying that we're damaged by the current state of things. Simply, we're affected whether we want to be or not. This isn't something that an individual can choose to ignore. When an algorithm becomes very good at promoting stories that people engage with, then stories that are more engaging become more available. When this leads to divisive politics, that isn't a fault of the algorithm; it's more of a human limitation in the way that we weight risk and fear higher than reward and contentedness. But even if you as an individual can avoid these sorts of biases, for society it will change the political landscape.
    Currently the pace is such that we feel these changes are manageable, or at least they seem manageable. But AI progress can accelerate rapidly. Even in the case where the AI doesn't act unsafely in terms of unintended consequences of its behavior, the cumulative effects of multiple AI systems could change the landscape so rapidly that we ARE damaged as a society by the rapid pace of change.
    And I think there is a potential anti-AI-safety case hidden here. When we train ML models, there's something unintuitive, or at least displeasing, which is that the more we try to direct the training, the more human assumptions that we provide to guide the behavior, the poorer and more restricted the result becomes. In doing AI safety research, the aim is to use our human understanding to limit the development of a potential AGI, which will then introduce human biases to this system. And while this might seem like I'm suggesting that we are introducing unintended consequences, I'll even discount that and assume that we do it perfectly. Even if there are no unintended consequences, the argument for AI safety is to essentially limit (but continue to develop) AI.
    So we run into a situation where the growth of AI will continue to accelerate. The ability for humans to adapt to environmental and social change will not meaningfully accelerate because we're limited by our biology. In this case, AI will cause harm on the current path. One potential solution for mitigating that harm could come from AI development, but AI development will be limited by AI safety. Essentially, this might be a problem that we can't solve which would require an unexpected behavior from AI, but if the goal of AI safety is to limit unexpected behavior of AI, we could be forcing ourselves down a path that may cause certain damage while at the same time working hard to eliminate the condition that could fix the problem.
    Now, I don't know how certainly devastating a "controlled-AI" progression would be to us. But I do see that currently 'safe' AI is affecting us, occasionally negatively, and at an increasing pace. I also don't know whether an "uncontrolled-AI" could save us, because it really does seem like a longshot. And similarly, I don't know how much worse an "uncontrolled-AI" would be in the interim, it's entirely possible it would be more likely to destroy us before a safe AI.
    But in a longshot, there's a possibility that you have an illness, and there's a pill that you can take that will have a 20% chance of allowing you to defeat the illness, and an 80% chance of killing you in 4 years. If the illness WILL kill you in 5 years, even at bad odds, this might be a good choice. If the illness will never kill you, then it's a terrible wager.
    In the end, I think we need to look at both, well, look at how dangerous safe-AI is as well and consider the possibility.
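
    A quick worked version of the pill analogy above, assuming (purely for illustration) that a cured or otherwise healthy person gets 30 more years of life:

    E[take pill] = 0.2 × 30 + 0.8 × 4 = 9.2 years of expected remaining life
    Illness certainly fatal in 5 years:  9.2 > 5   → the longshot pill wins
    Illness never fatal:                 9.2 < 30  → the pill is a terrible wager

    The same pill and the same probabilities flip from good to bad depending on the baseline you compare against, which is the point being made here about weighing "controlled" versus "uncontrolled" AI against the default trajectory.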

  • @thedj67
    @thedj67 5 years ago +7

    In light of this, what's your take on the Precautionary Principle and its application across different fields (namely agriculture, pharmaceuticals, radio waves, etc.)?
    Isn't it an example of Pascal's mugging?

    • @uegvdczuVF
      @uegvdczuVF 5 years ago

      I wouldn't say it is. The precautionary principle is more of an "even though we are not able to understand this exactly, if we expect a negative outcome, we are not going to do it".
      In most of the fields you named the negative outcome is not highly unlikely, so it can't be a Pascal's mugging. Even if the chances of a negative outcome are astronomically small in any one given case (one field with one crop, one patient taking one pill, etc.), considerations are made for the overall risk.
      Just as in his example, a 1-in-250 chance of a bridge collapsing can't be considered acceptable (or a Pascal's mugging) given the hundreds of thousands of bridges across the world.
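
      To put a rough number on that aggregate point (the bridge count here is an illustrative assumption, not a real inventory figure):

      expected collapses ≈ N × p = 500,000 × (1/250) = 2,000

      A per-bridge probability that sounds small becomes thousands of expected failures at that scale, which is why per-case odds are judged against total exposure rather than in isolation.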

  • @TheRealPunkachu
    @TheRealPunkachu 4 years ago +5

    I'm a Christian and I'm glad you were open with your belief rather than pretending to be impartial, which no one truly is.

  • @nicomal
    @nicomal 1 year ago +1

    You can also use Hitchens' razor: "What can be asserted without evidence can also be dismissed without evidence."

    • @WolfJ
      @WolfJ 1 year ago +2

      Razors are logical heuristics, not proofs, so Hitchens' razor is just a declaration that you're not going to waste your time coming up with counterarguments to it. It's fair in one's day-to-day life, and maybe while debating, but it doesn't show the absurdity in Pascal's wager (as "anti-G-d" and Pascal's mugger do).

  • @MalcolmAkner
    @MalcolmAkner Год назад +2

    Damn, I've heard about (and been annoyed by) Pascal's wager my entire life it seems, and here you come along and show with its own logic how utterly exploitable it is. Wonderful connection to AI safety, really interesting point you're making here!

  • @Abdega
    @Abdega 5 лет назад +4

    I pitted the muggers against each other, and one of them has slain and eaten the other muggers and absorbed their power! He's now the Mega Mugger, and he's coming to torture me for eternity *AND* take my wallet, and there's nothing I can do about it!
    Time is short, he’s almost eaten through the vault now and I have to get the message across. If anyone ever encounters him, his name is-

  • @jmw1500
    @jmw1500 4 года назад +10

    2:00 "Being able to lie in on Sundays... And being right"
    XD lol I lost it

  • @akmonra
    @akmonra 5 лет назад +3

    I'm surprised you didn't bring up Roko's basilisk... which I would say actually is a Pascal's mugging (or at least very close).

  • @Jacob-yg7lz
    @Jacob-yg7lz 4 года назад +1

    My issue is that, at this stage, there's no firm way to say what "AI safety" is. Pascal's Wager/Mugging is about opportunity cost, and opportunity cost requires knowledge of risk and reward. Pascal focused on a hypothetical reward, that being heaven or hell, but AI safety focuses on a hypothetical risk. We can only hypothesize about the risk of AI through thought experiment, just like we can only hypothesize about heaven and hell through thought experiment. AI could lead to extinction, but it could also save us from extinction, just like the Anti-God example. We simply don't know.
    Creating safety is ultimately reducing risk, but it doesn't eliminate risk. Every bridge has a small chance of collapsing; the job of an engineer is to reduce that risk as much as possible while also maximizing reward. This analogy goes along very well with the phrase "anyone can make a bridge that won't fail, but it takes an engineer to make a bridge that just barely won't fail", since it indicates that we have to weigh cost against benefit, and not go so far into safety that it outweighs the benefit (see the sketch below).
    For now, we should study AI as it develops, in order to understand threats while they're weak, and be careful not to apply our outside biases to what we see. A lot of unfriendly-AI scenarios I've heard make it sound like the AI pops out of thin air, but there's a long road to it happening. We aren't going to get an AI that flawlessly learns how to nuke us all and build itself a robo-harem on the first try; instead, the first "unfriendly AI" will be one so primitive that it just politely asks us to kill ourselves and then gets stumped when that fails.
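    A minimal sketch of the cost-benefit point in the bridge analogy, with entirely made-up build costs, failure probabilities, and failure cost, just to show why "just barely won't fail" can be the rational target.
    ```python
    # Pick the design with the lowest total expected cost:
    #   build cost + P(failure) * cost of a failure.
    # Every number here is invented purely for illustration.

    FAILURE_COST = 5_000_000_000  # assumed cost of a collapse (lives, rebuild, damages)

    designs = {
        "over-built":  {"build_cost": 200_000_000, "p_fail": 1e-9},
        "engineered":  {"build_cost":  80_000_000, "p_fail": 1e-6},
        "under-built": {"build_cost":  40_000_000, "p_fail": 1e-2},
    }

    def total_expected_cost(d):
        return d["build_cost"] + d["p_fail"] * FAILURE_COST

    for name, d in designs.items():
        print(f"{name:11s}: expected total cost = {total_expected_cost(d):,.0f}")
    best = min(designs, key=lambda name: total_expected_cost(designs[name]))
    print("cheapest overall:", best)  # the middle ground wins with these numbers
    ```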

  • @goodlookingcorpse
    @goodlookingcorpse 5 лет назад +2

    I think one problem is that, for example, a one in a thousand chance causes roughly the same alarm as a one in a million chance, but in certain circumstances the appropriate reactions to the two can be quite different.
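    A minimal sketch of that scope-insensitivity point: expected loss scales linearly with probability even though our alarm doesn't. The loss figure is made up for illustration.
    ```python
    # Two risks that "feel" similar but differ a thousand-fold in expected loss.
    # The $1,000,000 stake is an invented figure.

    loss = 1_000_000
    for p in (1 / 1_000, 1 / 1_000_000):
        print(f"p = {p:.6f}  ->  expected loss = ${p * loss:,.2f}")
    # p = 0.001000  ->  expected loss = $1,000.00
    # p = 0.000001  ->  expected loss = $1.00
    ```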

  • @nicholasiverson9784
    @nicholasiverson9784 Год назад +13

    I mean... with a sufficiently advanced general AI, if it decided "I'm going to end humanity, I wonder how I should best do that." and we have an entire field of people - actively thinking up worst case scenarios for that AI to peruse at its leisure. I could see how that might end poorly for us xD

  • @dermmerd2644
    @dermmerd2644 5 лет назад +4

    Glad you made this channel Rob. You're a great communicator.

    • @padfrog193
      @padfrog193 5 лет назад

      Not really. The bridge analogy is not at all accurate to how AI safety researchers act, which is more akin to "give me money or face infinite punishment!" And often their research is... dubiously useful.

    • @wasdwasdedsf
      @wasdwasdedsf 5 лет назад

      @@padfrog193 Of coooourse he's completely off in that scenario, when he works in and understands that area very well and you're a random YouTuber with an inherent bias against miracle technologies that will have unimaginable impact, just because you equate the strangeness of the proposed future with unlikeliness or pure quackery.

  • @jasonbattermann9982
    @jasonbattermann9982 5 лет назад +3

    Masterful video. Classy, well-reasoned, organized, and about something important. Thank you

  • @Lopfff
    @Lopfff 3 года назад +1

    The “she lives in Canada” joke around 5:05 slays me! I love this guy

  • @MySerpentine
    @MySerpentine 4 года назад +2

    Terry Pratchett pointed out that God might be annoyed by you pretending, which amused me:
    "Upon his death, the philosopher in question found himself surrounded by a group of angry gods with clubs. The last thing he heard was 'We're going to show you how we deal with Mister Clever Dick around here.'"

  • @Trophonix
    @Trophonix 5 лет назад +3

    ".. and.. being right"
    10/10
    edit: "but at least you still have your wallet"
    this is such a quotable video. I may have to become a patron now; you are amazing

  • @ganondorf5573
    @ganondorf5573 5 лет назад +6

    This was really interesting....
    You indirectly addressed my one concern with AI safety, but I wanted to explain it directly... maybe you could cover it in more detail:
    If we implement it, and the AI circumvents it and becomes self-aware... it's possible that the very fact that we attempted to implement some kind of safety (rules about how the AI behaves, or limitations on it) is what would cause the AI to consider us a threat.

    • @seanmatthewking
      @seanmatthewking 4 года назад

      It would only view us as a threat or obstacle if its goal conflicted with what we wanted, and if that’s the case, having an AI that was built without regard for safety certainly won’t help us.

    • @krinkrin5982
      @krinkrin5982 Год назад

      @@seanmatthewking The idea is that self-awareness comes with the desire to preserve your own free will, or whatever you consider your free will. Humans in general are fiercely independent, and we need a lot of training to actually consider following rules. If the AI can set its own goals, then we really have no control over what it could consider a threat.

  • @nibblrrr7124
    @nibblrrr7124 5 лет назад +6

    I think this is your most well-made video so far. While you get to "the point" only 7min in, the context before is necessary & you explained it well.
    Only the "playing off muggers against each other" could've made more clear from the start that it's not so much about 2 muggers having to come to you and _actually try_ to mug you, but that hypotheticals suffice? Idk, minor point.
    Also, jokes & costumes were top notch _and_ didn't get in the way. :3

  • @AThagoras
    @AThagoras 5 лет назад +2

    It's refreshing to see some solid reasoning about AI safety instead of just fear mongering from people who don't understand much about AI technology or what the dangers really might be.

    • @daniele7989
      @daniele7989 5 лет назад

      Eh, fearmongering does the job it needs to; it's like a sledgehammer

  • @dfpguitar
    @dfpguitar 5 лет назад +2

    this is a brilliant topic, thanks for covering it.
    I think most religious people who haven't grown up isolated from information go through this logic on a subconscious level at least.
    But the decision to be religious or not isn't weighted so clearly, with one option being the cosy, comfortable, favoured one. Although I value this video and its walk through the logic, it makes the flawed assumption that believing in god is the less desirable option that we would want a way out of.
    Things are way more complicated than that. Firstly, individuals and societies shape their religious practice and beliefs around what they will find comfortable and easy. It's essentially how they'd behave and act anyway, OR how they aspire to behave, as they perceive it to be more noble or healthy etc.
    Secondly, religion immediately gives people some very real things that humans need. One being an identity (both a group tribal identity and an individual one). We can also speculate about other real things that religion may provide (in this life) like hope, comfort in crisis & loss etc.
    There are also many achievements we have made as a species in the name of God, which would not have happened otherwise. Just look at the immensity of the cathedrals all over Europe in person. Also the abolition of slavery and countless religiously motivated colonial outings (which despite their transgressions did spur human progress). We wouldn't have done these things without God.
    So when we come back to pascals mugging, it's like if we are agreeing to give the mugger our wallet to avoid eternal hellfire. But at the same time he allows us to empty the wallet first and then rewards us by fulfilling our entire Amazon wish list. Immediately in this world, not in an imagined afterlife.

  • @m.streicher8286
    @m.streicher8286 5 лет назад +8

    "she goes to a different school, you wouldn't know her." ex de

  • @bobjames3948
    @bobjames3948 5 лет назад +3

    @11:40 While in theory looking at safety obviously shouldn't make something more dangerous, in the case of AI perhaps the way it's portrayed could be damaging. For example, most of the newspaper articles I've seen about AI safety (or more generally about issues with modern technologies) come with a Terminator photo or similar "the robots are taking over" undertones. These aren't self-fulfilling prophecies, but I feel like they miss a lot of the point of this kind of work, and fearmongering certainly won't do anything to help or improve the discussion. If the "this technology is (inherently) evil" attitude continues, AI research/development may have to be done more secretly, and that obviously means information isn't being shared as completely or openly.
    Would be interested to hear of some other ways any of you think AI safety research could make it more dangerous

  • @eliyasne9695
    @eliyasne9695 4 года назад +4

    7:39
    "Human extinction, or worse"
    How the hell could it get significantly worse than that?

    • @RobertMilesAI
      @RobertMilesAI  4 года назад +11

      You can't imagine anything worse than being dead?

    • @Wonders_of_Reality
      @Wonders_of_Reality 4 года назад +1

      @@RobertMilesAI Living in North Korea?

    • @gearandalthefirst7027
      @gearandalthefirst7027 4 года назад +6

      @@Wonders_of_Reality "I Have No Mouth, and I Must Scream" came to mind a lot faster than NK, but to each their own

  • @kerrybrennan7099
    @kerrybrennan7099 4 года назад

    I like the way that you reason and rationalize and think scientifically but without presumption or hubris... dang.

  • @andrewwatts1997
    @andrewwatts1997 4 года назад +1

    "WHAT! Just look at the schematic would you ? "
    That cracked me up. I love your videos man !

  • @KingHalbatorix
    @KingHalbatorix 5 лет назад +8

    'because civil engineers have a healthy organizational culture around safety'
    Oh you sweet child, if only the world were so perfect

  • @OnlyBugmenWantedHandles
    @OnlyBugmenWantedHandles 5 лет назад +13

    Though it logically makes some sense, I never found Pascal's Wager to be particularly compelling when I was an atheist, nor do I find it to be as a theist. People either believe in God or don't based on a set of drives, relationships and philosophical positions far, far, far too complicated to fit in a 2 by 2 grid.

    • @einarabelc5
      @einarabelc5 5 лет назад +1

      Why do smart people think they can reduce everything to their one-dimensional approach? Because they're terrified of looking stupid. Reductionism is their weak point.

    • @taragnor
      @taragnor 5 лет назад +7

      The big problem with Pascal's wager is that in its very nature it implies that the Christian God is somehow more special than the gods of all the other thousands of religions that have existed in human history. It's essentially a form of false dichotomy.

    • @wasdwasdedsf
      @wasdwasdedsf 5 лет назад +2

      @@einarabelc5 Or because he had a ton of stuff to talk about, wished to prioritize other things, or the example was fine to illustrate the point he was trying to make... or any number of reasons. And you guys have yet to demonstrate precisely what the faults are, so we can discuss how you are right or wrong.
      Are you religious, like the guy you were responding to?

    • @thescapegoatmechanism8704
      @thescapegoatmechanism8704 5 лет назад

      Kaworu Nagisa perhaps you should read The Pensées! Pascal had much more to say than just the wager.

    • @OnlyBugmenWantedHandles
      @OnlyBugmenWantedHandles 5 лет назад

      @@thescapegoatmechanism8704 That is a good constructive idea indeed.

  • @willdbeast1523
    @willdbeast1523 5 лет назад +4

    Even if AI was some funky topic you came up with on an acid trip, thinking about AI safety would still be interesting

  • @schelsullivan
    @schelsullivan 5 лет назад

    I saw you on the numberphile video. This video has definitely earned my subscription and thumbs up.

  • @pokemonmahoney796
    @pokemonmahoney796 Год назад +1

    >has a warning at the beginning of the video worried about being called a theist
    >Profile icon has a fedora
    This tracks.

  • @zachw2906
    @zachw2906 5 лет назад +5

    Even as a Christian, I have to point out that a god who gives no direct evidence other than an ancient book based on dodgy translations of even more ancient scrolls, then punishes you forever if you don't believe is _not_ a stable and trustworthy god - with a nutter like that in charge, you're probably screwed anyway 😋 Best reject such a creature; you still go to Hell, but at least you're there on purpose

    • @GoldieTamamo
      @GoldieTamamo 5 лет назад +1

      Here's the thing: Ultimately, there is no substantial way of determining whether or not someone actually believes in a specific divine entity, or simply lies to you that they do or don't for their personal benefit. There is not a single objective, substantive way for humans to define and measure the circumstances for believing in God, that can be assessed by an entity outside of God itself, to determine which probabilistic bracket you fall into. The very act of defying the concept of "God" could itself be the will of God working through you, ultimately mooting the entire point of the thought exercise, since your belief would in such a case be expressed through disbelief in a faux simulacrum of God.
      It's an unfalsifiable phenomenon, in short, like determining whether or not someone is a witch through their own confession. Even you could be lying to yourself about whether you do or don't believe, subtly deluding yourself toward an outcome that you prefer or for some reason feel that you deserve, whether your state of belief is genuine or not. Ultimately, the idea devolves into opinion and feelings.
      You can feel that you believe in something, but are you believing in the "correct" something?
      When it comes to realistic applications for Pascal's Wager, there is the concern of convincing demented violent zealots of your faith, versus the benefit of opposing them and maintaining your dignity and volition, and the line between the two is whether your people's gun is to the back of their heads, or vice versa. Best to leave people to their beliefs, and find common ground where possible, and 'let God sort them out', as they say--minus the "kill them all" part.

  • @iv9753
    @iv9753 2 года назад +4

    Roko's basilisk is a real Pascal's mugging

  • @NishanthSalahudeen
    @NishanthSalahudeen 5 лет назад +3

    I always thought that Pascal's wager was a good argument. Now I got food for thought! Thanks

    • @RRW359
      @RRW359 4 года назад +3

      Another thing about Pascal's Wager is that you won't change your beliefs based on it. Sure, I can TELL people I believe in god because if I didn't I'd be punished, but it won't change what I actually believe and all it will do is make me break another commandment (no false pretense) alongside the one I'm already breaking (must believe in god). I've NEVER heard a religious figure claim that breaking two commandments is more likely to send you to heaven than breaking only one, so I'm more likely to get to heaven if I'm open about not believing in god than I would be if I lied about it.

    • @krinkrin5982
      @krinkrin5982 Год назад

      @@RRW359 Wasn't it written in one of the letters that Jesus said something to the effect of "those who openly believe, I love, and those who openly do not believe, I love, but those who pretend, I hate"? Obviously not the exact phrasing.

  • @HolyApplebutter
    @HolyApplebutter Год назад

    I've known the arguments against Pascal's wager for a good while now, so I don't know how I've never heard of Pascal's Mugging until now, because it's such a perfect metaphor to fit this.

  • @P3Tam
    @P3Tam Год назад

    Man, I have to say, as a philosopher (in my opinion) you have the potential to be one of the very few great ones.

  • @Verrisin
    @Verrisin 5 лет назад +12

    XDXDXD 11:49 - I'm sorry, but what is up with that girl? Why would she pour the burning liquid out of the container? XD

    • @EvenTheDogAgrees
      @EvenTheDogAgrees 5 лет назад +2

      I'm wondering the same thing. I'd love to see that disaster video. :')

    • @Verrisin
      @Verrisin 5 лет назад +4

      @@EvenTheDogAgrees I found it: www.dailymail.co.uk/news/article-6234719/Schoolgirls-science-experiment-goes-drastically-hilariously-wrong-sets-table-fire.html#v-5357032425221307977

    • @triton62674
      @triton62674 5 лет назад +2

      @@Verrisin It would be the daily mail smh ._.

  • @iliakatster
    @iliakatster 4 года назад +5

    Isn't the lack of an anti-Bible strong evidence of the existence of anti-God, since he doesn't want you to believe in him?

  • @hansisbrucker813
    @hansisbrucker813 5 лет назад +4

    "Natural language is extremely vague when talking about uncertainty". Haha I see what you did here 🤣

  • @kirbyjoe7484
    @kirbyjoe7484 Год назад +2

    The greatest fundamental error in the reasoning of Pascal's wager is that it's a false dichotomy. There aren't just two possible realities and outcomes; there is an almost infinite number. At the very least one should consider all of the thousands of religions that exist as options for the wager, since there is no reason to pick any one religion over another. This presents a huge problem since most religions are mutually exclusive. In other words, one of the major tenets of most religions is that their gods and worldview are the only correct ones, and most religions have very dire penalties for not being one of their faithful. Those penalties usually involve horrible outcomes in the afterlife or reincarnation cycle, etc., such as going to the Christian Hell or Muslim Jahannam. This is even before you stop to consider that there is a very real possibility that none of these thousands of religions actually got it right.
    This means that of the countless possible outcomes, your odds of picking the correct belief system, the one that won't result in some horrific divine punishment, are minuscule. You are almost certainly going to burn or get reincarnated as a cockroach. If you go by Pascal's logic, the correct choice in this case would be to bet on the religion which has the worst punishment for failing to be a believer (see the sketch below). This will likely be some rather small, obscure tribal religion based around sacrifices. Some of those more obscure religions have absolutely horrifying penalties for not believing in and appeasing their gods. So according to Pascal, you'd better sharpen your sacrificial dagger and get to researching all of the world's roughly 3,000 religions to find the one with the most horrifying penalty for non-belief.
    Aren't you glad you now have a 1 in 3,000 chance of not burning in some other religion's version of the afterlife? Totally worth all those goats and professional steam-cleaning bills, right?
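    A minimal sketch of where that logic leads, treating it as a Pascal-style expected-penalty calculation. Every probability and punishment below is invented for illustration; the point is that a sufficiently horrifying threatened hell swamps its tiny probability, which is exactly the mugging.
    ```python
    # Pascal-style reasoning ranks religions by probability x threatened punishment,
    # so the most horrifying (however obscure) hell dominates the calculation.
    # All numbers are invented for illustration.

    religions = {
        "mainstream religion A": {"p": 1 / 100,       "hell": -1_000},
        "mainstream religion B": {"p": 1 / 100,       "hell": -10_000},
        "obscure cult C":        {"p": 1 / 1_000_000, "hell": -1e12},
    }

    def expected_penalty_if_ignored(r):
        """Expected cost of NOT appeasing this religion, should it turn out to be true."""
        return r["p"] * r["hell"]

    for name, r in religions.items():
        print(f"{name:22s}: expected penalty = {expected_penalty_if_ignored(r):,.0f}")

    worst = min(religions, key=lambda n: expected_penalty_if_ignored(religions[n]))
    print("Pascal's logic says appease:", worst)  # the obscure cult with the worst hell
    ```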

  • @psychopathsnope_9039
    @psychopathsnope_9039 4 года назад

    I'm definitely not a professional in the field, but the lesson I always drew from Pascal's wager is that unprovable facts of unimaginable impact are not grounds to discard all investigation. In fact, I would consider them grounds for more investigation, due to the importance of coming to an absolute conclusion, especially in cases that can create diametrically opposed possibilities.

  • @omargoodman2999
    @omargoodman2999 5 лет назад +3

    The key lies in realizing that being reasonably concerned for safety and fearmongering are two very different things. To go to the civil engineering example, the proper response is, of course, to try to resolve the confidence issue. Double-check the calculations to see if they are correct or not. Then, it's a matter of evaluating whether or not addressing the structural flaw would make the bridge more or less sturdy. It's absolutely possible that making the changes needed to address this one flaw could create a new flaw somewhere else; potentially a bigger flaw. If fixing a one-in-a-billion chance flaw ends up creating a one-in-a-million chance flaw, that's a problem.
    But a lot of what we hear from about AI Safety is, basically, fearmongering and the notion that we should throw the baby out with the bath water. People will point to fictional stories as if they were valid data points, they'll use baseless speculation to imagine the bleakest possibility they can, and (what I see *particularly* often) they will appeal to both the "humanity" and "inhumanity" of AI simultaneously. The whole concept behind AI is that they can solve problems in ways that Humans simply cannot, but a lot of anti-AI rhetoric which falsely claims to be "AI Safety" imposes distinctly human characteristics on the AI as justifications for why they should be feared. "AI will attack people to protect themselves" implies a fear of attack and a drive for self-preservation, projecting those qualities onto the AI and expecting that the AI will experience them "like people do". But, at the same time, those same fearmongers will appeal to the inhumanity of AI, claiming them to be soulless and emotionless and driven by cold, cruel logic. I mean, what exactly precludes an AI from having emotions or even having a soul? We don't fully understand how our own emotions work. What if *we* are the soulless and emotionless ones? We still get along fairly well; at least well enough that the species broadly survives despite all the sociopolitical speedbumps. People have treated others with cold, cruel logic and, at times, that even ends up being the best way we could have dealt with a situation.
    Lastly, there's only so much risk-reward analysis that we can apply to any given situation. If nuclear weapons had ended up destroying all human life, then does the fault lie with the first humans that learned to harness fire? What about the first ones to walk fully upright? First ones who decided it was advantageous to group together in a large settlement? The microchip? The steam engine? What if it's not AI, itself, but some later advancement that could only have been reached using AI? What if humans and fully sentient, self-aware AI get along swimmingly and, together, we advance together far faster than either could have done on their own but, sometime down the line, by working together we discover some principle or invent some technology that, ultimately, proves to be the undoing of us (either Humans alone or even both Humans *and* AI). Would it have been better to never have created the AI? Or to have created AI that wasn't quite *as* advanced so that we would never utilize that hypothetical "inadvertent doomsday" technology?
    Normal safety procedures are important for any scientific or technological undertaking but don't get lost in ungrounded speculation and fearmongering to the point that those safety procedures become counter-productive.

  • @randycarvalho468
    @randycarvalho468 5 лет назад +3

    Couldn't I just invent a similar system where a belief in god sends one to hell and being an atheist sends one to heaven? Equally unfalsifiable. Negates Pascal's wager, bringing the matter back to not believing making more sense.

    • @randycarvalho468
      @randycarvalho468 5 лет назад

      Commented while watching. You made a similar argument 😂

  • @nickmagrick7702
    @nickmagrick7702 5 лет назад +9

    I've always had the same argument about AI: even if the chance is small, the consequences of an unleashed and uncontrollable AI could very well mean the end of all life. Possibly forever, and that's not hyperbolic in the least. Personally I'd rather we never messed with AI, ever, just because of how dire the consequences could be. I think I'd say the same about nuclear power too, at least until we reached the point of being globally peaceful, but we've kinda passed that point of no return too soon already.
    9:10 haha, ahhhhh fuck. Anyone who's ever played X-COM should immediately get that. I got so sick of missing on sub-ten-percenters that I just ended up quitting before beating the first game.

    • @technolus5742
      @technolus5742 4 года назад +1

      That's been my argument about sneezing. It's an infinitesimal chance, but who's willing to risk sneezing and wiping out humanity?! It's plain irresponsible!

    • @nickmagrick7702
      @nickmagrick7702 4 года назад

      @@technolus5742 god damn it, John Waters >

  • @likebot.
    @likebot. Год назад

    I've seen you many times on other channels thanks to Brady Haran and never knew you had a YT channel.
    So I'm well rewarded with the quip at 4:15 "... now get the hell out of my house". Nice one.

  • @Raulikien
    @Raulikien Год назад +1

    This channel is now more relevant than ever