Is P(Doom) Meaningful? Epistemology Debate with Vaden Masrani and Ben Chugg

  • Published: 24 Nov 2024

Comments • 174

  • @willpetillo1189 16 days ago +9

    Short version as a dialog (B = Bayesian, P = Popperian), with my own paraphrasing and bias baked in:
    B: What's your p(doom)?
    P: I don't see that as a meaningful question.
    B: Ballpark it for me. Surely you can tell me whether it's greater or less than 1%!
    P: I am happy to say that superintelligence is wildly implausible, or that the upcoming US presidential election is on a knife edge, or to make any other subjective assessment. But once you bring in numbers, I think that is arbitrary, unnecessary, and a poor foundation on which to build any other claims.
    B: Don't those subjective statements effectively map to probabilities, though?
    P: Not really? I mean, you could translate my statements in that way, but why bother claiming false precision?
    B: Because expected value is a really useful concept. Sure, you can believe or disbelieve something without probabilities, but if you are faced with a bet, at what odds are you willing to take that bet?
    P: I agree expected value is useful sometimes; I'm not universally against numbers. It's entirely appropriate to use probabilities when you have good data on which to base your numbers. But when you don't have solid data, that's a different class of question that requires different, intuition-based tools. What I am opposed to is the *mixing* of rigorous, empirical analysis with subjective guessing.
    B: Doesn't the success of prediction markets demonstrate the value of precisely this combination? Working from data is great when you have it, but a lot of markets deal with complex questions for which there isn't good data.
    P: Sure, if you have access to special information or really good reasoning to think that a market is wrong, you should bet against it, what's your point?
    B: My point is that if you have reasons for thinking a given prediction market is off that go beyond what can be reduced to a rigorous statistical foundation, you should be able to factor them in. And doing that with any precision requires using numbers to decide how to bet on the margin, which is exactly what your view says you shouldn't do.
    P: Does anyone actually make money that way? If you're right, prediction markets, superforecasters, and the like should be capable of being substantially more accurate than just working with good data and treating uncertainty as moving towards pure randomness.
    B: Yes, prediction markets are extremely well calibrated!
    P: But how accurate are they? That's what would change my mind.
    B: Accuracy isn't a fair basis for evaluation!
    P: Then I think we are still talking past each other somehow...
    Note that all of this, while interesting, is basically irrelevant to assessing the claim of superintelligence. One could calculate a Bayesian p(doom) of near zero depending on one's priors and views of the available data; one could claim in a manner consistent with Popper that the superintelligence scenario is almost definitely going to occur based on a purely intuitive sense of the quality of the reasoning.
    Also, for the Bayesians in the house: are you interested in making forecasts about complex systems? Do you want to be able to rigorously specify beliefs while admitting of many possible factors and outcomes simultaneously? Are you interested in simulating the impact of specific policies and developments on overall risk assessments? If so, I invite you to try out a Probability Calculator I built a while back: application is here (will9371.itch.io/probability-calculator), video demo and explanation is here (ruclips.net/video/JdHeCrmeWr8/видео.html)
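
    (To make the expected-value question B raises above concrete: a minimal Python sketch, with illustrative numbers, of how a subjective probability fixes the odds at which a bet is worth taking.)

      def expected_value(p_win, payout, stake):
          """Expected profit of a bet that wins `payout` with probability
          p_win and loses `stake` otherwise."""
          return p_win * payout - (1 - p_win) * stake

      # At a believed p = 0.6, a bet paying $100 against a $100 stake is +EV:
      print(expected_value(0.6, 100, 100))  # 20.0
      # The break-even point recovers the probability: EV = 0 when
      # p = stake / (stake + payout), i.e. the odds encode the belief.
      print(100 / (100 + 100))              # 0.5 -> fair odds at p = 0.5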

    • @DoomDebates 16 days ago +4

      Great recap!

    • @symme7ry 10 days ago +1

      Good recap, although I don't think this line summarizes Liron's view well: "B: Accuracy isn't a fair basis for evaluation!"
      I think Liron was a bit confused by the precise meaning of the term "accuracy" in this context, so in the actual interview he stumbled a bit. But Liron is willing to use Brier scores (as are all Bayesians I know of), which take accuracy into account.
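
      (For concreteness, the Brier score is just the mean squared error between forecast probabilities and 0/1 outcomes; lower is better. A minimal sketch:)

        def brier_score(forecasts, outcomes):
            """Mean squared error between probabilistic forecasts (0..1)
            and realized binary outcomes (0 or 1). Lower is better."""
            return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

        # A sharp, mostly-right forecaster beats a hedging 50% forecaster:
        print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))  # 0.02
        print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))  # 0.25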

    • @DoomDebates 10 days ago +1

      @ I think by “accuracy” in that context I was just referring to sharpness/confidence, making the point that even an ideal Bayesian won’t always be expected to have probabilities far from 50%.

  • @tommiz.__1 16 days ago +14

    Invite Ben on again by himself. It would make for a more productive conversation.

  • @Sawa137 15 days ago +15

    "You can't predict things beyond 1 year"
    "AI doom in the next 100 years is very unlikely"

  • @weestro7 16 days ago +10

    I really enjoy this channel. Often though I feel the comments section is a bit too harshly judgmental of guests. I want to see this channel do well (and maybe I dunno, save humanity ✌️), and I hope the comments don’t tarnish the optics for any prospective future guests.

    • @DoomDebates 16 days ago +6

      @@weestro7 Thanks. 💯Always be polite to the guests or I will block you lol

  • @AlfiePT 16 days ago +8

    Judging by the other comments I wasn't alone in finding Vaden's attitude pretty unpleasant but I'm seriously impressed with how you handled this conversation and it was still a super fun episode to listen to. Looking forward to part 2!

  • @jamiekawabata7101 17 days ago +12

    The case of a specific sequence of coin flip outcomes being attributed to a magic coin is, I think, an example of why the hypothesis has to come before the experiment in science. After-the-fact explanations are subject to this type of weakness and can't be treated as evidence in the same way. Which is not to say that it's not evidence at all, but it's a very different type.
    I think you brought out a key problem with the Popperian denial of probabilities, which is that in order to take any action in the world, the probabilities must exist implicitly within the mind even if they are not acknowledged. Better actions in the world require better probabilities, not just for "statistical" type events but also for "unknowable" events.

  • @ManicMindTrick 16 days ago +9

    The question, answer and feedback at 2:30:35 about prediction markets and AI doom really showed how bad faith Vaden is. He seems to be way more focused on gotchas and snark than trying to fully grasp the argument.

  • @blahblahsaurus2458 6 days ago +2

    At the end of the discussion, I'm amazed they can't grasp the idea that prediction markets only work because, and only as long as, there is a financial incentive. Come on guys. Really shows how brilliant people are sometimes only more brilliant at arguing themselves out of understanding something really simple and obvious.
    Also annoying was the refusal to engage with the coin flip hypothetical. You guys, sometimes you _can't_ run an experiment but you _do_ need to make a decision. If you're running away from a bear and reach a fork in the road, you still need to make a decision without the opportunity to gather more information. I have no doubt that Popperians have really great, practical, effective approaches to making these sorts of decisions. I'm sure they could engage with the coin flip scenario just fine if they felt like it. You're not being clever by refusing to engage with the spirit of the question. You're just "um ackshually"-ing.

  • @gridstop-or6cb 10 days ago +2

    Some interesting stuff, but frustrating at many times.
    I don't think Liron and Vaden were on the same page about the kind of conversation they were planning to have. Vaden came off very aggressive: I am prone to believe he thought that was what was expected during this interview, but it made Liron seem like the reasonable one. I wish Ben, who has always come off as less excitable by nature, had found a way to reel Vaden in at times.
    A lot of the Popperian arguments offered were sloppy. This happens a lot on this topic: it isn't very clear what roles the various arguments to the effect of 'prediction involving future knowledge is impossible', 'numerical prediction without a good explicit model is impossible', 'probabilistic prediction without a good explicit model is impossible', 'people aren't very good at estimating predictive probabilities', etc. are supposed to play. It's of course possible to have multiple arguments against something, but it's hard to track what the relevant claim is supposed to be at any given time.
    I think some sloppiness shows through at other times. Liron kept identifying the Earth spinning as the reason for the sun rising and setting, but Ben and Vaden kept saying this was part of heliocentrism, which isn't the same thing. And they said that it explains the seasons, which is due to the addition of an axis tilt. Obviously in the end we have a unified theory of planetary motion, but the point being made about how these various results fall out from the conjectures for the others was incorrect. It's a minor thing, but I think shows a dynamic on display at several points, where Ben and Vaden could have slowed down a bit and been more deliberate with what they were saying.
    I think the coin example was a real mess. Vaden thought that saying "I'd flip the coin" was a mic-drop moment, but I don't think it helped Liron or his listeners learn anything. Liron tried to update it to get more of an explanation, but severely misjudged what would work on Ben and Vaden; Ben and Vaden could have tried to help, but I don't think they stepped up to the plate in that regard.
    I doubt I agree with Liron's actual ideas, but those didn't end up being the topic of this podcast. When rationalist types start mentioning the length of Turing machines, they've usually gone off the reservation.

    • @DoomDebates 10 days ago +4

      Thanks for your comment. You make interesting points and I agree with most. I'm afraid I must insist that lengths of Turing machines is a very important mental model 😂 We get a ton of value from being able to formalize or more precisely define the role of "Occam's Razor" or a "Simplicity Prior" or some piece like that which completes the Solomonoff Induction description of *what cognitive engines do* to optimize the universe.
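
      (A toy sketch of the Turing-machine-length idea above: weight each candidate hypothesis by 2^-length and normalize, which is one standard way to formalize a simplicity prior. The hypotheses and lengths below are made up for illustration.)

        # Toy Occam/simplicity prior over made-up hypotheses, where "length"
        # stands in for the length of the shortest program producing each one.
        hypotheses = {"fair coin": 10, "biased coin": 25, "magic coin": 300}

        weights = {h: 2.0 ** -length for h, length in hypotheses.items()}
        total = sum(weights.values())
        prior = {h: w / total for h, w in weights.items()}

        for h, p in prior.items():
            print(f"{h}: {p:.3g}")
        # "magic coin" needs a vastly longer program, so it starts with a
        # vanishingly small prior: not impossible, just heavily penalized.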

  • @ibbajibbaduay 14 days ago +2

    Echoing other commenters: is there a reason why Popperians should be any less likely to worry about superintelligence than Bayesians?

  • @julianw7097 16 days ago +4

    What do they do if you present them with a long list of bets with different “expected value”? Eventually one has to start giving out probabilities or the buckets get quite full, right? You must at least order the bets relative to one another. Then, can you not map the bets to probabilities as the number of different bets increases? idk, something feels wrong
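
    (One way to make that mapping concrete: if someone can always say which side of a bet they prefer, bisecting over the odds recovers an implied probability. A sketch, assuming consistent answers:)

      def elicit_probability(prefers_yes_side, tol=0.01):
          """Binary-search for the price at which a bettor is indifferent.
          prefers_yes_side(p) is True if they'd rather bet ON the event when
          it's priced at probability p; the indifference point is their
          implied subjective probability."""
          lo, hi = 0.0, 1.0
          while hi - lo > tol:
              mid = (lo + hi) / 2
              if prefers_yes_side(mid):
                  lo = mid  # event looks underpriced to them -> their p is higher
              else:
                  hi = mid
          return (lo + hi) / 2

      # Someone with "no number" who nonetheless answers bet questions
      # consistently with an implicit p = 0.3:
      print(elicit_probability(lambda price: price < 0.3))  # ~0.3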

    • @DoomDebates 16 days ago +2

      @@julianw7097 Right, this is where the slippery slope leads when you start wanting to systematically win or program an AI to do so

    • @authenticallysuperficial9874 15 days ago +1

      @julianw7097 how exactly would you tie expected value of bets into this argument? because it's not as easy as it seems

  • @pankajchowdhury 14 days ago +3

    I think I can give a two-sentence summary of what happened in the conversation and why there were disagreements.
    Liron thinks the Bayesian view gives an ideal algorithm for belief (albeit an uncomputable one) and is trying to convince others that, no matter what you do, in some sense you will someday find out you were doing Bayes under the hood all along.
    The guests think that epistemology should describe what humans are ACTUALLY doing when coming up with new truths, and after some introspection it doesn’t look like the Bayesian algorithm at all; to them the Popperian approach captures how humans in the real world play around with beliefs (and that is what they want even out of their idealized version of epistemology).
    This roughly describes where the mindset of each side arises from (there is of course much more nuance to their positions; I did some oversimplifying).
    But I think once you realize where the mindsets of both sides arise from you can much more easily predict which statements are considered as crux for which side.
    Also I see a lot of people commenting that this discussion was somehow disrespectful or that the guests were void of content (no pun intended). But Liron, I think you should more or less ignore comments from people who are judging the other position by their facial expressions more than the actual words being uttered. This was quite a good discussion.

    • @DoomDebates 14 days ago

      In the last minute of the episode, I tried to give that same summary and said it could be a compatible thing we both believe, but they rejected it :)

    • @pankajchowdhury 14 days ago

      Ah, I am only halfway through the conversation actually. I noticed some ‘easy’ misses from both sides. I was actually thinking about writing down some counterarguments and a short overview of this debate.
      But yeah, I can see why they would reject the idea that Bayes is compatible with their view. It’s because, although they never say it explicitly, they really are trying to build a theory of epistemology that puts human-like belief generation and hypothesis generation that is intuitive to humans (good explanations, etc.) at its center. I am more or less sure that is what they want to capture inside their ideal epistemology (not sure if they consciously noticed the human-specific part, or would agree to it even if someone asked them directly).

    • @pankajchowdhury 14 days ago

      @@DoomDebates
      Basically, from their point of view, if they don’t reject compatibility between Bayes and Popper, then Bayes will turn out to be a superset of Popper.
      So accepting compatibility would imply they had been doing a special case of Bayes all along.
      Which of course they will try to dodge.
      This dodging, by trying to smell where the conclusion is going to go from 10 miles away, is an obvious corollary of having a mindset of not assigning probabilities when you think the argument is somehow too crazy for you.
      Also, if Vaden or Ben is reading this, I am not really a fan of overly psychoanalyzing the opposite side over what they state as their actual position. These comments may read like that. But really this is just me trying to understand, in good faith, what is generating the core intuitions on both sides. Don't let the comments about mannerisms get to your head too much; both of you did very well in this conversation.

  • @InfiniteQuest86 16 days ago +3

    I like how they said there's a really small chance the coin isn't 50-50, and then went on to say they don't use probability. Umm, "small chance" is the same as saying 1% or something. Yes, you aren't putting the exact number down, but you are still claiming exactly the same thing as if you did.

  • @Bren_alexander 16 days ago +5

    2:38:50 Ben nailed it here imo. Good discussion but I think you all should have had a more structured approach outlining the big picture and major disagreements before diving into details. Looking forward to part 2!

  • @wardm4 16 days ago +4

    I basically agree with their problems with giving exact numbers, but I think it's more hurtful than helpful when they won't even give something as simple as

    • @DoomDebates 16 days ago +3

      Right and no human claims their numbers will always be calibrated down to 1%. We all admit we’re just trying to approximate an ideal. It’s just a spectrum where some humans can do better, and we expect AIs to do even better.

    • @authenticallysuperficial9874 15 days ago +1

      @wardm4 The guest admitted in the first 30 seconds of the video that he is happy to describe relative strengths of his subjective beliefs. You acknowledged that you can't actually give a number to it. Thus it sounds like you're just conceding that the guest was right.

  • @wardm4 16 days ago +3

    Okay. About an hour in, they seem to be fundamentally confused about Bayesianism and what makes it so good compared to frequentist interpretations. Vaden says the probability of a volcano eruption is calculable because you have some geological data to count to make an estimate.
    But this is the frequentist method! There is a subtle but important distinction in Bayesian probability. A frequentist is saying the probability that it will happen is defined relative to some probability space (this choice matters and affects your calculation). The Bayesian is giving a degree of certainty about their belief that something will happen.
    I think some people put this as: the frequentist assigns probability to the data given the (null) hypothesis, but a Bayesian assigns probability to the hypothesis given the data. So as a Bayesian you can, of course, always assign a probability to a well-formed hypothesis (like humans going extinct from AI in the future) no matter how little data you have; you just won't have very much certainty.
    But they are right that a frequentist cannot always assign a probability given the same null hypothesis if there is no relevant data.
    To me, this is just a confusion on their part. They are trying to treat the Bayesian framework as a frequentist who applies Bayes' theorem rather than as a true Bayesian epistemology.
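
    (That distinction in one computation, with illustrative numbers: the Bayesian puts a probability on the hypothesis itself and updates it with data, rather than only computing the probability of the data under a null.)

      # Bayes' theorem: P(H|D) = P(D|H) * P(H) / P(D). Numbers are illustrative.
      prior_h = 0.01              # P(H): prior degree of belief in the hypothesis
      p_data_given_h = 0.9        # P(D|H)
      p_data_given_not_h = 0.1    # P(D|~H)

      p_data = p_data_given_h * prior_h + p_data_given_not_h * (1 - prior_h)
      posterior_h = p_data_given_h * prior_h / p_data
      print(round(posterior_h, 4))  # 0.0833 -- belief in H rose, still uncertain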

    • @wardm4 16 days ago

      Okay. Didn't realize the next 20 minutes would basically be about this.

    • @authenticallysuperficial9874 15 days ago +2

      @wardm4 Sure, but then the bayesian is describing a whole different concept (degree of certainty) and it's the bayesian's fault for trying to call that probability.

  • @ModernCentrist 16 days ago +14

    The guy in the middle was super annoying. I feel like he thinks he is way smarter than he is and usually he didn't contribute very much to the discussion.

  • @mridul321go 17 days ago +10

    I'm a Popperian but Vaden is obnoxious as fuck. Great debate though, you did a great job Liron!

    • @ManicMindTrick 16 days ago

      He reminds me of Dufrais Constantinople.

  • @ManicMindTrick 15 days ago +3

    Liron, a breakdown of David Deutsch's AI claims would be a great follow-up after this.

    • @DoomDebates 15 days ago +1

      @@ManicMindTrick I gotta get on that for sure

  • @dizietz 17 days ago +3

    I just don't understand the allergy to assigning numerical values to future real world events. I think one could easily play an epistemology game in real life by giving probabilities to future events, updating them, and tallying up the score based on some reasonable metric. You would most likely find that there is some frontier on the intelligence/knowledge axis that can describe these scores.

    • @DoomDebates 17 days ago +1

      @@dizietz Humans don’t natively generate numbers with precise conscious access. Humans do generate subjective confidences with crude conscious access, but at that point people don’t realize something like “10-90%” is still a quantity.

    • @dizietz 17 days ago +3

      @@DoomDebates It was a critique for Ben and Vaden, who refused to engage with your asks for numerical probabilities. I agree that in every day life humans make predictions using language that can be mapped numerically to probability confidence ranges!

    • @DoomDebates 17 days ago +2

      @@dizietz ya I gotcha, I was just psychoanalyzing the kind of people who make their “quantities aren’t real” claims :)

  • @mrbeastly3444 5 days ago +1

    3:47 Wow, Vaden really thinks that "Narrow AI" is more risky/dangerous/destructive than "General AI" (AGI, ASI)? Huh...

    • @DoomDebates 5 days ago +1

      He just doesn’t think AGI or ASI is coming soon, maybe not even in 10k years

    • @mrbeastly3444 5 days ago

      ​@@DoomDebates Wow, ok. That's even more surprising! ;)
      I guess there's something super special about human intelligence? Something that not even >20 quadrillion flops can simulate? (If so, lucky us I guess... and ...)
      Anyway, I'll keep watching to hopefully find out how/why! :)

  • @yaqubali2947 17 days ago +6

    He wrongly claimed that you cannot predict things more than a year into the future.
    Moore's Law has been predicting the future of computation for more than 50 years. Kurzweil used it to predict the arrival of AGI in 2029.
    In response to his question about how you arrive at H, perhaps Aumann's Agreement Theorem would help. According to the theorem, if both interlocutors are rational, share common priors, and their posteriors are common knowledge, then they must agree on the same posterior probability for H.
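
    (The Moore's Law extrapolation as a back-of-envelope sketch; the 1971 Intel 4004 baseline and the roughly two-year doubling period are the usual rough figures, not precise data.)

      # Transistor count doubling about every 2 years since the Intel 4004.
      def transistors(year, base_year=1971, base_count=2300):
          return base_count * 2 ** ((year - base_year) / 2)

      print(f"{transistors(2024):.1e}")  # ~2e11, the right order of magnitude
                                         # for today's largest chips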

    • @ManicMindTrick 16 days ago

      You could also make a strong prediction about the decline of Moore's Law based on physics.
      Moore's Law is about the number of transistors on an IC, not overall computing power, and you can only miniaturize transistors so much.
      The end of Moore's Law will, however, not stop progress in overall computing power.

  • @riffking2651 15 days ago +3

    Good conversation. Bit hard to watch from time to time as others have noted in the comments already, but I think Vaden did really well considering that he seemed quite emotionally triggered by parts of the discussion. People have to deal with a lot of shit in their lives, and it's not easy to carve a space for yourself as someone who deserves respect for caring about abstractions.
    Certainly something to the points they brought up about the issues with putting numbers on things that cannot be known, or applying probabilistic reasoning to novel potential outcomes.
    I'm a big fan of Deutsch, but I tend to disagree with his view on what AI portends, so I was glad to see this conversation. Would love a Deutsch v Yudkowsky debate mediated by Liron

  • @MK7-MephistoKevin777 17 days ago +9

    Really stubborn guests and too much arguing over semantics. Normally you rip everyone's arguments to shreds with logical common sense; this seemed a bit pointless compared to most of your other videos (I've seen them all but rarely comment due to a lack of technical data and ability to react in a useful way).

    • @DoomDebates 17 days ago +2

      @@MK7-MephistoKevin777 Thanks for the feedback. Just make sure not to think epistemology is semantics. If we’re seeing 10 coin flips in a random pattern and it seems obvious why the coin is probably just a coin, that doesn’t mean there’s not a deep epistemological argument about *how we know* that we need to solve before understanding intelligence.

    • @MK7-MephistoKevin777 16 days ago +3

      @@DoomDebates True, but I was referring more to them not wanting to give a doom level, though eventually one guy did agree to a general 1-to-10 level of ''I think something is possible''. They really don't want to be labeled or apply a number to anything, which is fine, but it's sort of rebellious for the sake of being rebellious. That's how I saw a fair amount of it. But they generally disagree with A.I. doom, like most of your guests, so I shouldn't be too surprised at how people react to your questions.

  • @thequestingblade 17 days ago +4

    for the futarchy idea, the moment you base policy on the output of a prediction market, the incentives for participation in that market change

    • @DoomDebates 17 days ago +1

      Right, it’s just that whoever’s acting on incentives other than “predict accurately” is going to pay a lot of money into a bad prediction only to quickly have the market arbitrage out the impact of their trade.

    • @thequestingblade 17 days ago +2

      i'd expect manipulation by large interests, both in terms of bets made and how outcomes are measured. for one thing, such a market could be used to generate policies that advantage certain parties in ways that create larger profits outside the market, or influence that goes beyond finances

    • @DoomDebates 17 days ago +1

      @ Right but money-motivated investors will quickly neutralize any impact of any non-predictive bet.
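
      (A toy model of that arbitrage claim, with market mechanics heavily simplified: a manipulator shifts the price, and traders holding an accurate estimate pull it back while the manipulator has bought high.)

        # Price starts at the informed traders' estimate; a manipulator
        # distorts it; arbitrageurs trade against the gap each round.
        true_p = 0.30
        price = 0.55                          # after the manipulator's buying
        for _ in range(10):
            price += 0.5 * (true_p - price)   # each round closes half the gap
        print(round(price, 3))                # ~0.3: the distortion decays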

    • @ManicMindTrick 16 days ago

      This is a very fair point. Prediction markets need to be pure to work. Manipulations make them less useful or even useless.

  • @goodleshoes 16 days ago +1

    Really love the longer vids. Thank you so much.

  • @keizbot 16 days ago +3

    Prediction market implied probabilities on doom are not accurate. Consider a contract worth $50 if the world ends in 2050 and -$50 otherwise (essentially a doom prediction market bet). Selling the contract (betting against doom) will always be a good deal, since if the world doesn't end in 2050 you make money, but if it does end, money is worthless anyway. So a rational bettor will always sell this contract and never buy it, regardless of their P(Doom).
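
    (The argument as a payoff computation, where "utility" stands for the hypothetical value money would have to you in each outcome:)

      # Selling the doom contract: receive $50 if no doom; owe $50 if doom,
      # but in the doom world money is worthless to you anyway.
      def ev_of_selling(p_doom):
          utility_if_no_doom = 50   # dollars still worth something
          utility_if_doom = 0       # the $50 owed costs you nothing that matters
          return (1 - p_doom) * utility_if_no_doom + p_doom * utility_if_doom

      for p in (0.01, 0.5, 0.99):
          print(p, ev_of_selling(p))  # positive for every p(doom) < 1,
                                      # so the market price carries no signal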

  • @rtnjo6936 17 days ago +10

    gosh... they were insufferably arrogant; none of the previous guests acted like this. With every clear, logical sentence, they’d pull faces as if they’d heard something absurd. The guy on the right kept smirking whenever his friend in glasses twisted his expression or paused after Liron spoke, like kids trying to bully someone. Honestly, it’s moments like these that make listening to AI podcasts unbearable because of the obnoxious guests.

    • @DoomDebates 17 days ago +4

      @@rtnjo6936 I don’t have a beef with their conduct, they were participating in good faith, but I think I get why their style could give you that impression. They could probably come off as more good-faith if they tried harder to build on the viewpoint the other person is sharing.

    • @benchugg4868 17 days ago +4

      ah I was not trying to come across like this, I promise! I tend to enjoy debating and I like to smile :(

    • @authenticallysuperficial9874 17 days ago +6

      ​@@benchugg4868 I thought you were fine and your face looks normal to me 😊. Vaden was a bit annoying at times

    • @rtnjo6936 17 days ago +7

      ​@@DoomDebates cmon, they were clearly being dismissive here. I’ve watched their other videos with 'more famous' guests, and they were practically wagging their tails just because of the reputation factor. It was obvious they weren’t treating you with the same level of respect, and it really showed.
      Btw, thank you for the video, you're a good one

    • @benchugg4868 17 days ago

      @@authenticallysuperficial9874 ❤

  • @Outplayedqt 17 days ago +5

    Wish I discovered this channel sooner! Just subbed to your Substack, Liron. Keep up the great work in spreading awareness about p(doom). Cheers.

    • @DoomDebates 17 days ago

      Thanks!

    • @dizietz 17 days ago

      Any relation to Ming who wrote the long long ago PvP blogs? (- Diziet)

  • @Sawa137 15 days ago +2

    I think Bayesians are just LARPing when they say they're doing Bayesian updates. We don't know how much the subconscious heuristics in our brains are Bayesian-like. At best it's a guideline to keep in mind, to somewhat affect our thinking.

    • @DoomDebates 15 days ago +1

      @@Sawa137 Ya it’s the ideal epistemology, not the exact algorithm we run

    • @Sawa137 14 days ago

      @DoomDebates No, the ideal is to become Laplace's demon. That's likely not possible, just like knowing your priors, or coming up with all the priors that are accurate enough for a Bayesian calculation to make sense (beyond some basic stuff).

    • @DoomDebates 14 days ago

      @@Sawa137 I don’t get what’s wrong with my claim that Solomonoff Induction is the ideal epistemology, besides the observation that the ideal is (1) uncomputable and (2) fundamentally lacks enough knowledge of initial conditions, i.e. your comment about Laplace’s demon.
      These two caveats don’t stop something from being the ideal, or make Popperian epistemology more ideal.

    • @Sawa137 14 days ago

      @@DoomDebates You didn't address why yours is more ideal than becoming the demon.

    • @DoomDebates 14 days ago

      @@Sawa137 "Becoming the demon" assumes you already have the correct hypothesis of the laws of your universe and just need to parameterize it with data about your universe. If you can do that, great, that's also ideal.

  • @RobotProctor 15 days ago

    Where can I read more about the accuracy of prediction markets?
    Is there open data about manifold markets to download and analyze?

    • @DoomDebates 15 days ago

      Source: manifold.markets/calibration

  • @ForHumanityPodcast 16 days ago +2

    omfg the p-doom song!!!!! amazing content as always Liron, thanks!!!

    • @ManicMindTrick 16 days ago

      Would like to see a cute P doom children's song to really contrast the horror.

  • @halnineooo136 17 days ago +1

    The higher the value at risk, the lower the acceptable probability of loss. For an airplane crashing and killing two hundred people, the accepted crash probability at which people continue stepping into airplanes is around one in ten million.
    It is only greed that overrides the reasonable precaution one might have and makes people go ahead with a technology that, by their own words, has a ten percent chance of destroying all humanity and everything of value, present and future.
    Would anyone step into an airplane if they were told that one out of ten flights ends in a crash? Probably some would, if they were flying to collect a big lottery win.
    Should we accept that some people expecting to win a lottery embark all humanity on a flight that has a ten percent crash probability? A larger than one-in-ten-million probability? A non-zero probability?
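
    (The comparison in raw numbers, taking the comment's figures at face value:)

      # Expected deaths at the two risk levels the comment contrasts.
      population = 8.1e9
      print(1e-7 * 200)         # accepted airline risk: ~2e-5 expected deaths per flight
      print(0.10 * population)  # a 10% p(doom): 8.1e8 expected deaths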

  • @41-Haiku 17 days ago +2

    Hell yeah, early to an epistemology video!

  • @thequestingblade 17 days ago +2

    I don't know the probability of AI doom, but it's way more likely to happen if we put our differences aside and really work at it

    • @ManicMindTrick 16 days ago

      I can think of scenarios where working together more efficiently could have a counterintuitive negative effect, but I'm also playing a bit of devil's advocate.

    • @authenticallysuperficial9874 15 days ago +1

      @@thequestingblade clever 😅

    • @authenticallysuperficial9874 15 days ago

      @@thequestingblade would you then advocate increasing divisiveness and erosion of social cohesion so as to decrease the chance of successfully bringing about ai doom? 😄

  • @kyneticist 17 days ago +3

    So, at the hour mark they passionately explain why it's illogical, unreasonable, and even "illegal" to put a probability on future events for which we don't have sound data to draw upon; their opening, similarly passionate stance on the question of the likelihood of some kind of AI doom being possible within, say, the next 100 years was a definitive, inarguable 0%.

    • @authenticallysuperficial9874 15 days ago

      @kyneticist well 0% is obviously qualitative and not quantitative. No contradiction there.

    • @kyneticist 14 days ago

      @@authenticallysuperficial9874 The point is that the core premise offered here seems to be flawed: that we cannot make any kind of prediction or inference about the near or far future unless there is a statistically significant series of very similar events with well-recorded data to draw from.
      In the context of that segment of the video, they're essentially claiming that we can't predict or expect any degree of doom because we don't have the necessary statistical sampling of other worlds that have experienced doom.
      Said another way: claiming that a given future, mission, or great achievement has no significant element of risk because the end state has never been measured before should be obviously absurd.

    • @authenticallysuperficial9874 12 days ago

      ​@@kyneticist You've misunderstood them entirely then.

    • @authenticallysuperficial9874 12 days ago

      ​@@kyneticist They absolutely do allow for both predictions and inferences.

    • @authenticallysuperficial9874 12 days ago

      ​@kyneticist What's more, even the Bayesian who gives p(doom)=.49 "does not predict or expect doom". That's completely irrelevant.

  • @RobotProctor 15 days ago

    I don't understand why the Popperian can't say that an exact sequence of 10 coin flips would be very hard to explain. I'm a Popperian, I suppose. My first question would be: how would you explain this exact sequence of 10 coin flips?
    If you asked me to bet $1,000, I would only do so if I'm financially OK losing $1,000. If I could, I would make the bet, because I find it very implausible that such an explanation exists given all that I know about coins.

  • @halnineooo136 17 days ago

    To say it differently, the question is not what P(doom) is; the question is what the highest acceptable P(doom) is.
    Is it strictly zero or is it higher than zero? If it is higher than zero, is it not lower than P(airplane crash) = 1/10,000,000?
    We do have a historical record of extinction by contact with a smarter, agentic Homo sapiens subspecies. For the Denisovans and Neanderthals it was 100%.

  • @WilliamKiely 4 days ago

    I listened to one episode of Ben and Vaden's podcast about 3-4 years ago while driving on a road trip and found it extremely frustrating to listen to. I haven't had anywhere near as annoying a podcast listening experience since.
    I had largely forgotten how negative a listening experience that was until I was several minutes into this episode (my second exposure to these guys), when I started to get impatient and annoyed again.
    While it's helpful that Liron was in this conversation, even with Liron they still failed to justify their claim that assigning numbers to degrees of belief is an "illegal move". Liron, I wish you had pressed them harder on this and not let the conversation get into the weeds as much.
    I "only" listened to a little over an hour of the conversation before giving up and coming to the comments. Honestly, I wish I had stopped earlier and just read the comments first, as the episode itself was a waste of time (or at least the part I listened to).
    Ben and Vaden are both clearly very intelligent and articulate, which perhaps makes listening to a long-winded conversation in which they fail to explain their view so frustrating.

  • @dizietz 17 days ago

    Manifold as an election market did track Polymarket etc. on the election, trailing by about 3-5% (not PredictIt, because that market had weird betting limits). That was a good sign for betting markets from my perspective, given how small the volume on it was and how progressive-leaning the core Berkeley rationalist audience that seeded the market was.

  • @yaqubali2947 17 days ago

    In response to Keith Duggar's claims, you might enjoy this technical paper demonstrating that an LLM with chain of thought is Turing complete.
    Paper: On the Representational Capacity of Neural Language Models with Chain-of-Thought Reasoning

  • @blahblahsaurus2458 6 days ago

    Right out the gate, no concern about nuclear war. So... have they never heard about all the close calls?

  • @zhadoomzx 13 days ago

    You don't have to give a precise number... you can also provide error bars. Unless you have no idea what error bars are, which I strongly suspect is the case with these two individuals.

  • @mrbeastly3444 4 days ago

    1:39:47 "but in this scenario doesn't the homeopathy rememedy not work?"
    Woah, careful here. In this scenario the homeopathy does work (sometimes), but not because of the "homeopathy method", but because of the placebo effect. This is why people buy homeopathy remedies, because they (sometimes) work. If "homeopathy remedies" never worked, then no one would buy them (hopefully ;)...
    The "periodic table" basically always works, for predicting what atom will do, as they are quite simple. Where modern medicne" doesn't always "work" (as its complicated), but it does work more often the. homeopathoc remedies/placebos... According to (hopefully trust worthy) double-blind studies at least...

  • @ashikpanigrahi 16 days ago +1

    Excellent debate! I guess people who adhere to Bayesianism don’t quite see what an Explanation is or its significance, and are too focused on predictions.

    • @DoomDebates 15 days ago

      @@ashikpanigrahi I don’t quite see what it is because they wouldn’t just say it’s formalizable as a Turing machine. They said they don’t know what it is, and that knowing what an explanation is would let us build AGI 🤷‍♂️

    • @DoomDebates 15 days ago

      Of course on a common-sense level and a formal Bayesian level I appreciate explanations. A Bayesian hypothesis commonly is a detailed model of the world that explains the world. It’s more than just the prediction it outputs - it’s a dare I say *explanatory* model that outputs lots of dynamic predictions.

  • @jeffmanning6573 16 days ago +1

    Ben was good, Vaden from the very beginning looked like he wanted to nay-say everything and seize any possible gotcha moment he could. Was kind of annoying tbh. Can round 2 just be Liron & Ben talking ASI?

  • @matteoianni9372 17 days ago +2

    This simple and self-evident epistemological foundation refutes everything that they are saying:
    As everyone would agree, the only thing that can be known for certain is the existence of your current conscious state. Everything else is an assumption (including the realness of the contents of the conscious state).
    But not all assumptions are equal. Two assumptions take precedence over all others.
    The first one is the existence of more conscious states beyond the current one. Without them there would be nothing to “know” apart from the current state.
    The second one is the existence of rules that govern the changes between one conscious state and another. If there were no rules, “knowledge” would be impossible, since states would change randomly, making any attempt at “knowing” impossible.
    These two fundamental assumptions are not optional. All philosophers and thinkers, whether they knew it or not, used them to form any type of argument.
    Understanding the computational fabric of everyone’s epistemology, one can easily dismiss Popper’s.

    • @Dennzer1 17 days ago

      certainly

    • @authenticallysuperficial9874 15 days ago

      @matteoianni9372 this leads to dismissing Popper how?

    • @matteoianni9372 14 days ago

      @@authenticallysuperficial9874 the open-endedness of Popper’s refutations contrasts with the computational nature of everyone’s epistemological foundations.

  • @jeremyford7445 16 days ago

    Look guys, don’t get me wrong, I like your stuff here; you’re well-learned, academic, involved, sesquipedalian young guys. I’m a mid-age guy, est. 1978, an armchair guy in the deeper sciences, philosophy, and psychology; we live in a projection of a quantum reality here. One thing I’ve learned in my years is that once humanity discovers something, it gets implemented sooner or later. Anything that can go wrong will go wrong-ish; the implications are undeterminable. We’ve been playing God since we stood up, creating more complex versions of ourselves, and this AGI is a new extension of our consciousness in a way. Figuring out what’s going to happen is like trying to solve complexity theory. One thing’s for sure: things are going to change, and if we have to pull the plug on it all, we will most likely survive. Growing up in the 80’s and 90’s, computer use was minimal and the internet didn’t really exist for most of it. We got along pretty well back then, and honestly some things were actually easier. Humans evolve and create complexities and extremes until it blows up in our faces, then they start it all over again. The only advice I have as an older cat is to prioritize what you really need in life and cut back on your wants (they will just disappoint you), find people you love, and enjoy this short head trip playing you in this crazy life ✌️🤙

  • @aetherllama8398 17 days ago +1

    They were very convincing until you brought up prediction markets. Do prediction markets have high calibration for events with very little relevant historical data? Poor predictions on such events could be diluted by predictions of events that are easy to calibrate.
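
    (That dilution worry is checkable whenever per-question categories are available: compute calibration within each bucket separately rather than pooled. A sketch with made-up records:)

      from collections import defaultdict

      def calibration_by_category(records):
          """records: (category, forecast_prob, outcome) triples. Returns
          per-category (mean forecast, observed frequency, n), so a
          well-calibrated pool can't hide a badly calibrated subset."""
          buckets = defaultdict(list)
          for cat, p, y in records:
              buckets[cat].append((p, y))
          return {cat: (sum(p for p, _ in rows) / len(rows),
                        sum(y for _, y in rows) / len(rows),
                        len(rows))
                  for cat, rows in buckets.items()}

      data = [("sports", 0.7, 1), ("sports", 0.3, 0),          # calibrated
              ("novel-tech", 0.9, 0), ("novel-tech", 0.8, 0)]  # overconfident
      print(calibration_by_category(data))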

    • @DoomDebates 17 days ago +1

      @@aetherllama8398 pretty sure they do

    • @aetherllama8398 17 days ago

      @@DoomDebates If true then that convinces me of the advantage of Bayesian reasoning.

    • @DoomDebates 17 days ago +1

      @@aetherllama8398 cool ya the data I saw was across all markets, and most markets are about novel future events where there’s no straightforward way to just extrapolate a statistic

    • @ErikHaugen 6 days ago

      I don't know; the entire conversation seemed pretty academic (def., from Wiktionary: "Having little practical use or value"), so I don't think any practical observation along the lines of "prediction markets work or don't" can be used in anyone's favor. Isn't the bottom line that talking about probability in terms of p(doom) or something is kinda different from things we have "rigorous" data on, like p(earthquake this year)? So how/whether you incorporate those kinds of probabilities together when thinking about your hypothesis is sort of a matter of taste. It's not like Vaden actually avoided talking about probabilities of future events in any meaningful sense ("knife edge"); he was just weirdly careful about his language.
      Consider how Vaden was talking about the two kinds of probabilities early on, or however he put it: there's "hey, from this analysis I think the chance of nuclear war next year is 1%" vs. "we have science/data, and from that we get a 1% chance of this volcano erupting in the next 100 years". Under the hood they're very similar statements about our lack of knowledge: the volcano *is* going to erupt at a certain time; that 1% is a description of humanity, not the volcano.
      Despite it just being probability, I don't think Vaden has any problem applying Bayes' theorem to things like coin flips or medical tests where we have rigor/data/etc., and his comment on prediction markets, "why not just hire a few mechanics to get their opinions", seemed pretty reasonable.

    • @ErikHaugen 6 days ago

      I do not mean that as a dis on the conversation by any means! Loved it, can't wait for part 2.

  • @letMeSayThatInIrish 1 day ago

    Around 1:54:30 Vaden appeals to common sense. There are no magic coins, that's not how the world works. But is he not begging the question on a large scale here? Common sense varies between cultures and over the ages. If the question is about epistemology and the answer relies on common sense, what would he come to believe if he lived five hundred years ago? Disease is caused by witches casting curses. That's common sense. Is it too much to ask for an epistemology that might guide you towards truth independent of time and place?
    Edit: A little later Liron also wants a slice of the common sense cake. Seems my search for a universal epistemology is not over with the Bayesian approach 😄

  • @tommiz.__1 16 days ago +3

    Vaden is extremely condescending. Unbelievable.

  • @moepitts13 16 days ago +3

    Found the two guests to be completely insufferable. No clue how they have any semblance of an audience.

  • @authenticallysuperficial9874 17 days ago +1

    Their arguments were often very poor or totally irrelevant. But I think their claim is probably correct.

    • @authenticallysuperficial9874 17 days ago +7

      Except on the point around 37:50 where V claims that he takes only his top-most-confident mutually exclusive alternative and acts in full confidence that it is true. I think that's a totally crazy way to approach the world and doubt very much that he does in fact engage in such erratic behaviour.

    • @authenticallysuperficial9874 17 days ago

      I guess the claim of theirs that I feel is correct is the negative case: that Bayesian epistemology is not correct.

    • @DoomDebates 17 days ago +2

      What’s your beef with Bayesian epistemology lol

    • @authenticallysuperficial9874 17 days ago +1

      ​@DoomDebates I do not subscribe to the many-worlds hypothesis. There is thus no domain over which it is coherent to claim a frequency. There is no distribution to speak of. When the answer to some question appears underdetermined, that doesn't mean there is a distribution over the possibilities.

    • @DoomDebates 17 days ago +2

      @@authenticallysuperficial9874 oh, you don’t have to believe many worlds (quantum or otherwise) to believe a cognitive engine is only powerful to the degree it has the structure of Bayesian epistemology

  • @angloland4539 16 days ago +1

    ❤️☺️

  • @elitevet4266 16 days ago +1

    Vaden is squirming 12 minutes in...

  • @41-Haiku 16 days ago

    You completely nailed them on prediction markets. Prediction markets consistently outperform domain experts at making predictions, primarily because making predictions is a skillset unto itself.

  • @authenticallysuperficial9874 17 days ago +1

    The content argument is interesting. It's definitely not quite right, as stated, but maybe it's close to something.

    • @DoomDebates 17 days ago +6

      @@authenticallysuperficial9874 It’s close to the Bayesian concept of formalizing Occam’s razor. They admit they don’t have a formalization of their epistemology and from my perspective they kind of just cheat off mine 😂

    • @wardm4 16 days ago

      @@DoomDebates Honestly, my takeaway was that they are Popperian when forming hypotheses, but as soon as they need to know whether a hypothesis is true, they "do statistics" and become Bayesians. In my mind, that makes you a subscriber to Bayesian epistemology who just uses Popperian language to describe the idea-generation process.

    • @DoomDebates 16 days ago

      @ everyone has to cheat with the math of Bayesian epistemology whenever they want to push the limits of accuracy

  • @eeriepicnic 16 days ago

    Part 2 Part 2 Part 2 Part 2 Part 2!!

  • @tommiz.__1 16 days ago

    Research lab or Reading group. Pick one.

  • @therobotocracy 17 days ago +1

    Dude is reading Nabokov, good on him.

  • @ericbrown8855 15 days ago

    Vaden needs to read up on heart health. I would urge close attention to stats on stress and anger.

  • @richardmeadows266 17 days ago +4

    the blond guy dumb but kinda cute tho??

    • @captain_crunk 17 days ago

      Keep it in your pants, Richard. Gawd.

    • @benchugg4868 17 days ago +3

      his iq is being slowly dragged down by two kiwis he's always talking to

    • @julesjacobs1 17 days ago

      He needs a different strategy for his channel. The eye candy alone isn't going to save it.

    • @ManicMindTrick 17 days ago +2

      He is obviously not dumb; you can't luck your way into degrees in math and computer science.
      That's not where the issue is.

    • @human_shaped
      @human_shaped 17 дней назад +2

      Harsh. He seems extremely switched on to me, just his point of view is rather different and difficult for me to come around to.

  • @therobotocracy 17 days ago +1

    Once again the guests explain everything better. They are totally right on the matter of predicting some number for all of life ending. It’s ridiculous.

    • @41-Haiku 16 days ago

      Nuclear engineers know how to build a nuclear device so powerful that it is capable of extincting 99% of life on Earth. One was nearly built due to the work of Edward Teller.
      Imagine that such a device was actually constructed, and thousands of nuclear engineers independently verified that it exists. The scientific community pores over the details of the engineering and science involved and runs countless detailed models of the blast and its aftereffects, until they produce a scientifically sound probability estimate for global destruction: 97.3%, ±1.2%.
      Then someone pushes the button.
      What is the probability that humanity will be destroyed?
      "I have no idea" is an unjustifiable answer in this context.
      Or what if we discover an asteroid headed for Earth, which is very fast and very massive, and we just happened to miss catching it until it was too late. Earth's greatest minds pore over the options and ultimately conclude that we simply do not possess any technology that can divert the asteroid in time, and the Earth's crust will be rendered entirely molten by the impact. Our only hope is if we make a breakthrough physics discovery within the next year that happens to give us more options.
      What is the probability that humanity will be destroyed?
      Well, that's tricky, but it's certainly a very, very large probability. It is certainly much more correct to say 99.99% than it is to say 50%.

    • @41-Haiku 16 days ago

      On November 4th, what was the probability that Trump would get elected? Was it 50%, like many people said? In hindsight, was it actually more like 70%? If I said it was 1%, I would be rightfully mocked. So these numbers are clearly meaningful.
      If we can use probabilities to talk about elections, and the speed of spread of diseases, and whether you will get in a car accident tomorrow, we can also use probabilities to talk about whether humanity will suffer a global catastrophe, even up to human extinction.
      Human extinction is not a magical outcome that must be excluded from all consideration. There is without a doubt a date on the calendar that marks when there will be no humans. Maybe that is in millions of years, and maybe it is in 2 years. To gain any information about this at all, we have to actually look at the world and understand what kinds of things could extinct us, and understand how and why that might happen. If there is no reasonable story that produces that outcome, then we needn't worry. If there is a highly plausible story, then we need to worry. If there are many plausible stories that tend to converge at that outcome, then we need to be extremely worried.
      Dismissing the whole idea of probability of human extinction is exactly as silly as assuming that you are personally immortal, or that car crash statistics are somehow completely devoid of meaning.

  • @anatolwegner9096 16 days ago

    P(doom)=P(unicorns)

    • @41-Haiku 16 days ago

      What plot armor do we have that the Neanderthals did not?

    • @authenticallysuperficial9874 15 days ago

      @@anatolwegner9096 because our ai overlords will create unicorns before ending the world ;)

    • @anatolwegner9096 15 days ago

      @@authenticallysuperficial9874 or they might decide to maximize the number of unicorns if you are into that kind of stuff🤦‍♂

    • @anatolwegner9096 15 days ago

      @@41-Haiku Objectively, we have about an equal chance of creating superintelligent AGI as the Neanderthals would have...

  • @mrbeastly3444 4 days ago +1

    1:06:41 "5 or 10 range questions about belief... (are) all great, I'm so on board with that"
    Saying 1) "I have a 6 out of 10 belief in X happening in 3 years" is _exactly_ _the_ _same_ as saying 2) "I think there's a 60% chance of X happening in 3 years". Arguing that the former is great but the latter is bad makes you sound... like an "unhinged mathematics semantics cop".
    To normal humans, those statements are exactly the same...