2:06:12 Theoretically, if you put off kicking the kitten, then the only decrease in the fraction of [worlds weighted by their probability of being the real world] comes from the worlds where you put it off too long and end up dead before kicking a cat, right? If you don’t kick a cat now, your cat-kicking is just delayed in those worlds.
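To make that concrete, here is a minimal sketch under an assumed toy model (a constant annual death hazard with exponential survival; the hazard value is made up): the fraction of probability-weighted worlds in which the delayed act still happens is just the chance of surviving the delay.

```python
import math

def fraction_of_worlds_where_act_happens(delay_years: float, hazard: float = 0.01) -> float:
    """P(survive the delay) under an assumed exponential survival model
    with a constant annual death hazard `hazard` (a made-up number)."""
    return math.exp(-hazard * delay_years)

# Delaying by one year with a 1% annual hazard loses only ~1% of the
# probability-weighted worlds; in all the rest, the act is merely postponed.
print(fraction_of_worlds_where_act_happens(1.0))
```

Under this toy model the cost of delay is exactly the survival probability you give up, which matches the claim that the only lost worlds are the ones where you die first.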
Would it encapsulate the set of all possible worldstates to say that,
1. in that branching model, each branch has some probability that a conscious being goes down that branch (that probability depending on what the conscious individual observes), and
2. the branching model works if you allow different branches to interact? (This technically would mean they aren’t separate universes, but it would be convenient to treat them as two separate universes that can interact.)
Can that description be used without manipulation or loss of information?
For any given set of options from which a moral agent is choosing something, is there ever a scenario, given our known physics, where the optimal choice varies depending on which interpretation of quantum mechanics is true, assuming, say, utilitarianism? (That is, if you are a utilitarian, is it at all practical to find out which interpretation of quantum mechanics is correct?)
2:05:52 If everything is worth half as many QALYs, that has the same effect on your decisions as switching from QALYs to Quality-Adjusted-Life-2-Years. (This applies to scaling by any positive real number, not just 1/2.) (For all practical purposes, it applies to any positive hyperreal too, since if things super-matter, then they matter, just more. And if things don’t matter, you might as well pretend they do.)
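The scaling point above can be sketched in a few lines: rescaling every option's QALY value by the same positive constant never changes which option ranks best, so the choice of unit (QALYs vs. "2-year QALYs") is decision-irrelevant. The option names and QALY figures are made up for illustration.

```python
def best_option(values: dict[str, float]) -> str:
    """Return the option with the highest value; positive rescaling
    of all values leaves this choice unchanged."""
    return max(values, key=values.get)

options = {"fund_clinic": 120.0, "fund_research": 95.0}  # hypothetical QALY estimates
halved = {name: v * 0.5 for name, v in options.items()}  # same options in 2-year units

print(best_option(options) == best_option(halved))  # True
```

Because `max` only compares values against each other, any order-preserving transformation (here, multiplying by a positive constant) yields the same argmax.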
If someone can explain it well, I would be super curious what Google’s Willow result implies about quantum interpretations.
Sidenote: “observer” reminds me a lot of relativity and reference frames, namely how information is not lost in black holes.
Thanks! As a quick bit of feedback, I find a decent bit of value in the philosophy-based episodes.