The Problem of The Priors (Bayesian Epistemology)

  • Published: 20 Oct 2024

Comments • 48

  • @robertwilsoniii2048
    @robertwilsoniii2048 6 years ago +1

    Best video of the series. Thank you for not being like most cognitive science ‘zombies’, who seem to be unaware of the problem of the priors (and of the problems with certainty in mathematics).
    I really enjoyed the series and feel I learned nearly as much as I would in a course dedicated to Bayesian Epistemology at my university. My university is holding a course on this topic next term, which I have decided is not worth my time, given that I feel as though I've learned everything I want to know about the topic from your series. It makes my elective course obsolete and a waste of time! So if one of your goals with RUclips is to replace traditional education, or at least to provide broader access to high-quality education to all people regardless of their academic ability or ability to pay tuition - well, congrats! At least by University of California standards you are doing just that.
    The only trouble is I don't get recognized for learning the material, since it won't be on my transcripts. ¯\_(ツ)_/¯ Oh well. I stopped caring about my transcripts long ago, given that the internet is thriving with educational resources. The only problem is that most everyone else doesn't see it that way, at least not yet. I think RUclips is undermining formal education. And that's good, because it means that bureaucracy is being challenged; and there's little else I hate more than bureaucrats.

    • @CarneadesOfCyrene
      @CarneadesOfCyrene  6 years ago +2

      +Robert Wilson III Awesome! I'm glad to hear that the series was helpful! And yes, one of my goals is to make philosophy available in places outside of the classroom. I agree that many do not realize that the internet can often provide the education of a traditional university, and the capacity of the internet to do this is growing by the day. Unfortunately, however, university degrees are as much a symbol of status and privilege as anything else: employers can claim a degree represents a set of skills, even though those skills can be acquired elsewhere and someone without such a degree would be just as qualified to perform a particular job. I would argue that RUclips is allowing many people without the resources to access traditional education the ability to educate themselves, even if that does not translate into a stronger resume. Thanks for watching!

  • @hoagie911
    @hoagie911 9 years ago

    Good work. I have one objection which I hope is useful. At several points in this video you referred to P(e&h), telling us that by changing this prior probability, we can drastically affect our posterior probability. The two examples you used were whether seeing a white swan confirmed the hypothesis that all swans are white (and by how much), and how by engineering our priors we could become very confident God exists by seeing a bird. To exemplify, I think it would be helpful to express our posterior probabilities in a different form (the '|' means "given"): P(h|e) = P(e|h)P(h) / P(e). We can expand P(e) over any partitioning set of hypotheses, but the most common choice for epistemology is P(e) = P(e|h)P(h) + P(e|~h)P(~h).
    Now, which priors are we allowed to play with? We are certainly allowed to play with P(h), and all your arguments about that still go through. Can we play with P(e) and P(e|h)? Note firstly that playing with P(e) is equivalent to playing with P(e|h) and P(e|~h). So can we play with P(e|h) and P(e|~h)? In many statistical problems, the answer is no. P(e|h) and P(e|~h) express the likelihood of e given h and ~h respectively, and these are often determined by the workings of some random system. However, this undermines Bayesianism as an epistemological position, as we have to come to our beliefs about P(e|h) and P(e|~h) outside of Bayesianism (e.g. in Frequentism), so Bayesianism cannot be seen as fundamental.
    I believe this point highlights that Bayesianism needs to rely on other epistemological systems in order to work. Two important questions remain: can a non-fundamental account (i.e. an account that relies on other epistemological systems) of Bayesianism get around Bayesian Epistemology's problems? And if such an account does succeed in answering the problems, can such an account exclusively use Bayesianism to successfully explain all rational inductive reasoning (excluding reasoning relevant to setting up Bayesianism), or are there some types of rational inductive reasoning which simply cannot be explained by Bayesianism, even if we ignore its foundational problems? One such mode of inductive reasoning may well be abduction, given Bayesians often have little to say about what makes a good explanation.
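
    For concreteness, here is a minimal sketch of the expansion above in Python; the numbers are illustrative stand-ins, not values from the video or the comment:

    ```python
    # Posterior via Bayes' theorem, expanding P(e) over the partition {h, ~h}:
    # P(h|e) = P(e|h)P(h) / [P(e|h)P(h) + P(e|~h)P(~h)]
    def posterior(p_h, p_e_given_h, p_e_given_not_h):
        p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability
        return p_e_given_h * p_h / p_e

    # Changing the likelihoods (the "priors" at issue) swings the posterior:
    print(posterior(p_h=0.5, p_e_given_h=0.9, p_e_given_not_h=0.2))  # ~0.818
    print(posterior(p_h=0.5, p_e_given_h=0.2, p_e_given_not_h=0.9))  # ~0.182
    ```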

  • @Gnomefro
    @Gnomefro 9 years ago +5

    The argument about swans and uniformity of nature/induction seems very strange to me. It's pretty clear that a belief in the uniformity of nature and the applicability of induction is probabilistically justified by the evidence of various types of systematic events. If I see a die tossed 100 times and it always produces a 3, then an appropriate Bayesian probe into such a reality could be to establish a hypothesis that "Dice tosses result in 3s with some probability". Had the results been more varied, the hypothesis would have to be more varied. You don't need anything more than the data, plus extremely minimal rules for things like the max and min probabilities and the confidence you'll accept, to establish such hypotheses.
    While this doesn't get you out of the problems with applying induction and the possibility of guessing wrong, even when you have good evidence for some hypothesis, the Bayesian approach is still vastly superior to most other approaches because it can quantify the uncertainty, given what you know.
    In any case, the "all swans are white" example is a pretty bad one, as the Bayesian hypothesis would be stated as "Swans are white with max probability and with max confidence", which would serve as the correct probabilistic basis for guessing the color of the next swan you come across, *not* for making the dogmatic statement that "all swans are white". This really has nothing to do with the uniformity of swan color in reality either, but with what's indicated by the data, as the probability will be extracted from your dataset. There's no need to believe in the uniformity of nature beyond what's indicated by the data for a Bayesian - in fact, the principle doesn't really matter at all. The Bayesian would not be surprised if he guessed wrong either; he'd just add the black swan to his dataset, recompute probabilities, and move on with his adjusted beliefs rather than suffer a full-on epistemic breakdown.
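
    One way to make the dice example concrete is a Beta-Bernoulli model; the following sketch assumes a uniform prior over the rate of 3s (both the model and the numbers are illustrative, not the commenter's):

    ```python
    # Treat "toss shows a 3" as a Bernoulli event with unknown rate p,
    # prior Beta(1, 1) (uniform). Conjugate updating is just counting.
    from fractions import Fraction

    alpha, beta = 1, 1        # uniform prior over the rate of 3s
    threes, others = 100, 0   # observed: 100 tosses, all 3s
    alpha, beta = alpha + threes, beta + others
    print(Fraction(alpha, alpha + beta))  # 101/102: next toss is very likely a 3
    ```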

    • @wardm4
      @wardm4 9 years ago +2

      This is exactly right. I'd go so far as to say there is no problem of induction for a Bayesian, because the problem of induction only occurs for universal quantification. But Bayesians deal in probability: my degree of certainty that the next swan I see will be white is .99 based on my current data. The problem of induction will only arise if I somehow find my final calculation to be 1. That would never be possible unless I started with a prior of 1, which I'd say is a faulty prior.

    • @CarneadesOfCyrene
      @CarneadesOfCyrene  9 years ago +4

      Gnomefro "It's pretty clear that a belief in the uniformity of nature and the applicability of induction is probabilistically justified by the evidence of various types of systematic events." The problem is that in order for Bayesian epistemology to justify this claim, you have to set your priors in such a way that they already assume that the uniformity of nature is the case.
      "If I see a die tossed 100 times and it always produces a 3, then an appropriate Bayesian probe into such a reality could be to establish a hypothesis that "Dice tosses result in 3s with some probability"" And yet, the only reason that you define this as an appropriate Bayesian response is that you presuppose that your priors are set in such a way as to assume the uniformity of nature (UoN). Imagine that you did not know whether the UoN was the case; then getting a bunch of 3s would not make you want to increase your belief in future 3s at all.
      "In any case, the "all swans are white" example is a pretty bad one, as the Bayesian hypothesis would be stated as "Swans are white with max probability and with max confidence""
      Incorrect. There are no restrictions on Bayesian hypotheses. If you think that that would be the hypothesis, that is simply because you have constructed your priors in such a way as to make that hypothesis, rather than "all swans are white", more likely. Bayesian Epistemology does not impose the restrictions that you imagine it does, because those would require assumptions about prior probabilities that it cannot furnish.
      "This really has nothing to do with the uniformity of swan color in reality either, but with what's indicated by the data, as the probability will be extracted from your dataset. There's no need to believe in the uniformity of nature beyond what's indicated by the data for a Bayesian - in fact, the principle doesn't really matter at all" Your conception of Bayesian Epistemology is completely incorrect, and I encourage you to watch the videos again. It sounds like you are referring to Bayesian probability theory, which is something completely different and cannot speak to truth or knowledge at all, merely abstract mathematical concepts. The point is that I can create a set of prior probabilities such that a Bayesian who observed only white swans would predict the next one to be black. Nothing in Bayesian Epistemology says that this is an incorrect set of probabilities. No Bayesian can speak out against it, since prior probabilities are immune to criticism. In fact, we can construct one such that the Bayesian would then see a white swan and have that, validly by Bayesian Epistemology, confirm his belief that the next swan will be black. Here's how:
      Hypothesis: The next swan I see will be black (.6 degree of confidence)
      Evidence: I see a white swan (.4 degree of confidence)
      P(H&E)=.39
      I see a white swan therefore I use Bayes' theorem to readjust my beliefs. My final degree of belief in H will change to .975 since .39/.4=.975. Therefore, my seeing a white swan has confirmed that the next swan I see will be black.
      Now you might be concerned about the prior probabilities that I used, but that is exactly the point. With no restriction on prior probabilities, there is nothing to prevent such a case from occurring. The only reason that you would create prior probabilities that make the future predictions in some way depend on the past is if you already believe in some version of the UoN.
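
      The update above takes only a few lines to reproduce (same numbers as in the reply):

      ```python
      # Perverse-priors example: with these joint priors, conditionalizing on
      # "I see a white swan" raises confidence that the next swan will be black.
      p_h = 0.6         # prior: "the next swan I see will be black"
      p_e = 0.4         # prior: "I see a white swan"
      p_h_and_e = 0.39  # joint prior, set by fiat; nothing in the theory forbids it
      print(p_h_and_e / p_e)  # 0.975 > 0.6, so the evidence "confirms" H
      ```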

    • @CarneadesOfCyrene
      @CarneadesOfCyrene  9 years ago +4

      wardm4 Incorrect. The problem of induction applies to Bayesian epistemology in that when you construct your priors, you rely on the uniformity of nature (UoN) to set them up. When you decide whether my seeing a swan should have any influence on the statement "the next swan I see will be white", this comes into play. If you decide that it should confirm it, this is because you put faith in the UoN. If you decide it should disconfirm it, that is because you deny the UoN. If you decide that it should have no effect, then you are not taking a stance on the UoN. The point is, the way that you set your priors will determine the future probabilities that you have. In fact, if you set your probabilities such that they will disconfirm the above hypothesis, the UoN will be disconfirmed as well.
      It's not the same problem of induction, as you correctly note that it need not deal with certain universals. But the problem is still there, it just appears in choosing what prior probabilities to assign.

    • @hoboghost
      @hoboghost 4 years ago

      @CarneadesOfCyrene Can data collection before a hypothesis lead to emergent probabilities we can count as knowledge? Must there be a prior belief?

  • @ThoseMadFoxes4330
    @ThoseMadFoxes4330 1 year ago

    Ah! 3:32! How do you get the “better than a fifty-fifty chance” part? Please and thank you!

  • @mesplin3
    @mesplin3 1 year ago

    I'm confused. What are the problems with the philosophy of science and classical epistemology?
    My understanding is that inductive reasoning is designed to provide a pattern of reasoning for degrees of belief as opposed to deductive reasoning which provides patterns of reasoning for certainties of belief.

  • @The1SlayerChannel
    @The1SlayerChannel 9 years ago +1

    You pointed out that a Bayesian cannot use Bayesian epistemology to show that the principles on which it is founded are true. But isn't this the case for any first-order logic system? I believe Gödel's incompleteness theorem shows that this is the case, but I don't know the theorem fully, so correct me if I'm wrong.

  • @numbo655
    @numbo655 5 years ago

    Why do we have to have certainty in the criterion we use? We could imagine coming up with a criterion and finding it useful, while allowing for the chance that the criterion might be wrong.

  • @aeellwood
    @aeellwood 17 days ago

    There are easier ways of saying "I don't really get math" than posting a video filled with technical errors.

  • @The1SlayerChannel
    @The1SlayerChannel 9 years ago

    Taking the dragon-under-the-hill example and the ad populum fallacy it creates: I think that the problem here is not isolated to Bayesian epistemology. A frequentist may also commit this fallacy by saying "Probability that everyone believes there is a dragon given there is a dragon = 0.9" and "Probability that everyone believes there is a dragon given there is no dragon = 0.01". Under these conditions we would reject the hypothesis that there is no dragon, and the null hypothesis would be that there is a dragon.
    In both the Bayesian and frequentist cases, the problem was thinking that people's beliefs are evidence that could be used to get insight into the dragon's existence.
    I realise that there are other epistemological paradigms which could overcome this, but for the statisticians among us these two are the ones of interest.
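
    A sketch of the frequentist version with the comment's numbers (reading them as a likelihood ratio is my assumption):

    ```python
    # Treating popular belief as data about dragons:
    p_belief_given_dragon = 0.9      # P(everyone believes | dragon)
    p_belief_given_no_dragon = 0.01  # P(everyone believes | no dragon)
    # The ratio makes belief look like overwhelming evidence for the dragon;
    # the fallacy lives in the likelihoods, not in the updating machinery.
    print(p_belief_given_dragon / p_belief_given_no_dragon)  # 90.0
    ```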

  • @dlmcnamara
    @dlmcnamara 9 years ago

    @26:25 The argument is basically "if you have incorrect beliefs, you'll draw incorrect conclusions from new observations"; so Bayesian epistemology is invalid because people can make mistakes? Also note that this description mixes up the term prior probability with conditional probability -- usually, you'd model the conditional probability P(seeing a bird | God exists).

    • @CarneadesOfCyrene
      @CarneadesOfCyrene  9 years ago

      +David McNamara If I remember the argument correctly, I am not claiming that Bayesian Epistemology is invalid; I am claiming that it is irrational, because there is no rational way to determine prior probabilities. Furthermore, I am uncertain as to what you mean by mixing up prior and conditional probabilities. The probability you assign to the conjunction of two statements (e.g. God exists and I see a bird today) /is/ a prior probability. In fact you cannot determine your conditional probabilities without it.

  • @MeltedCheesefondueGruyere
    @MeltedCheesefondueGruyere 8 years ago

    The issue of P(~H) = 1 - P(H) is more an issue of logical omniscience than of priors.

  • @creepypuppetspresents5605
    @creepypuppetspresents5605 2 years ago

    Some more objections: let's suppose there is a valid means of assigning priors. How do we adjudicate which one? Suppose a scientist explicitly calculates the prior of a result of an experiment before conducting the experiment, say using regression from past, similar experiments. Would it be irrational to disagree with their P(H)? If yes, doesn't that just collapse Bayesian epistemology into Bayesian statistics? If no, why should we take Bayesian epistemology as better than statistics? Also, Bayesian Epistemology's solution to the raven paradox becomes hilarious for any claims about the properties of hydrogen.
    I'm a statistical paleontologist: I study questions like "What is the probability this deciduous forest was a boreal forest 12k years ago, given that 80% of the fossil pollen from that time was spruce?" We scientists use method 2, though we would use a more explicit calculation like regression (forest type ~ percent spruce pollen). I've had so many arguments with "Bayesian mythicists", Richard Carrier fans, about their laughable misunderstanding of statistics.

  • @numbo655
    @numbo655 5 years ago

    Could you make a video on the result that the credence functions of two agents with different priors will become arbitrarily close after being exposed to a large amount of the same evidence?
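
    The result being asked about is often called merging of opinions, or the washing out of the priors; a toy simulation under assumed Beta priors and made-up data:

    ```python
    # Two agents with opposed Beta priors over a coin's bias see the same flips.
    # With equal prior weights (10 each), their posterior means differ by
    # |a1 - a2| / (10 + n), so the gap shrinks roughly like 1/n.
    a1, b1 = 1, 9   # agent 1: coin is probably tails-heavy
    a2, b2 = 9, 1   # agent 2: coin is probably heads-heavy
    for n, heads in [(10, 7), (100, 70), (1000, 700)]:  # shared evidence
        m1 = (a1 + heads) / (a1 + b1 + n)
        m2 = (a2 + heads) / (a2 + b2 + n)
        print(n, round(abs(m1 - m2), 4))  # 0.4, 0.0727, 0.0079
    ```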

  • @ThoseMadFoxes4330
    @ThoseMadFoxes4330 1 year ago

    *Rewatching, trying to figure it out* The truth table seems to come out almost definitely untrue as well, unless I'm reading it wrong (possible, I'm new).

  • @numbo655
    @numbo655 5 years ago

    How can a fallacy confirm a hypothesis?

  • @gutzimmumdo4910
    @gutzimmumdo4910 2 years ago +1

    4:37 amongus?

  • @monkeymule
    @monkeymule 9 years ago

    I studied philosophy until it became very clear to me that, for a functional perspective to operate in the world, I was going to need to take at least some position somewhere on faith.
    Now I study computer science.
    My view is that Bayesian epistemology doesn't really solve the problems, but maybe it does a good job of collecting them into one place.
    For example, perhaps you could make some priors by assuming that repeated observations confirm whatever the simplest law is that would explain all of them.
    Say you wanted to encode the idea of green or grue. What would take less information to store? They are both abstract forms, but they have measurable differences: grue needs two colors and a point of inflection, but green just needs one color. Green wins (see the sketch after this comment).
    Take it on faith that the universe will be simple and comprehensible wherever it is given the opportunity to be, and then you can start making some laws with induction from there.
    If the objective is to take as little on faith as possible, that seems like a good place to set that down as an axiom, and then create a structure of knowledge based on that axiom.
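
    A toy rendering of the description-length comparison above (the cost function is a crude stand-in for a real complexity measure, and the switch time is hypothetical):

    ```python
    # "Green" stores one color; "grue" stores two colors plus an inflection point.
    green = {"colors": ["green"]}
    grue = {"colors": ["green", "blue"], "switch": 1}  # hypothetical switch time

    def description_cost(predicate):
        # Crude proxy for information content: one unit per stored parameter.
        return len(predicate["colors"]) + ("switch" in predicate)

    print(description_cost(green), description_cost(grue))  # 1 3 -> green wins
    ```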

    • @CarneadesOfCyrene
      @CarneadesOfCyrene  9 years ago

      Nathan E - Aeium But taking as little on faith as possible is not our objective, or at least you cannot prove it to be, nor can you prove efficiency to be truth-related. You can just be irrational (in the sense that no rational decision procedure could allow you to arrive at that conclusion) and say that you like efficient things. Additionally, something being efficient is different from it being true. Solipsism is very efficient, and it takes relatively little on faith, yet surely it is not what you want to arrive at (ruclips.net/video/NoXxPojdyU4/видео.html). And if our objective is to take as little on faith as possible, then the skeptic still wins, since taking nothing on faith is still more efficient than taking something on faith.

    • @monkeymule
      @monkeymule 9 years ago

      Carneades.org Solipsism is very efficient at explaining a large number of possible universes, but I don't really think it does a great job at explaining/predicting stuff that happens in this one.
      If everything was just imagined by one person, it seems all the rules that dictate the world around them would be subject to change, yet there are many things about the world that people do not have the capacity to change.
      I think to apply to this world, Solipsism would need to explain why some aspects of perception seem to be permanent and some do not.
      I suppose what it gets down to is that it's not necessarily desirable to explain more phenomena. People just want explanations for the phenomena that they have actually experienced, and they want these explanations to be simple, so that they can apply this understanding to their benefit.
      So, in desiring efficient explanations people don't necessarily want the best abstraction to explanation ratio, they want efficiency in the practical application of the abstraction.
      People want predictive efficiency, not just explanatory efficiency.
      Solipsism may be the ultimate in explanatory efficiency, but it has no predictive power at all. In that way it is possibly the least efficient view because all possible outcomes are predicted in equal measure.

    • @CarneadesOfCyrene
      @CarneadesOfCyrene  9 years ago

      Nathan E - Aeium Simply because something is in your head does not mean you can change it. Few philosophers are advocates of doxastic voluntarism (the ability to change your beliefs through sheer willpower), so even in the most materialistic picture it does not seem that we have full control of our mental faculties. So it does not follow that, just because everything is in our head, it is under our control.

  • @ThoseMadFoxes4330
    @ThoseMadFoxes4330 1 year ago

    I do agree though that Bayesian stuff can’t tell us if we were correct in our first beliefs

  • @Elgeneralsimo69
    @Elgeneralsimo69 8 years ago

    I can't escape the feeling that the same objections here can be used to object to Modus Ponens (MP): "p>q, assume p, therefore q" is how I would state it such that "assume p" is stated clearly. Otherwise, it is obvious that p>q cannot be used to say *anything* about p.
    Now, in Bayesian Epistemology (BE) we have the same situation as with MP: "Pe(H), assume e, therefore P(H)" is how I would state it, for similar reasons. Clearly Pe(H) cannot say anything about e, and thus its very _application_ to the problem is an assumption as far as Pe(H) is concerned.
    So if the problem of priors is not a problem with MP, I don't see why it should be a problem with BE.
    12:04
    Quick comment; I haven't seen the induction or Uniformity of Nature (UoN) videos, so forgive me if this is covered therein. However, it seems to me that by the time you have learned what a swan is and the ability or necessity of differentiating, you will have accumulated *a lot* of evidence for (or against!) the UoN. And given that memory is required to make this determination, I would even go so far as to say that any entity with memory counts on? already has? accumulates? naturally acquires? evidence in favor *OF* the UoN, such that any disruption in the UoN is perceived by any entity with memory as "unnatural" or "irrational".
    It is a bootstrap, but not one that is independent or incognizant of every other piece of evidence and bootstrap up until the time you needed to resolve this proposition. A baby, I would argue, would be at its most anti-Bayesian state (will believe anything, beliefs are pliable) since it has the least amount of evidence to draw from, while the elderly are the most Bayesian (bound by evidence; beliefs not pliable) given the large accumulation of evidence which shapes their beliefs.
    Also, at 14:40, and again apologizing for not having seen those vids yet, it seems that you use the ∀ quantifier almost exclusively instead of allowing for the ∃ quantifier. Thus evidence of one green emerald means that ∃(green+emeralds)=T, trivially, but at the same time, having proven an ∃, we can now bootstrap our way up to ∀ by induction; in effect ∃⊑∀. In this case, to bootstrap grue by the same notion (which, not having seen the vids, I assume is fictional), I would first need evidence that grue existed to justify the ∃ and then bootstrap up to the notion as you describe, which is a ∀.
    As such I disagree with your conclusion:
    I believe finding another green emerald, _having already found one_, having already proven ∃(emerald+green)=T, *IS* more rational to accept...
    ...than finding a green emerald and holding any belief that it will be grue. In fact, it's hard to see how you could even justify your premise as stated, "finding an emerald and all emeralds are grue", given that you already found a green emerald, which contradicts "all emeralds are grue".
    18:00
    Here again, I feel like you could use the exact argument, with zero modification, to say that using MP as a criterion to establish prior probabilities is a problem. As such, if the problem of the prior were valid (which as I've stated above I don't believe in or, at the least, am not convinced by the evidence), then it would be a problem with MP and if we eliminate BE on this basis (or be skeptical to the point of putting back on the shelves) then we should shelve MP as well and that seem eminently irrational.
    19:00
    So, in quantum mechanics we deal with (P(H)+P(~H))=T as a state of superposition, which clearly breaks classical logic yet has proven extremely useful. Even the infinite number of ~H that you describe can be accounted for by, for example, the Feynman path integral and its more esoteric derivatives, such as the many-worlds hypothesis/axiom. In light of all this evidence that classical logic is not the only logic we can conceive, nor the only logic we can find in or assign to phenomenology/reality/maya, could not your objections then be seen as a reflection of what QM might be telling us: that the LEM, and thus ~∃(T&~T), is rational, but that we also have to accept ∃(T&~T) as rational as well?
    23:00
    You mentioned flipping a coin. If we witness a coin as a truly random binary, on-off, zero-one event, then one can have faith (100% certainty) that the coin will have a 50/50 chance of heads or tails; there is no faith in heads involved in the application to Agrippa's Third, nor faith in tails, only faith in the 50/50 randomness... so there is no faith in using that as an option to break the symmetry of equivalence.
    This option implies a random "soup" that forms a prior base. In this soup, all beliefs are 50/50 random until a consistent framework is formed (pure symmetry to less symmetry; symmetry breaking from a blank whiteboard to a whiteboard with words on it); the "infinite monkeys on typewriters" argument, but with an aleph-null or higher number of monkeys... the blank-slate but very pliable mind of a baby and child. Eventually, the monkeys form several seemingly strong frameworks, and for utility, evolution, curiosity, whatever you wish to call the driving force, certain frameworks were pursued and others weren't... and certain frameworks worked and others failed.
    And so the monkeys keep typing (more evidence, more art, more randomness, more science, more tech, more interdisciplinary studies, less Tower of Babel) and we keep choosing one consistent framework over another.
    _TBH, I see no truth in this pursuit, only the strength of the framework._
    TL;DR: I think true randomness solves the problem of the priors, since there is no faith involved in making a random decision. I might even go so far as to say that _random-based determination is the opposite of faith-based determination_, not apathy, indifference, or agnosticism.

  • @bernardrobertson1164
    @bernardrobertson1164 2 years ago

    There are two problems with this presentation. The first is that it assumes the prior probability can be correct or not correct, but there are no "correct" probabilities, only probabilities assessed on the basis of whatever evidence one has, or by applying an indifference principle, or whatever. The second is that this criticism simply amounts to saying that if you make rubbish assessments, Bayes' Theorem won't put them right.

  • @MeltedCheesefondueGruyere
    @MeltedCheesefondueGruyere 8 years ago +1

    Not addressing the core logical and philosophical claims, but here is an argument I made showing that, in practice, even terrible priors often don't cause as many problems as you'd think: lesswrong.com/lw/nki/jfk_was_not_assassinated_prior_probability_zero/

  • @gdn5001
    @gdn5001 4 years ago

    Well, since it doesn't seem psychologically or pragmatically possible not to have some beliefs, I might as well reason rationally from those beliefs that I hold. The whole system may be scuffed, but that seems beside the point. I don't REALLY care about truth, just predicting and explaining my experience. ;)

    • @gdn5001
      @gdn5001 4 years ago

      Instrumentalism time!

  • @MeltedCheesefondueGruyere
    @MeltedCheesefondueGruyere 8 years ago

    The argument on the problem of induction seems misleading. Assume for the moment that there are n swans in the world. I have n+1 theories, one for each m from 0 to n, saying that exactly m swans are white (and that n-m are non-white). Then I inspect one of them. As long as I put a non-zero probability on all the theories, then seeing a white swan increases the probability of all swans being white. The more swans I inspect, the more the probability increases.
    Now, there are still anti-inductive priors (e.g. P(m swans are white) being proportional to 0.5^(m!)), which mean that the more white swans you see, the more likely you think it is that the next swan will be non-white. However, even this is still inductive in the sense that, the more white swans you see, the more likely you think it is that all swans are white.
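
    The comment's setup is easy to compute directly; a sketch with n = 10 swans and a uniform prior over the n+1 hypotheses (both choices are mine):

    ```python
    # Hypotheses: "exactly m of the n swans are white", m = 0..n, uniform prior.
    # Update on drawing one swan at random and finding it white.
    n = 10
    prior = [1 / (n + 1)] * (n + 1)
    likelihood = [m / n for m in range(n + 1)]  # P(white draw | m white swans)
    p_e = sum(p * l for p, l in zip(prior, likelihood))
    posterior_all_white = prior[n] * likelihood[n] / p_e
    print(posterior_all_white)  # 2/11 ~ 0.18, up from the prior 1/11 ~ 0.09
    ```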

  • @ThoseMadFoxes4330
    @ThoseMadFoxes4330 1 year ago

    I'm actually having trouble seeing why you'd connect the evidence to those hypotheses; they seem somewhat removed from being evidence of those things, i.e. people around you believing in aliens isn't evidence of aliens, just evidence of something convincing people of aliens; the ground shaking isn't evidence of a dragon, just of something large -like- a dragon, or even a giant. These could be evidence, but I feel the term is being used extremely loosely. Though! I will say, if you find me someone who would use those as evidence, I'll show you someone whose posterior probability would increase 😉 Personally I'd attack their hypothesis (probably why you talked about logic).
    If I'm wrong though, I would like to know why, as you seem better at this than I am, and to be taught would be appreciated!

  • @Thereisnosky
    @Thereisnosky 4 years ago +1

    17:11-17:33 - I think you beat Eminem's "Rap God" in terms of how many words per second ;)

  • @AMBhanBam
    @AMBhanBam 9 years ago

    You're a beauty. 30 minute video :O

  • @numbo655
    @numbo655 5 years ago

    You can avoid the problem of the priors creating a logical fallacy as long as you do not create false relations between hypotheses and evidence, no? Like the false relation that "everyone around you believes there are aliens" is evidence for "there are aliens living among us".

    • @numbo655
      @numbo655 5 years ago

      I mean, we know it is a logical fallacy to posit a relation between these two, H and E, so couldn't one criterion be that we can't assign prior probabilities that result in logical fallacies?

    • @numbo655
      @numbo655 5 years ago

      This criterion would result in no logical fallacies. The rest of the unequal prior probability distributions will converge in the limit as scientists are exposed to more and more evidence, so that even with unequal prior probabilities, their probabilities will become more and more alike as they are exposed to the same evidence.

  • @numbo655
    @numbo655 5 years ago +1

    In all your examples, why isn't P(E&H) = P(E) * P(H)?

    • @numbo655
      @numbo655 5 years ago

      As long as this is the case, your alien example is fine.

    • @entivreality
      @entivreality 4 years ago

      That only holds when the two events are independent (which is usually not the case)
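
      A quick numerical check, reusing the numbers from the black-swan example earlier in the thread:

      ```python
      # If H and E were independent, P(E&H) = P(E)P(H), and the update is inert:
      p_h, p_e = 0.6, 0.4
      print((p_e * p_h) / p_e)  # 0.6: posterior equals prior, E confirms nothing
      # The example instead stipulated a dependent joint prior:
      print(0.39 / p_e)         # 0.975: the dependence is what drives the update
      ```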

  • @ElectricQualia
    @ElectricQualia 9 years ago

    Best prior solution: en.wikipedia.org/wiki/Algorithmic_probability

    • @CarneadesOfCyrene
      @CarneadesOfCyrene  9 years ago

      ElectricQualia I will have to look into that more, but right now it seems to rely on Occam's razor and the assumption that simplicity implies truth. As humans we like simplicity, but our best scientific theories seem to be very complicated. And Occam's razor is completely unjustified (ruclips.net/video/6AvYCTRx72Q/видео.html and ruclips.net/video/NoXxPojdyU4/видео.html). And if simplicity assigns the degree of truth, so that "statements that are simple are more likely to be true", the solution is clearly circular. As noted, I would want to look into it more; thanks for the link. Is there a good article that you have that explains it in full?

    • @ElectricQualia
      @ElectricQualia 9 years ago +2

      Carneades.org Well, it's used by the famous probability theorist Kolmogorov to assign priors. It does rely on Occam's Razor, but also on algorithmic information complexity. Ray Solomonoff later used it in his "inductive inference" scheme, which combines the former with the principle of multiple explanations and Bayesian inference for updating models. The only assumption that it makes is that the environment follows an unknown but computable probability distribution.
      www.scholarpedia.org/article/Algorithmic_probability
      Occam's Razor is not "simplicity implies truth", but rather: when you have 2 models/theories of equal predictive and explanatory power, the simpler one is more likely to be correct. The key word here is "likely"; it doesn't assume certainty. This is a famous meta-scientific principle that has intuitive roots in probability theory and everyday common sense. It is known that P(A) ≥ P(A&B).

  • @dr.shousa
    @dr.shousa 4 years ago +1

    Cromwell's rule and the Bernstein-von Mises theorem render this objection moot.
    All of your objections seem to come from Bayesian epistemology in philosophy, but these problems have been addressed and dealt with (to different degrees, of course) within science (namely statistics). It actually highlights how lazy philosophers are about updating knowledge from other fields.

  • @ThoseMadFoxes4330
    @ThoseMadFoxes4330 1 year ago

    Ah, I think I understand now haha