Confirmation and Disconfirmation (Bayesian Epistemology)

  • Published: 23 Jan 2015
  • An explanation of how a given theory is confirmed or disconfirmed in Bayesian Epistemology (a minimal sketch of the definition follows below).
    Information for this video was gathered from The Stanford Encyclopedia of Philosophy, The Internet Encyclopedia of Philosophy, The Cambridge Dictionary of Philosophy, The Oxford Dictionary of Philosophy and more!
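
A minimal sketch of the Bayesian definition the video covers, assuming the standard account (editor's illustration; the function names are not from the video): evidence E confirms hypothesis H when the posterior P(H|E) exceeds the prior P(H), and disconfirms H when the posterior falls below the prior.

```python
# Bayesian confirmation: E confirms H iff P(H|E) > P(H);
# E disconfirms H iff P(H|E) < P(H).

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H|E) by Bayes' theorem over the binary partition {H, ~H}."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

def confirms(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> bool:
    return posterior(prior, p_e_given_h, p_e_given_not_h) > prior

# E is likelier under H (0.8) than under ~H (0.3), so E confirms H:
print(confirms(0.5, 0.8, 0.3))  # True
```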

Comments • 18

  • @PotterSuppositionalist · 9 years ago +2

    A quantitative system to discuss degrees of belief, confirmation and disconfirmation could be very useful.

  • @guillatra · 9 years ago +1

    What do you think about the numbers? Why should someone end up with 0.1, for example, and not 0.13?

    • @CarneadesOfCyrene · 9 years ago +1

      guillatra I'm going to touch on some worries about this in the Problem of the Priors at the end of the objections. The Dutch book arguments say that your degree of belief is what you would bet: if you would bet 13 cents on a 1 dollar bet then you have .13, but if you would only bet 10 cents, then you have .1 (see the sketch below). The problem, to me, is that it does not seem that anyone can actually come up with their original probabilities, and people are generally completely unaware of the interaction of evidence with their degree of belief in a particular hypothesis. There'll be more on this and other problems in the objections series.
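
A minimal sketch of the betting interpretation described in the reply above (editor's illustration; the function name and numbers are assumptions for demonstration):

```python
# Dutch book reading of degrees of belief: your credence in H is the largest
# stake you would pay for a bet that pays 1 dollar if H turns out true.

def credence_from_bet(max_stake_cents: float, payout_cents: float = 100) -> float:
    """Degree of belief implied by the maximum stake you would accept."""
    return max_stake_cents / payout_cents

print(credence_from_bet(13))  # 0.13 -- would bet 13 cents on a 1 dollar payout
print(credence_from_bet(10))  # 0.1  -- would only bet 10 cents
```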

  • @pawelwysockicoreandquirks · 8 years ago

    Great videos!

  • @HamidSain · 5 years ago

    The evidence offered here for the absence of an afterlife seems a bit "unscientific".

    • @CarneadesOfCyrene · 5 years ago

      The problem of the priors (the initial probabilities that you assign to particular propositions before evidence) is a real problem for Bayesian Epistemology. You may think that the prior on "there is no afterlife" is too high, but the issue is that we can't critique priors well within Bayesian epistemology. Check out my series on objections to this for more. ruclips.net/video/rWb7up_MoZc/видео.html

  • @zsoltnagy5654 · 2 years ago

    Really? This is how "confirmation" and "disconfirmation" by an evidence E are defined?
    But "confirmation" and "disconfirmation" are supposed to be closer to giving a "proof" or "disproof" of a hypothesis rather than merely being "supportive" or "unsupportive".
    Let me explain this with an example. Suppose the following:
    hypothesis H: A person P* is ill with a rare disease D, which is possessed by approximately 0.1% of the population.
    evidence E: That person P* undergoes a test T, which correctly identifies a person as ill or not ill with that specific disease D with approximately 99% accuracy, and *which is here positive.* (The test T is positive for person P* if that person is ill with disease D, and negative if that person is not ill with disease D, 99% of the time.)
    So we've got the following probabilities:
    P(H)i = P(H) = 0.001 = 0.1%,
    P(E|H) = 0.99 = 99%,
    P(H)f = P(H|E) = P(E|H)·P(H)/(P(E|H)·P(H)+P(E|~H)·P(~H))
    = P(E|H)·P(H)/(P(E|H)·P(H)+(1-P(~E|~H))·(1-P(H)))
    = 0.99·0.001/(0.99·0.001+(1-0.99)·(1-0.001))
    ≈ 0.0902 = 9.02%
    So all in all we've got the following (reproduced in the sketch after this comment):
    *50% > [P(H)f ≈ 9.02%] > [P(H)i = 0.1%]*
    Does this, or "should" this, really mean that finding or knowing evidence E "confirms" the hypothesis H?!
    Sure, finding or knowing evidence E boosts the prior probability in favor of hypothesis H rather than hypothesis ~H: finding or knowing evidence E supports hypothesis H actually being the case. But I wouldn't consider finding or knowing evidence E to confirm or prove hypothesis H to be actually the case, since the posterior or final probability P(H)f isn't even greater than 50%. So even given evidence E, hypothesis ~H is still more likely than hypothesis H to be the case.
    So how exactly is finding or knowing evidence E supposed to "confirm" or "prove" hypothesis H to be actually the case here, if the probability of hypothesis H is still not very high after finding the evidence E?!
    The terms "confirmation" and "disconfirmation" are just wrong here. They do not belong here.
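
The commenter's arithmetic checks out; here is a minimal sketch reproducing it (editor's illustration, variable names assumed):

```python
# Rare disease D: base rate 0.1%; test T: 99% sensitivity and 99% specificity.
p_h = 0.001             # prior P(H): person P* has disease D
p_e_given_h = 0.99      # P(E|H): positive test given disease
p_e_given_not_h = 0.01  # P(E|~H) = 1 - P(~E|~H): false positive rate

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability of E
p_h_given_e = p_e_given_h * p_h / p_e                  # Bayes' theorem

print(round(p_h_given_e, 4))  # 0.0902, i.e. ~9.02%
# Posterior (9.02%) > prior (0.1%), so E "confirms" H in the Bayesian sense,
# even though P(H|E) remains well below 50% -- the commenter's point.
```

On the standard Bayesian usage, "confirms" means only "raises the probability of", so the example does count as confirmation despite the low posterior; the commenter's objection is to that choice of terminology.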

  • @ShouVertica · 9 years ago +1

    2:10 E is a very stupid example. How about... evidence of global climate change? I mean, if you are trying to increase belief, wouldn't *not* using an appeal to authority/popularity on a scientific matter be the ideal scenario? I understand it's just an example, but being realistic is important when talking about a scientific scenario. It just seems like an easy fix that would almost instantly come to mind when making the video.
    6:25 I think a better example would suffice here as well. When you are talking about "it could move it up a bit" and the afterlife, which is almost always believed to be final, it's a bit removed from a realistic scenario (one that you would get in the pre-death experience).

    • @Gnomefro · 9 years ago +1

      _"I mean if you are increasing belief wouldn't not using an appeal to authority/popularity on a scientific matter be the ideal scenario?"_
      It would, but I suppose you could look at the example as a way of illustrating the subjective nature of Bayesian inference. If you both recognize people who work in a specific field as having expert knowledge in that field and you also happen to be incompetent in the same field, then being informed that there is a consensus in that field should cause you to regard whatever propositions are affected by that consensus as being more probable.
      Appeal to authority isn't a fallacy if you have good reason to regard the person as an authority - say for example because a physicist can reliably build certain machines that demonstrates his claims to you on a superficial level, even though you might not understand how it works in detail. (Transferred to climate science you might regard that field as having at least some level of expertise simply due to the success of physics in general)
      On the flip side, if you dismiss a field of study as useless, then a consensus should mean nothing to you. For me, this might include a consensus among muslim scholars that Islam is true. I simply don't care what their opinion is, because I see no reason to accept that they are experts on the supernatural.

    • @ShouVertica · 9 years ago

      Gnomefro I was just saying that I think it would be a clearer example to have X evidence or a simple reason instead of "survey says" examples.
      Appeal to authority is a fallacy in the example used.

    • @CarneadesOfCyrene · 9 years ago +1

      ShouVertica To be clear, are you claiming that if a person had no personal evidence of global climate change, but most of the scientific community agreed, they should not change their degree of belief? The fallacy deals with deductive arguments, not inductive degrees of belief. More to the point, even if you think that this is still a problem, it's not a problem for my example, it's a problem for Bayesian Epistemology, because the Bayesian allows for inferences that are based on logical fallacies. So long as you don't change your degree of belief based on anything other than Bayes' Theorem, you are fine. It's one of the many problems that I'm going to deal with in my objections. The reason that I put such an example here is that it's impossible for the Bayesian to restrict the prior probabilities that we place on the conjunctions of evidence and hypotheses so as to avoid such fallacies.

    • @ShouVertica · 9 years ago

      Carneades.org "Personal evidence"
      -Entirely too vague a term for a topic like global warming.
      -Do you mean that he personally went out and got evidence?
      -Do you mean that he looked at the objective data?
      (Just too vague of a term for scientific topics, explain what you mean).
      ==========================
      "it's not a problem for my example, it's a problem for Bayesian Epistemology,"
      Idk, if I spot the fallacy wouldn't it be a null % change?
      It's kind of like saying "well that car is blue, so you move more towards global warming to be true, +.05 points."
      The system is meant to show objective points and non-fallacious arguments to reach a "more truthfull" state, if you use a fallacy to describe the basics, you'll just be strawmanning BE all the way down.

    • @CarneadesOfCyrene · 9 years ago

      ShouVertica "The system is meant to show objective points and non-fallacious arguments to reach a "more truthfull" state" Incorrect. Just as deductively valid arguments can lead to false conclusions, and can be supported by inductively invalid or fallacious conclusions, inductively valid arguments can also be supported by deductively fallacious arguments. The problem is known as the problem of the priors. Until Bayesian Epistemology can come up with a concrete reason to take certain prior probabilities over others, there's nothing to prevent them from allowing deductivley invalid prior degrees of belief in. As the SEP notes "If the constraints on rational inference are so weak as to permit any or almost any probabilistically coherent prior probabilities, then there would be nothing to make inferences in the sciences any more rational than inferences in astrology or phrenology or in the conspiracy reasoning of a paranoid schizophrenic, because all of them can be reconstructed as inferences from probabilistically coherent prior probabilities."