181. Issues of Utilitarian Ethics | THUNK

  • Published: 26 Oct 2024

Comments • 110

  • @TheGemsbok
    @TheGemsbok 4 years ago +14

    A secret joy of watching this channel has been watching the breadth of your perspective and the nature of your tone on certain philosophical topics subtly shift over the years---watching what you thunk become what you think, as it were.

    • @THUNKShow
      @THUNKShow  4 years ago +8

      It's always nice to look back on things I used to believe & stare in horror at how far I've come since then. :P

  • @sbarandatosbarandato6255
    @sbarandatosbarandato6255 4 years ago +12

    Spock: "the needs of the many outweigh the needs of the few".
    THUNK: *troubled philosopher noises*

  • @JM-us3fr
    @JM-us3fr 2 years ago +4

    I doubt this comment section is still active, but as a deontologist turned consequentialist, I feel like it’s worth giving my opinions about these criticisms.
    For the Utility Monster, you mentioned that we might have to just accept the bizarre conclusion that we should direct all our actions toward pleasing the Monster. That’s certainly one solution, but keep in mind this would be an exceptionally unusual and unlikely scenario. It’s also entirely possible that an individual’s contribution to overall societal welfare has an upper limit, which aligns with our egalitarian intuitions.
    For the Experience Machine, personally I would only want to stay inside it permanently so long as I could still interact with and influence the outside world. I don’t think it is “real-ness” that makes the outside world better, but the *vulnerability* that makes the virtual world a bit scary. Also, I think we can all agree that an experience is more pleasurable when you believe you actually accomplished something or made a difference, rather than just simulated it. Whether that allows for deceit is a different story.
    For the organ donations scenario, the common utilitarian response is that a doctor willing to take such actions would not be trusted in the medical field. Doctors understand they have to follow a strict ethical code, not because they are all deontologists, but because they recognize that instilling that trust in medical practitioners ensures patient cooperation and can alleviate anxiety. And if they kept organ harvesting under the radar, I still wouldn’t accept that because I wouldn’t trust their ability to measure precisely when my life was worth ending. That doesn’t mean there couldn’t be a proper time, but I don’t think they could know that (I’m not even sure I could know it).
    I think the Repugnant Conclusion is the one I’m most passionate about. The first few steps of the reasoning seem fine, but I think the extrapolation to a miserable society, where each individual barely has the will to live, is incorrect. The problem is how we create society B, and exactly how much well-being it has. Of course we care about equality, but not at the cost of individual freedom (see John Rawls). This argument has the exact same problem the Utility Monster had, which is that we don’t know how best we ought to aggregate well-being into total welfare. If society B only has slightly less well-being than society A, then this could converge to something desirable, rather than a repugnant society Z.
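
    One way to make that convergence point concrete is a toy model (entirely an editorial sketch, with invented numbers): each step of the argument doubles the population, and if per-capita well-being falls by more than half at each step, total welfare shrinks rather than grows, so the chain of "better" societies never reaches a repugnant Z.

```python
# Toy model of the stepwise Repugnant Conclusion argument.
# Each step doubles the population; `r` is the fraction of per-capita
# well-being retained at each step. All numbers are invented.

def society(step, pop0=1000, w0=10.0, r=0.45):
    """Return (population, per-capita well-being, total welfare) at `step`."""
    pop = pop0 * 2 ** step
    per_capita = w0 * r ** step
    return pop, per_capita, pop * per_capita

# With r < 0.5, per-capita well-being more than halves per doubling,
# so total welfare is a shrinking geometric sequence (factor 2r = 0.9):
totals = [society(step)[2] for step in range(5)]
print(totals)  # totals shrink: 10000, ~9000, ~8100, ...

# The "B is better than A" premise fails at every step, so the argument
# never reaches society Z. Whether it does in reality depends, as the
# comment says, on how well-being actually aggregates.
```

Whether `r` really sits below 0.5 is exactly the open question about aggregation the comment raises; the sketch only shows that the extrapolation is not automatic.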

  • @CosmoShidan
    @CosmoShidan 4 years ago +2

    A couple of books that critique utilitarianism are Brave New World by Aldous Huxley, which tears into the lower pleasures of J.S. Mill's greatest happiness principle, and We by Yevgeny Zamyatin which delves into the higher pleasures of the greatest happiness principle. I think both raise the question of why not have some middle ground on how to approach both sides of the hierarchy of pleasures or utility rather than simply favor one over the other.

    • @THUNKShow
      @THUNKShow  4 years ago +1

      Yeah, these days I find myself in more of a pluralist vein - we have competing moral standards that can defeat & confound each other.

    • @CosmoShidan
      @CosmoShidan 4 years ago

      @@THUNKShow Pluralism in ethics and moral philosophy has the advantage of combining ethical theories and practices. One example I can think of is W.D. Ross' The Right and The Good, in which he combines the virtue ethics tradition of pluralism with the flexibility of J.S. Mill's harm principle, ending up with a loose deontic model where one rule, principle, or prima facie duty can simply be overridden by another. Although he doesn't give guidelines for how we can be certain of the difference between right and wrong, that's exactly the point, in the way a scientist may not be certain whether a theory will fall in line with empirical data. Though I digress.

  • @Jesse__H
    @Jesse__H 4 years ago +5

    My first thought is to write SMBC's Felix out of the equation by presuming every person has a maximum happiness level = 1. But I guess that's adding something to the notion of Pure Utilitarianism, and changing the framework of the discussion.
    I dunno ... I haven't given much thought to Utilitarianism before this video, but after the thought experiments you showed us I'm still attracted to the basic idea. It doesn't seem like it'd be too hard to build counterarguments to each, which I'm sure someone has done at some point.

    • @THUNKShow
      @THUNKShow  4 years ago +5

      There are certainly systems of utilitarian ethics with many added epicycles to combat the numerous problems - many philosophers have tried to rescue it, because it is a compelling theory. But at some point you've added so many moving parts to make the theory work a certain way that you lose that charming simplicity.

    • @Jesse__H
      @Jesse__H 4 years ago +2

      @@THUNKShow Are you a gamer, Mr Thunk? I'm playing a game called Bioshock 2 right now. In the first Bioshock game you venture to an underwater city built by a guy who's basically Ayn Rand's ideal man: an Objectivist who moved all the hard-working self-sufficient doers of the world under the sea so they could progress humanity without interruption from the rest of us. Of course it goes horribly wrong and everything falls apart.
      In Bioshock 2 you venture back to the postlapsarian city to find it taken over by a psychiatrist named Sofia Lamb, who is instead a hardcore Utilitarian. It's all pretty surface-level, being an action videogame, but I thought it was a fun coincidence that I started that game two days ago, then you posted this video yesterday 😊👍

    • @jasperschuerr8011
      @jasperschuerr8011 2 years ago

      The main issue I see with utilitarianism is that the attempt to quantify human happiness (i.e. max happiness level = 1) is inherently objectifying and de-humanizing. The question is: should a philosophy try to create an objective answer to something that is inherently grey and subjective, or should it try to understand the grey area? Utilitarianism opts for the former, and that's where people take issue with it.

  • @brandonbrisbane2145
    @brandonbrisbane2145 4 years ago +4

    One thing I would like to highlight in defense of utilitarianism is the "minimization of suffering" clause, which is often overlooked. In cases like the utility monster, there would be an upper limit to how many people could work in service of his happiness before their unhappiness outweighed it. In the case of the secret organ harvesting program, one could argue that if the secret got out it would cause great distress (outweighing the happiness the available organs would create), making it at the very least ethically irresponsible. Depending on what you consider, it can still work out to cause more suffering, if the person who receives the organ goes on to live a less fulfilled life or causes suffering in others.
    As far as ethics defining what is ethical over our intuition, utilitarian ethics has a habit of devolving into the latter. When you can always justify anything with secondary degrees of happiness and suffering, suffering and happiness over time, or the impact a given choice would have on happiness and suffering if it were widely adopted, you are effectively using utilitarianism to justify your ethical intuitions. Part of the reason this can be done is the ill-defined "happiness/suffering" metric, especially when so many arguments (for and against) rely on greater amounts of suffering/happiness from other parties (i.e. the utility monster, why not to publicly harvest organs, etc.)

    • @THUNKShow
      @THUNKShow  4 years ago +2

      The examples you're citing don't forbid situations where the utilitarian calculus would lead to unsatisfactory conclusions, they just posit scenarios where the math might work out to something acceptable. The *possibility* that, with a high enough utility value for the monster or a robust enough organ harvesting conspiracy, awful situations would be called "good" by utilitarian systems is still problematic for most people. I totally agree that the vagueness of "ultimate utility" is a big issue as well!

    • @beamboy14526
      @beamboy14526 3 years ago

      The utility monster problem can easily be resolved by normalizing happiness, where each person's maximum happiness is equal to 100 utility points.
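
      A minimal sketch (an editorial illustration, with invented numbers) of how that normalization plays out: once each person's utility is capped, handing everything to the monster stops being the top-scoring allocation.

```python
# Toy utilitarian calculus with an optional per-person utility cap.
# Gains and allocations are invented numbers for illustration.

def total_utility(allocations, gains, cap=None):
    """Sum each person's utility (units * gain), optionally capped per person."""
    total = 0
    for units, gain in zip(allocations, gains):
        u = units * gain
        if cap is not None:
            u = min(u, cap)  # "normalized happiness": nobody exceeds the cap
        total += u
    return total

# Person 0 is the utility monster: 1000 utility per unit of resource,
# versus 10 per unit for the five ordinary people. 10 units to divide.
gains = [1000, 10, 10, 10, 10, 10]
all_to_monster = [10, 0, 0, 0, 0, 0]
mostly_shared = [1, 2, 2, 2, 2, 1]

# Unbounded utilitarianism feeds the monster (10000 vs 1090)...
assert total_utility(all_to_monster, gains) > total_utility(mostly_shared, gains)

# ...but with a 100-point cap per person, sharing wins (190 vs 100).
assert total_utility(mostly_shared, gains, cap=100) > total_utility(all_to_monster, gains, cap=100)
```

The cap does all the work here; whether a principled, interpersonally comparable cap exists is exactly what the rest of the thread disputes.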

  • @KohuGaly
    @KohuGaly 4 years ago +4

    Regular utilitarianism trips right before the finish line: it fails to identify the proper metric. Caring about total well-being is an emergent instrumental goal. It emerges from two things: 1. Humans usually suffer when they observe the suffering of others. 2. Actions that affect the well-being of multiple agents form an iterated non-zero-sum game. These two things are a satisfactory explanation for why we care about total well-being most of the time, and they both derive it from the goals of the arbitrary individual.

    • @THUNKShow
      @THUNKShow  4 years ago

      Sure - that's a potential descriptive explanation for where utilitarian intuitions come from, but it's agnostic about normative claims. Knowing why we feel we ought to do X doesn't tell us what we ought to do, neh? ;)

    • @KohuGaly
      @KohuGaly 4 years ago

      @@THUNKShow well not really. When you treat humans as intelligent agents (as the term is used in AI research), then normative claims are a subset of descriptive claims.
      An intelligent agent has agency (i.e. the ability to affect its environment), an agenda (i.e. a terminal goal or a utility function), and intelligence (some degree of ability to use its agency in favor of the agenda).
      An agent follows its agenda by definition (although it may fail to fulfill it, due to ignorance, incompetence, or bad luck).
      Normative claims merely describe optimal actions under some implied goal. The reason why the implication of a goal isn't always obvious is because we are typically dealing with humans and humans happen to have very similar terminal goals.
      It becomes much more obvious when you deal with arbitrary AI and try to make it "safe".

  • @nerdineverythingnerdinnoth4984
    @nerdineverythingnerdinnoth4984 2 years ago

    Ok, so here is me trying to give arguments against these objections (not all will work):
    1. There can't be such a thing as Felix, and even if there were, it would be a paradox.
    There can only be a time-limited utility monster, since human nature quickly adapts to things. Sure, certain people consistently enjoy certain things more than you can, but those are particular things that they'll probably be happy to share, and that you'll be happy to leave to them because you like seeing other people happy, which brings us to our paradox: if Felix enjoys everything more than you do, he'll enjoy it more if you are happy than you do if he is. So should he leave stuff to you now, or should you leave it to him?
    2. The Pleasure Machine.
    Something can't come out of nothing. Say there's a machine that knows how to produce maximum happiness (with all the pauses, other emotions, and balances perfected so it reaches its highest limit). Someone will still have to gather the resources and control the machine for the humans that live in it, so it is better, over time, to create this experience in the real world for everyone. If someone now says that the machine is automated, gets the resources by itself, never needs to be fixed, and creates more and more humans to be perfectly happy forever, then the material the machines are made of could still be put to better use in the real world, and we are no longer describing just a pleasure machine, but rather one of our dreams: the separation of emotion from reason. We were that already as animals. Now we have evolved past it by waking up, i.e. forming a consciousness. What we are now, and whether it's better or worse, anthropology can discuss. (Sheesh! That was a deep one to think about. Please tell me if you think differently!)
    3. Organ harvesting
    Autonomy is one of the greatest happiness-makers out there, and principles are a thing.
    If I got an unwillingly given organ that might be taken away again without my consent, I don't think I'd be that happy…
    4. Math
    I agree, even though there are many different ways to calculate good and evil with many different kinds of utilitarianism.
    This video was a blast to think about. Thanks!
    And thanks to those who read.

  • @jaypea314
    @jaypea314 4 years ago +6

    Haven't put a lot of thought into this, but it seems like game theory can be brought into ethics. Similar to how the prisoner's dilemma is bad in a single round, but repeated rounds have a fairer solution. For example, it would be nice to get an organ on demand, but would you want to live in a system where you could be harvested at any time?
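
    A minimal sketch of that point, using the standard prisoner's dilemma payoffs (the strategies and round count are editorial choices): one round rewards defection, but over repeated rounds conditional cooperators sustain the fairer outcome.

```python
# Iterated prisoner's dilemma with the standard payoff matrix:
# (my move, your move) -> (my points, your points).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I'm exploited
    ("D", "C"): (5, 0),  # I exploit you
    ("D", "D"): (1, 1),  # mutual defection
}

def play(strat_a, strat_b, rounds=100):
    """Play repeated rounds; each strategy sees only the opponent's last move."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a, move_b = strat_a(last_b), strat_b(last_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

def always_defect(_):
    return "D"

def tit_for_tat(opponent_last):
    # Cooperate first, then mirror the opponent's previous move.
    return "C" if opponent_last in (None, "C") else "D"

# In any single round, defection strictly dominates. But over 100 rounds,
# two tit-for-tat players settle into cooperation and beat mutual defection:
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
```

As THUNK's reply notes, this only evaluates ethics *given* the assumption that the game-theoretic outcome is what matters.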

    • @THUNKShow
      @THUNKShow  4 years ago +1

      Game theory is certainly a lens through which we might evaluate ethics, but doing so makes an implicit assumption about what's important/relevant in our valuation of different systems (i.e. the optimal game-theoretic outcome). There's no response for someone who asserts that the point of ethics is to satisfy our intrinsic human duties, for example.

  • @zorro_zorro
    @zorro_zorro 3 years ago +1

    Pleasure/happiness is not the only metric that can fit into the utilitarian framework, though. (E.g. I see myself as a remaining-lifespan-in-good-health utilitarian.)
    It's easier to see why something feels wrong about those thought experiments if you remember that hedonist utilitarianism is just one of many utilitarianisms - and some utility functions may even include more than one metric!

  • @landspide
    @landspide 4 years ago +4

    The dangers of absolutism.

  • @WizardofGargalondese
    @WizardofGargalondese 1 year ago

    The problem with this is that basing morality purely off of moral intuition will be worse than any system.
    The simple reality is, no moral system will always be correct, but we should still choose one to consistently follow even when it's wrong.
    Many of these thought experiments are simply impossible as well, and are therefore easy to just bite the bullet on. For example, a 'utility' monster is impossible: no being can have infinite pleasure.
    To solve scenarios like the population dilemma or organ harvesting I think we need to put less moral value on life itself and more on conscious experiences and their value.

  • @Reivaxbeastly
    @Reivaxbeastly 4 years ago +1

    Great video!
    Your closing remarks push us into the realm of meta-ethics!
    When G.E. Moore targeted all forms of moral naturalism with his Open Question Argument it became clear that we could not identify ‘goodness’ with some natural property (in virtue of the meanings of the terms) - in other words, ‘goodness = pleasure’ could not be an analytic truth.
    However, we can come to know this identity a posteriori instead, in the same kind of way that we come to know other identities in the empirical sciences. Before 1811 no one knew that water was H2O, even though people were well acquainted with the clear stuff that falls from the skies and fills our oceans and lakes. Likewise, we are well acquainted with goodness, but only recently has it become plausible that goodness might be coextensive with a certain class of complex conscious mental states with positive hedonic tone. This empirical identity provides a basis for a hedonistic theory of value which can be coupled with utilitarianism as a theory of the right.
    On this view, we do not need to rely on some mysterious 'moral intuition' which allows us to grasp moral truths. We come to know about moral facts in the same way that we come to know any fact in the sciences; through empirical observation and as the end result of abductive reasoning.

    • @THUNKShow
      @THUNKShow  4 years ago +1

      Interesting idea! How does this model account for things like moral disagreement?

    • @Reivaxbeastly
      @Reivaxbeastly 4 years ago +1

      @@THUNKShow This is a great question and it’s not a straightforward one to address!
      Typically, moral naturalists will argue that the meanings of the terms ‘right’ and ‘wrong’ are causally regulated by the environment (see: causal theory of reference). But then you get problematic cases where two hypothetical communities disagree about what ‘right’ and ‘wrong’ mean, simply because the causal-historical story of how the terms get their meanings fixed is different for both communities. It then seems like moral disagreements between different linguistic communities lead to the members talking past each other, and you get some sort of moral relativism. A lot of philosophers have attempted to resolve this problem, but it’s not clear that their solutions are that effective, as you can usually modify the thought experiment to reintroduce the errors. I am of the view that the causal theory of reference, while adequate for natural kind terms, fails to capture the nature of our moral concepts.
      Our moral concepts are intimately linked with our moral emotions, and just as colour-blind individuals lack a ‘red’ phenomenal experience when presented with red objects, psychopaths lack ‘horror’ experiences when presented with horrifying events. A colour-blind person may see ‘green’ instead, a psychopath may experience ‘approval’ instead. Colour perception and emotional perception aren’t completely analogous for obvious reasons, but the analogy is instructive. While ‘redness’ isn’t an intrinsic property of red objects, ‘horror’ is an intrinsic property of horrific events. When we are faced with a report of a tragic and horrific event, most of us tend to have a negative conscious experience that is ‘isohedonic’ with those of the victims and their families. In other words, it accurately represents the nature of the event itself by representing it empathetically. If a psychopath represents the suffering of his or her victim with an emotion like ‘hope,’ ‘pride,’ or ‘approval’ then we could point out how these emotions fail to correspond to the conscious mental states of the victim, and so they fail to represent on those grounds.
      Ultimately, if two people or cultures disagree about moral cases because one thinks that x is right, and the other thinks y is right, instead of asking what natural property causally regulates rightness for each culture, we should ask whether or not their moral feelings are isohedonic with the events they attempt to represent.
      If you’re interested, this response to Moral Twin Earth cases is developed by the philosopher Neil Sinhababu in his book Humean Nature: How Desire Explains Action, Thought, and Feeling, as well as in his papers.

    • @FlaminPigz7
      @FlaminPigz7 4 years ago +1

      A few years ago I developed a similar sort of argument that convinced me to be a utilitarian. Red things are experienced as red, bad things are experienced as bad. Suffering and happiness are mere synonyms for bad and good experiences respectively. Their intrinsic worth is revealed in having them. This requires consciousness realism, which is generally accepted, but not universally. While I don’t really have a problem with the argument now, I have become skeptical of the possibility of knowledge about theories of truth, and so while still taking it seriously I have become more morally pluralistic.

  • @95GuitarMan13
    @95GuitarMan13 4 years ago +2

    I don't think Felixes exist in reality.
    Have you ever encountered someone and thought: wow, that person has the capacity to reach levels of happiness I will never achieve...?
    The closest thing I can think of is children whose unbridled joie de vivre hasn't been tainted by, you know, vivre-ing. And in that case we gladly put in effort to help them achieve that higher ceiling of happiness and it doesn't feel counterproductive at all.
    In any case, the idea of weighing one person's happiness against another and declaring one heavier implies a far more precise science of happiness than we currently have (and maybe ever will) and no true Scotsman - I mean - Utilitarian would "mandate" definitive action based on such shaky premises. Peter Singer has some very reasonable guidelines for how much responsibility is demanded of people adopting the philosophy.
    Great video as always! This topic is a personal favourite

    • @THUNKShow
      @THUNKShow  4 years ago

      I don't know if I buy the evidential angle (e.g. so long as we don't ever see a utility monster in the wild, there's no reason to doubt the veracity of utilitarianism). I do think that Singer's arguments are compelling & reasonable, but not enough to buy in, personally. Gotta respect his commitment to the bit, tho.

    • @95GuitarMan13
      @95GuitarMan13 4 years ago +1

      @@THUNKShow That's where I find the child example interesting, because the closest thing I can think of to a utility monster doesn't have the repulsive implications that the theoretical extreme carries. It actually does feel right to help children achieve happiness; if a Felix is actually possible (which I doubt), it might actually feel right to nurture his happiness as well, in which case the thought experiment just highlights a limitation of our ability to intuitively simulate that kind of scenario.

  • @dtaylor091489
    @dtaylor091489 4 years ago +3

    choice is one of our greatest sources of happiness. how do i know this? almost every pleasurable activity can be made undesirable simply by being forced to do it against your will. forcing me to feed a utility monster? check. forcing me to give up my organs? check. forcing me to “live” in an experience machine? check. even the repugnant conclusion forces people, particularly women, to have more children than they would care to.
    most of history’s greatest atrocities were in part caused by “smart” people forcing others to do what’s in the best interest of society. they were always wrong because they neglected the simple fact that choice is the source of some of our greatest pleasures. persuasion has to be at the center of a true utilitarian ethic.

    • @dtaylor091489
      @dtaylor091489 4 years ago

      btw i love your channel and i even liked the video, it's well put together and lays out the best possible case against utilitarianism. it forced me to think more carefully about my position.

    • @THUNKShow
      @THUNKShow  4 years ago +1

      I think the persuasion angle just sort of pushes the problem back a step - now you have to ask if *convincing* a bunch of people to *want* to leap into the jaws of a utility monster or wire themselves to an experience machine would be a good thing or not.
      Thanks! :D I feel like a lot of the philosophy videos are explicitly about making people think more carefully about their positions - it's reassuring to hear that they do that sometimes! ;)

    • @dtaylor091489
      @dtaylor091489 4 years ago +1

      THUNK not necessarily. it seems to me that persuasion puts certain constraints on the range of moral considerations. after all, the number of things i can be forced to do is greater than the number of things you can persuade me to do.

  • @Zerspell
    @Zerspell 1 year ago +1

    Very good video

  • @thelotuseater6496
    @thelotuseater6496 3 years ago +1

    A lot of the issues present here can be solved by prioritising the prevention of suffering/ill-being, aka negative utilitarianism. Of course that has its own issues, such as the pinprick argument or the benevolent world exploder. You could retaliate then by arguing for “negative utilitarianism plus.” Essentially, suffering is bad and *must* be avoided and prioritised, but well-being is valuable too, and a universe devoid of life would in turn have no value. Sure that’s better than negative value, but it’s not good enough.
    An excellent example of this would be David Pearce’s The Hedonistic Imperative; he’s a negative utilitarian but talks a lot about radically improving well-being for all. Under NU+ utility monsters wouldn’t get extra resources at the expense of others’ well-being, and the repugnant conclusion collapses in variations where ill-being is allowed.
    That being said, I do bite the bullet on the Experience Machine; in fact, I think all of us ought to plug in if it were sustainable. Easy for me to say, since plugging in to an EM is literally my greatest wish, even though I know I’m in the minority.
    Seriously, I don’t understand why so many would refuse such an (imo) utopian idea.

    • @THUNKShow
      @THUNKShow  3 years ago

      The thing that gets me about the experience machine is a real contrast between "thinking something has happened" & "that thing actually happening." e.g. is it better for humankind to actually achieve great things (compose great symphonies, make great scientific discoveries, explore the universe) or to just imagine they have?

    • @uselessgarbagehandler
      @uselessgarbagehandler 3 years ago +2

      @@THUNKShow If it's indistinguishable from reality, then what's the difference? Are we not just the sum of our memories and experiences?

  • @thecupdidit
    @thecupdidit 2 years ago +1

    This was fantastic! Thank you =D

  • @tochoXK3
    @tochoXK3 4 years ago +2

    I find it...interesting how most people would redirect the trolley, but won't agree with "compulsory organ donation". It's essentially the same scenario -- are you willing to kill 1 to save 5 (assuming you also save 5 with your organs)?
    As has been said, you could argue that the fear of being a victim of organ harvesting creates more suffering, compared to the trolley problem, but this issue is "solved" if you create a secret conspiracy (and here we are in the territory of hypothetical thought experiments that wouldn't work in practice, but the trolley problem is also hypothetical, so it's a fair comparison).
    By the way, my "moral intuitions" align with most people in this regard, but I still find it strange (It is perfectly possible to find your own intuitions strange)

    • @Xob_Driesestig
      @Xob_Driesestig 4 years ago +3

      Yes, our intuitions also allowed slavery, so not a perfect track record by any stretch of the imagination. Also, weird question, did you reply to my comment? Because youtube tells me I have one reply by tochoXK3, but when I click on it, it won't load. If you did, would you be so kind as to copy it here? I'd love to respond if you did.

    • @tochoXK3
      @tochoXK3 4 years ago

      @@Xob_Driesestig I deleted it because it was based on a misunderstanding

    • @Xob_Driesestig
      @Xob_Driesestig 4 years ago

      @@tochoXK3 Ah! Thanks for clearing that up

    • @THUNKShow
      @THUNKShow  4 years ago

      Yeah, it certainly highlights some sort of inconsistency in our moral intuitions, which itself is pretty strange. The question is - which intuition do you trust when they conflict?

    • @tochoXK3
      @tochoXK3 4 years ago

      @@THUNKShow I really tend toward utilitarianism.
      In the case of organ harvesting, it really would have to remain secret (else the fear of having your organs harvested, and the societal outrage, would create suffering that far outweighs the good), which is hard in practice; but in theory, it's pretty much the moral equivalent of the trolley problem.
      I like to see this through a "probabilistic veil of ignorance": imagine you're one of 6 people (one involuntary organ donor, 5 who are saved thereby) and don't know which, but the probability that you're saved is 5/6, and the probability that you're...sacrificed for the greater good is 1/6.
      You can't *only* focus on how much suffering it causes for the victim; you also have to consider how much good it does for those who profit from it, and I think the "probabilistic veil of ignorance" is a good way to do so.
      About the utility monster:
      Again, I agree in theory, but I think a utility monster is VERY rare in reality. A true utility monster in the strictest sense definitely doesn't exist; I find it EXTREMELY improbable that any serial killer experiences an amount of joy so great that it outweighs all the suffering (it would have to be some kind of superhuman joy far beyond comprehension), and, to make another example, I also find it very unlikely that bullies experience joy that outweighs the suffering of the bullied (I experienced bullying in both directions. And I'm definitely not proud of having bullied others myself, but alas, you can't change the past).
      To give an example of why I support the idea in theory: imagine (for the sake of argument) someone causes minor mayhem but, by doing so, experiences some kind of great joy, so great that it's barely comprehensible. I think it would be very wrong to deny that joy. And working from there, you could make similar arguments if the joy were slightly less great, as long as you can reasonably argue that it outweighs the suffering.
      About the repugnant conclusion:
      Again, I agree in principle, BUT you have to understand that "a life worth living" means that your life isn't that bad -- it doesn't mean that it's barely good enough that you don't kill yourself. If your life overall sucks, but it's not bad enough that you kill yourself, I'd say it would have been better if you'd never been born.
      That's because never being born isn't morally equivalent to dying -- else, deciding not to have a child would be the moral equivalent of murder.
      P.S. I'm aware that what I wrote is VERY controversial, especially the part about "if you'd change the track of the trolley, you should morally support the organ harvesting conspiracy."
      P.P.S. I wrote that my moral intuitions tend toward supporting changing the track of the trolley, and toward not supporting the organ harvesting conspiracy, but I also realize that intuitions and emotions aren't the most reliable source of reason.
      P.P.P.S. There's something called "naive utilitarianism": not considering the moral priorities of others. For example, if some naive utilitarian government were to openly support radical utilitarian ideas, most people would find them disgusting and horrible, which would create a great amount of suffering; so as long as you can't do things in utter secrecy, you do have to consider the moral values non-utilitarians hold in order to make good utilitarian judgements (I know this sounds slightly absurd/counter-intuitive)
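
      The "probabilistic veil of ignorance" above is just a small expected-value calculation; here is a sketch with invented utility numbers (surviving = 1, dying = 0, and a made-up "fear" penalty):

```python
# Behind the veil: you are one of 6 people in the organ scenario
# and don't know which. Utilities are invented: survive = 1, die = 0.
p_saved = 5 / 6      # you turn out to be one of the five recipients
p_harvested = 1 / 6  # you turn out to be the involuntary donor

# Expected utility with the harvesting policy: the five live, the one dies.
ev_harvest = p_saved * 1 + p_harvested * 0      # 5/6

# Without the policy: the one lives, the five die.
ev_no_harvest = (1 / 6) * 1 + (5 / 6) * 0       # 1/6

print(ev_harvest > ev_no_harvest)  # True: the bare numbers favor harvesting

# The "naive utilitarianism" caveat in the P.P.P.S.: if everyone bears a
# disutility from fearing such a policy, the verdict can flip. The 0.8
# penalty is invented purely to show the flip.
fear_penalty = 0.8
print(ev_harvest - fear_penalty < ev_no_harvest)  # True
```

The entire argument turns on those invented numbers, which is the comment's own point about how hard the utilitarian calculus is to pin down.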

  • @alexixeno4223
    @alexixeno4223 4 years ago +1

    Ironically, I spend way too much time thinking about ethics and good vs. evil, for the silliest reason: DnD, and in particular a line from one of the DnD books: "The world is moving closer to a utopia every day, but whose utopia? For the perfect world of a barbarian tribe is very much different than the perfect world of wizards."
    Now, for this ep, focusing on utilitarian ethics: I have to argue that some of those examples, while great thought experiments for exploring and discussing ethics, seem to take things to an extreme, an almost illogical extreme. For example, the utility monster implies one person's pleasure is worth more than another's, and that one person's pleasure can be strong enough to warrant the suffering of others.

    • @THUNKShow
      @THUNKShow  4 years ago

      It's not so unreasonable an assumption - we often make concessions & compromises to enrich the lives of others. You'd be sad about ruining a nice suit to save a drowning child, but you'd probably do it anyways, no?

  • @kathytaylor3673
    @kathytaylor3673 3 years ago +1

    The train tracks analogy. What if the 5 people are serial killers and the 1 is a very productive person that may save many more lives? Hmmmm?

  • @geoannealyzandralaoestrada5918
    @geoannealyzandralaoestrada5918 3 years ago

    Is utilitarianism contrary to rights? If so would act and rule utilitarianism be contrary to rights as well? If it's not contrary to rights, would you say it respects rights?

  • @Xob_Driesestig
    @Xob_Driesestig 4 years ago +2

    So the literature on those issues and its counterarguments and its counter-counterarguments... is extensive. Wikipedia is a good starting place, but I'm going to take this opportunity to promote my favorite solution: steal ideas from contractualism! More specifically, Meta-Preference Utilitarianism: bobjacobssite.files.wordpress.com/2020/02/meta-preference-utilitarianism.pdf
    EDIT: You can see Lukas Gloor (research director of the foundational research institute) comment on it (as well as some other people) here:
    www.lesswrong.com/posts/2NTSQ5EZ8ambPLxjy/meta-preference-utilitarianism

    • @Xob_Driesestig
      @Xob_Driesestig 4 years ago

      It gets rid of hedons, so Felix can go take a hike. People can still enter the experience machine IF they want to, but if you force someone into the experience machine against their will, you are lowering the global utility.
      The organ conspiracy is admittedly still a bit icky, but I would like to say that in practice you could never keep that conspiracy a secret. Here we enter the wacky-world of heuristics, where laws and rights exist for our collective long-term benefit, even if they don't always lead to the right conclusion (I'm not gonna get into it, but there is extensive literature on the topic of heuristics).
      Finally, the repugnant conclusion can be mitigated by other types of utilitarianism, like average or median instead of total. These have their own problems, but the problems can be balanced with the help of meta-preferences.
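      As a toy illustration of the average-vs-total distinction mentioned here (the numbers and variable names are mine, purely for illustration):

```python
from statistics import mean

# Toy numbers (my own, for illustration): the "repugnant conclusion" arises
# under total utilitarianism, while averaging avoids it.
blissful_few = [100] * 1_000     # 1,000 very happy people
barely_positive = [1] * 200_000  # 200,000 lives barely worth living

# Total utility prefers the huge, barely-happy population...
assert sum(barely_positive) > sum(blissful_few)
# ...while average utility prefers the blissful few.
assert mean(blissful_few) > mean(barely_positive)
```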

    • @THUNKShow
      @THUNKShow  4 years ago +1

      :P You can fix anything with enough epicycles, neh?
      I'm sympathetic to the enterprise of trying to patch naive utilitarianism into something that keeps some sort of intuitive coherence in weird situations, but I have no more patience for those who assert dogmatically that bullet-biting is the only "rational" response to utilitarian issues, or that it's somehow "objectively true."

    • @Xob_Driesestig
      @Xob_Driesestig 4 years ago +3

      TBF our moral intuitions aren't top-tier material either: slavery, anti-semitism, adam sandler movies; all examples of how we can be horrified by things we used to consider normal.
      Objectively true isn't something you can even use in moral philosophy, the best we can hope for is tautologically true e.g: This is a good moral system, because something is good when it fulfills your inner desires and this system does that better than the previous one...
      I'm not gonna lie to you, I have fixed like 40% of all philosophical incongruities I've come across by just adding more meta-levels ;D

  • @rileybonar6324
    @rileybonar6324 2 years ago

    Regarding the utility monster: couldn't this be accounted for by adjusting what is seen as pleasure or happiness through a categorical metric? The argument is to say that a person feels happiness one million times greater than the average person, which leads to the analogies of the formation of an oligarchy (or some authoritarian rule). However, if we looked at happiness as 1s and 0s (either you agree this would satisfy the conditions to make you happy, or no, it would reduce the person's happiness and increase their suffering), then the magnitude of their happiness (or their suffering) doesn't matter. For myself, I care more that a greater number of people have their needs met and are happy (but maybe not the happiest) rather than the majority of people not having their needs met while a small group lives as literal kings… oh wait, I just described the current economic system we live in today. Well, I think you get what I'm getting at. I love hypotheticals, and I think, at least for myself, it adds a caveat that we should categorically aim for people to have their needs met (satisfying the imperative of happiness) even if it requires sacrifice from those who live with excess.

  • @LethalBubbles
    @LethalBubbles 2 years ago

    idk at the trolley problem. There's more to that than the number of bodies, and I don't think you can just categorically put every utilitarian's opinion on it on one option like that.
    But one interesting aspect is that it's derived from hedonism, which isn't as inherently bad as moralizers try to make it sound; it does indeed agree with our sensual instincts. Things like the trolley problem (in spirit I see what you mean; the difference I think they'd argue over is whether the people were equally valuable or not) are what kinda irk me about the philosopher. Especially when ideology gets into the mix, like "oh yeah it'll totally help the people if they think like me," decalibrating their mental libra scales.
    idk if being associated with hedonism is embarrassing for utilitarianism, or if being associated with utilitarianism is embarrassing for hedonism.
    Personally I like to think about morals but not try to police them. I believe really strongly in autonomy and the freedom of information and speech needed to investigate things and make an informed decision. I think policing morals is immoral, which is pretty sticky to work out when people try to make it sound like I'm pro-immorality. I think the ultimate problem with morals is when generation A is very faithful to them, generation B learns them only arbitrarily, plus the enemies of the morals reappropriate them, and finally hypocritical sorts of people claim to like them but end up just using them when convenient for some external reason (enforcing them selectively, basically). I think just about anyone is liable to temptation, so we can't hold the anti-hypocrisy to such a point that it's all-or-nothing; basically, the idea of "ethos" or "reputation" too can be subverted from "character" into a "surveillance permanent record," which is technically correct in showing something you did at some point in time but has no way to know whether you've learned and changed or not. It really all comes down to power, and the fear of something being said being turned around on us in a moment of weakness. That can grow into really oppressive orthodoxies on just about any level, from public mobs to religious witch burnings to state murder. But if we let these things "kill god," so to speak, we end up with a post-modernist solipsistic morality which can't be communicated, so authorities don't respect it and just become authoritarian out of sophistry instead.
    There's a kind of complex balance between feudal rule, capital rule, and hierarchy rule. They all seem to be always around, influencing each other and sometimes one might become king causing the others to change form and criticize it.

  • @tochoXK3
    @tochoXK3 4 years ago +1

    I can actually prove that utilitarianism is right, if we assume that happiness/suffering is measurable (which, I admit, is a debatable assumption).
    Let happiness be positive and suffering be negative on the scale. Let the scale be such that an increase of one happiness unit is equally good, no matter whether you increase from 0 to 1 or from 100 to 101. (Analogously, a decrease by one happiness unit is equally bad, whether you go from 101 to 100, from 1 to 0, or from -1 to -2.)
    To make an example (which can be generalised), let's assume that you have the choice between 1 person suffering severely (happiness -1,000) and 1,001 persons suffering slightly (happiness -1). Let's assume that all persons have, by default, neutral states (happiness 0).
    One person suffering severely can be interpreted as 1,000 decreases by 1 (0 to -1 to -2 to -3 to [...] to -1,000). Since the scale is defined such that (0 to -1) is, by definition, as bad as (-1 to -2) and as bad as (-2 to -3) [...], (0 to -1,000) is as bad as (0 to -1) 1,000 times, i.e. as bad as 1,000 people, who have by default neutral states, experiencing minor suffering (0 to -1). Thus, 1,001 people experiencing slight suffering (0 to -1) is slightly worse than one person experiencing severe suffering (0 to -1,000).
    This principle can be generalised for any situation such that the correct solution will be to just add the happiness (where suffering is negative happiness) in order to get a moral evaluation.
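    The arithmetic in this argument can be sketched directly (an illustration only; the function and variable names are mine, and it assumes the comment's debatable premise that happiness sums on a single additive scale):

```python
# Illustration of the additive-scale argument above. The premise that
# happiness/suffering can be summed on one real-valued scale is assumed,
# not established.
def total_utility(happiness_values):
    # Suffering counts as negative happiness.
    return sum(happiness_values)

one_severe = total_utility([-1000])       # one person suffering severely
many_slight = total_utility([-1] * 1001)  # 1,001 people suffering slightly

print(one_severe, many_slight)  # -1000 -1001
# On this scale, the 1,001 slight sufferings come out slightly worse.
assert many_slight < one_severe
```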

    • @FlaminPigz7
      @FlaminPigz7 4 years ago +1

      While I am sympathetic to utilitarianism, I would like to point out that this is not a proof of it. All that this proves is that utilitarianism works as intended. That it is in theory possible to measure happiness and suffering and to compare them on one scale. It does not prove that we have to use utilitarianism to make moral decisions.

    • @THUNKShow
      @THUNKShow  4 years ago +1

      It's a good attempt to build a scale of some sort for utilons! The sticky wicket is in: "This principle can be generalised for any situation such that the correct solution will be to just add the happiness (where suffering is negative happiness) in order to get a moral evaluation."
      Just because we can measure a situation on the scale & add happiness (or subtract suffering) doesn't in any way imply that this is *the only* or *the best* or *the right* thing to do!

    • @APaleDot
      @APaleDot 3 years ago

      You've "proven" utilitarianism by assuming that it is true. You assume that happiness/suffering is measurable and that it is the only axis along which to judge the morality of an action. What if we don't assume that? What if we assume that suffering is the positive direction of the axis? We can conclude anything by simply assuming.

  • @TPGNATURAL
    @TPGNATURAL 4 years ago

    I see this as overthought. See the book by Eva Wong, Lieh-Tzu, page 138, "Strange customs in strange countries." You know the old but true saying: "keep it simple."

  • @InShadowsLinger
    @InShadowsLinger 2 years ago

    I don't really get the supposed superiority of moral instincts. How do psychopaths or sociopaths fit into it? Aren't our instincts shaped by the society or culture we grow up in? How come some cultures accept honor killings, stoning, or genital mutilation and some don't? It would seem to me that they have different moral instincts.
    The example about the monster seems a pretty weak one, as it clearly decreases the happiness and well-being of others.
    It seems to me that utilitarianism is like democracy: it is not perfect, but the best we've got. I.e., it cannot be relied upon 100% but works 99% of the time, less some slippery slopes and contrived examples.

  • @jorgemachado5317
    @jorgemachado5317 4 years ago +1

    Really good content

  • @imkharn
    @imkharn 4 years ago +2

    With zero ability to predict the future, consequentialism is impossible. If one is all-knowing, the ends justify the means. Stuck in between, each person draws their own line in the sand. It seems all that can be argued solidly is that the line has to be drawn somewhere. Disappointing.

    • @Xob_Driesestig
      @Xob_Driesestig 4 years ago +4

      Why do you think we have zero ability to predict the future? We might not predict it with 100% accuracy, but surely you agree there are shades of certainty here. Otherwise science wouldn't work at all. If I stab you with a fork I can't be 100% sure you will say "ow", but I'm a lot more certain of that than the possibility that you respond by conjuring a pink perpetual motion machine.

    • @6ThreeSided9
      @6ThreeSided9 4 years ago +1

      @Xob_Driesestig Agreed. While I understand and agree with the fact that our inherent uncertainty needs to be calculated into any ethical equation, the idea that we can't predict anything with any amount of confidence is rather silly.

    • @tochoXK3
      @tochoXK3 4 years ago +1

      There are ideas about maximizing the "expected value of utility" when we're uncertain

    • @THUNKShow
      @THUNKShow  4 years ago +2

      The original formulation of utilitarianism uses the maximizing-utility criteria *retroactively* to indicate which actions were good & which were bad - Bentham explicitly said that it wasn't intended to be calculated in advance, merely a guide to actions (i.e. do your best to increase utility - it might not end up being the right thing, but that's the direction you should lean).

    • @Xob_Driesestig
      @Xob_Driesestig 4 years ago +1

      @@THUNKShow That's one of the more trollish elements of consequentialism: "But if we do that supposedly good action that is implied by consequentialism, it will lead to bad outcomes!" Well then that is, tautologically speaking, not the right action, now is it (˵¯͒〰¯͒˵) ˢᵐᵘᵍ ᶠᵃᶜᵉ
      That's all well and good, Mr. Bentham, but we need to live our lives chronologically (except for the time traveler that made this episode: ruclips.net/video/Gs6UcgiDwg0/видео.html )
      Luckily there are ways to consistently improve things over time (medical science being an obvious example) but then you will have to deal with heuristics and other inelegant methods.

  • @G_Rad_Ski
    @G_Rad_Ski 4 years ago +4

    Pragmatic virtue ethics with a dash of deontological dictation from logic, that's for me ; )

    • @THUNKShow
      @THUNKShow  4 years ago

      Deontological-dictation-bae?

  • @creosjediacademyguides7103
    @creosjediacademyguides7103 2 years ago +1

    never thought I could change my entire world view by watching a single video...I believed in utilitarianism for years without exploring the topic deeply, now I see that it doesn't work a lot of the time as a moral system, great video, thank you...

  • @rhythmandacoustics
    @rhythmandacoustics 4 years ago +1

    All of the value judgement/ethics/morality issues can be traced to biology. Utilitarianism is connected to valuing life, which is a part of nature. Nature demands that life forms struggle and survive. Egoism and altruism can be found in animals. Utilitarianism is not necessarily correct or incorrect, but we can trace it to biology, in which cells want to multiply. I do not think systems of morality and moral instincts are necessarily different from one another; the former is merely a formalized version of the latter, one descriptive and the other prescriptive. Utilitarianism can also be seen in times of extreme hardship such as famines, wars, pandemics, and so on, and not just by studying cells or micro-organisms, in which the goal is to save as many people as possible: not necessarily to produce the greatest amount of "good" or maximize "pleasure", but to minimize loss of life and minimize pain and suffering.

    • @bernardobertamini856
      @bernardobertamini856 2 years ago

      I think that the "utilitarian intuition" (which basically says: "It's better to do what brings conscious living beings to their ultimate good, which is psychological wellbeing") is not something that we share with animals.
      In my opinion humans are animals such as cats, dogs, ... but with extremely complex social "structures" (I think that sciences, poetry, art, ... all the things that make humans unique, are nothing more than ways to fulfill our instinct of being socially accepted... in other words, we are animals that follow "sophisticated" instincts). That being said, I think that utilitarianism can be traced to human psychology.

    • @rhythmandacoustics
      @rhythmandacoustics 2 years ago

      @@bernardobertamini856 psychology is based on biology

  • @molly___roth___
    @molly___roth___ 4 years ago

    Thank you for this!

  • @Azariy0
    @Azariy0 1 year ago

    I mean, I honestly would just "bite the bullet" here.
    Utilitarianism seems to be the most logical and the simplest moral system.
    I think that the most problematic argument for me to solve would be the population one, but I don't think it can ever be a real problem in practice.
    Fewer humans will offer lower overall happiness, but they are much easier to control, so the chance of them rebelling and breaking the utilitarian system is much lower. Also, because they will be very happy, they have no reason to rebel, really.
    And now that I think about it, this thought experiment is quite inaccurate to reality. With unhappy citizens, a nation would not be able to exist for very long. That is due to Maslow's hierarchy of needs. Unhappy people would not be able to do something like science because they haven't fulfilled their basic needs. Without science, the society would fall.
    So, practically, every utilitarian nation has to allow for every human need to be fulfilled, and at that point, their citizens can no longer be classified as "unhappy".

  • @blurb8397
    @blurb8397 4 years ago +1

    I don’t think most of these issues are really issues of utilitarianism, but rather issues of doing it INCORRECTLY.
    By analogy, if you want to improve the economy, you don’t just look at one number, you look at a LOT of different measurements, quantifying different aspects of the economy which are all important.
    How about the following definition?
    First, let's define "increasing utility" as changing the world in a way which as large a percentage as possible of the current population of our current world, were they rigorously thinking, informed agents, would agree sustainably increases pleasure & decreases suffering for all of the current & foreseeable future population.
    Now, regarding the conundrums you raised.
    First off, Felix.
    This is just an issue of using a bad measurement for utility, and says more about misuse of statistics than it does about utilitarianism.
    This would be analogous to using only the GDP or total wealth to measure our economy, and it has the same problem of masking massive inequality.
    While Felix would certainly think that such a world would be best for him, the rest of the population wouldn’t want that world, as it’s not good for them.
    Next, the experience machine.
    This problem lies in the definition of “pleasure”, I think.
    I’d think it’s already fairly established that people generally want meaning in their life.
    I think it’s fair to include “meaning” as an important part of pleasure.
    Now the problem of the experience machine is largely solved, because absolutely most people will agree that being in that machine permanently will take away meaning from their lives, and thus lower their pleasure.
    Even if making that transition would make them feel better after said transition is done, the CURRENT population feels that this transition would lower utility by losing the meaning in our lives.
    And lastly, the hospital problem.
    I think everyone in the current population can agree, a world in which you can get your organs harvested without your consent, is not a world they want to live in, and thus is a world with less utility.
    And now, a problem which you didn’t raise, but I think should be addressed given the “most of the population” part of the definition for increasing utility I gave above:
    Why should any specific person be helped, if helping them only increases THEIR pleasure, not that of most of the current population?
    Once again, the answer lies in asking the question “what kind of world would a certain policy make for us?”
    A world in which people don’t get any help if they’re not of value to all of the population, is a world which is worse for most of the current population.
    A world in which you get help, regardless of your status, (so long as helping you doesn’t hurt many other people, say, if you’re a tyrant for example), is a world which is better for most of the current population.
    And what about a world, in which most of the population lives very good lives, while a designated small minority suffers for the majority’s wellbeing?
    A sort of opposite to the Felix thought experiment?
    Alright, this does seem to be where my definition does go against our ethical intuitions.
    Maybe a better ethical system would be a COMBINATION of utilitarianism as I defined above, with an ethical system built around the rights of the individual (human rights, social rights, etc), and around trying to minimize harm to the rights of as many individuals as possible?
    As in, “increase utility, without violating rights”
    What do y’all think?

    • @THUNKShow
      @THUNKShow  4 years ago +1

      You're describing a form of pluralism!
      plato.stanford.edu/entries/value-pluralism/
      FWIW, I think you can easily find holes to poke in those arguments, e.g. the "democratic" angle of your formulation would indicate that the happiest 50% of the population should murder the unhappiest 50% & take their stuff, for example. (Even the unhappiest people would agree that it'd be better for everyone left!) If you want to come to the Discord to chat, we can talk it out - I think RUclips comments aren't necessarily the best medium! :)

  • @Biomechanic2010
    @Biomechanic2010 10 months ago

    I'm amazed no one else has realized that well-being isn't a satisfactory metric for moral impact. It's always been wrong. Instead, desire utilitarianism was the first step toward identifying the actual metric of morality: agency. The ability of agents to act, and actions in pursuit of that outcome, are what are moral. Hence, I offer "agency utilitarianism." In this way, all action and inaction can be assessed for their moral impact in a consistent way. It's much more considerate than Western philosophies in that agents are mediums in which actions occur, and are not intrinsically defined by the action or its morality. Instead, we can enjoy nuance and moral algebra: "The ends justify the means; however, the means inform the end." If you save the world at the expense of 90% of people dying, that's better than not saving the world at all. However, it would have been more moral if only 89% of people had died.

  • @eduardocsuf
    @eduardocsuf 1 year ago

    Great vids overall! But I think that most of the faults noted regarding the utility of utilitarianism are strawman fallacies. Most of the flaws noted, here and elsewhere, attribute (potential) horrendous acts to utilitarianism, but they are far fetched. Each individual is to be treated equally regarding their happiness (ie, pleasure AND pain). The significance of pain is often overlooked. We could make the case, even refer directly to Mill's writings, that avoiding pain is just as important as promoting pleasure. It may even be more important as we can live with contentment, but not with perpetual pain. I think this is a more realistic picture of "net happiness" in society, empirically speaking. Utilitarianism starts with the individual in society, and that we as individuals should promote the well being of society, but not at the expense of foolish pleasures. We have to be more careful how we outline utilitarian ideas. It's not perfect, but has its benefits. I tend towards stoicism, but am inclined to say that utilitarian thinking welcomes plurality and flexibility, which includes virtue. Thanks again for thunking!

  • @justus4684
    @justus4684 3 years ago +1

    Nice vid

  • @leonardosantos21
    @leonardosantos21 4 years ago +1

    much better light.. nice

    • @THUNKShow
      @THUNKShow  4 years ago

      Oh thank god, I was wondering if I'd improved it or made it worse. Thanks for the feedback!

  • @N-HTTi
    @N-HTTi 2 years ago

    I came here because I have issues with utilitarianism, such as its black-and-white thinking.
    This video, though, barely touches on that and mostly repeats bad arguments.
    It is not: maximizing happiness over minimal happiness;
    it is: maximizing the happiness that can exist over suffering.
    The basic utilitarian argument is that we do what's best for the majority.
    Two people, one versus the other, is not within utilitarian parameters.
    If anything, utilitarian thinking would promote equality to the degree that everyone would reach the same level of happiness.
    So no, you got the ice cream example upside down. A utilitarian would argue that since person A enjoys it more, he should get less so that person B can get more and enjoy it as much, thereby increasing utility.

  • @passingthetorch5831
    @passingthetorch5831 4 years ago

    I'm sure you aren't a hard-core libertarian, but since you recently showed that utilitarianism is impossible, and here that it seems to leave a lot to be desired, you might check out "The Ethics of Liberty." Utilitarianism always leads to conflict between people. Meanwhile, we can create a rights-based ethics which is "consistent."

  • @eduardocampos5739
    @eduardocampos5739 4 years ago

    Is that coco?

    • @THUNKShow
      @THUNKShow  4 years ago +1

      That's Newton. ^.^

  • @jorgemachado5317
    @jorgemachado5317 4 years ago +7

    I am officially giving up objective morality. Kinda sad

    • @user-in5ru2cd9l
      @user-in5ru2cd9l 3 years ago

      You mean subjective?

    • @planetary-rendez-vous
      @planetary-rendez-vous 1 year ago

      Utilitarianism is not the poster child of objective morality.

    • @JM-us3fr
      @JM-us3fr 1 year ago

      See my post for rebuttals to each of these arguments

  • @String.Epsilon
    @String.Epsilon 4 years ago +2

    I remember an old thought experiment about an orgasm button. And that such a thing would be devastating to humans, because we'd constantly just push it instead of doing anything else.
    But with maxed out utilitarianism, not only would one be mandated to constantly push this button on everyone. You'd also be mandated to invent this button and install it on every human on earth. Because you ought to maximize happiness.

    • @Xob_Driesestig
      @Xob_Driesestig 4 years ago +1

      With classic utilitarianism you are (probably) correct. With more modern utilitarian models (like preference utilitarianism) this is mostly negated.

    • @6ThreeSided9
      @6ThreeSided9 4 years ago +1

      If you end humanity, you end the ability for happiness to be generated. From a utilitarian perspective this maximizes short term utility, but destroys any hope of long term utility. The only place this would result in positive utility is if the projected time remaining for the species were very small - in which case, yeah, fuck it, let’s party and enjoy what little time we have left.

    • @THUNKShow
      @THUNKShow  4 years ago

      Obvs we have to set up some infrastructure first so we can perpetuate the species while continuously stimulating our pleasure centers!

    • @6ThreeSided9
      @6ThreeSided9 4 years ago +1

      THUNK In theory this would be the optimal solution, yes. It would require technology well beyond what we currently have though if we don’t want that infrastructure to slowly deteriorate until humanity went extinct.

    • @lionelbokeli4018
      @lionelbokeli4018 2 years ago

      If you think about the implications of utilitarianism for more than a second, this is a very bad counterargument that can be debunked easily. Because it is devastating to humans, we wouldn't do such a thing ("devastating" entails that it would hurt humans in the long term and lead to pain, doesn't it?). And if what you mean by devastating doesn't lead to pain, why would it be bad? This is like saying: "utilitarianism says you should instantly shoot heroin because it feels good, and utilitarianism says you should do what feels good." But utilitarianism does not work like that. It does not at all follow from utilitarian principles that you should seek short-term pleasure and then accept the long-term bad consequences. There are plenty of other criticisms to be made, though.

  • @randalltilander6684
    @randalltilander6684 2 years ago

    Great video. I'm wishing that he had also addressed the "ant and the grasshopper" problem. Aesop's grasshopper has a grand old time throughout the summer while the ant toils away at accumulating food. When the snow comes, should the ant be compelled to surrender a portion of his food so that both survive? The utilitarian would argue yes: the grasshopper would have had more pleasure in the summer and would suffer more grievously in the winter. One can easily see the moral hazard here. The next summer, the ant would be disinclined to work, and there would be insufficient food for either of them the next winter.
    This issue becomes relevant in the real world with the politics of the pandemic. Should those who refuse to mask, refuse to vaccinate, and party hardy have a right to monopolize hospital facilities? In terms of the availability of hospital beds, would it not make more sense to allocate these first to those who have taken precautions? That would be the less utilitarian but more sustainable way to allocate resources.

    • @bernardobertamini856
      @bernardobertamini856 2 years ago

      In the grasshopper thought experiment the utilitarian wouldn't say "yes" at all... utilitarianism doesn't say that long-term consequences don't need to be taken into account. It says that we should maximise overall happiness and minimize overall suffering at all times.

  • @Zurtle1
    @Zurtle1 3 years ago

    Utilitarianism isn't egoism, bud; just because a serial killer is having fun doesn't mean utilitarians would support him. The definition of utilitarianism is the most good for the most people, and a serial killer and an ice cream hog are not good for most people.

    • @THUNKShow
      @THUNKShow  3 years ago +1

      The utility monster argument privileges "the most good" over "the most people," for sure! Usually utilitarians are willing to accept some sacrifices in the number of people if it results in a net gain in utility (e.g. a heroic sacrifice of one person's life so everyone else can live in idyllic bliss) - the utility monster argument just takes that to its logical extreme. :)