I've been focusing predominantly on metaethics, since until we can demonstrate that moral realism is correct, it seems pointless to bother with anything else. Otherwise, one might as well choose a favored system of ethics the way one chooses a favorite sports team, with no criterion other than preference. First, some definitions:
-- Objective: satisfies known functional requirements.
-- Subjective: unrelated to any known functional requirements.
The structural integrity of a building is required to prevent it from collapsing -- therefore a design lacking it is objectively wrong. Yet we can't say a building design is "objectively wrong" for having tacky yellow walls. That's just "subjectively unpleasant" -- unless the tacky color is causing traffic accidents left and right, in which case we can say it's objectively wrong, malfunctional, and in need of changes.
This is the practical engineering distinction for "objective" vs. "subjective". To define "objective" as "mind-independent" is impractical. After all, everything could be a figment of my imagination and "mind-dependent". Who can demonstrate otherwise, and how can I tell you aren't a figment of my imagination?
---- The Subjective "Is-Ought":
There is no objective bridge between "is" and "ought" unless the "ought/should" in question is really a "must". Otherwise the inference is hopelessly subjective, like so:
P: I am hungry.
C: I ought to eat.
... to which someone might ask, "Why ought you eat?" Me: "Because I'm hungry and want to eat." This is subjective. We could try:
P1: It's natural for human beings to eat food.
P2: I'm a human being.
C: I ought to eat.
... at which point we have a naturalistic fallacy. Q: "Why ought you eat?" Me: "Because it's natural." Q: "Why ought you do what's natural?" Me: "Because I wanna do what's natural!" Now it's subjective as well.
---- The Objective "Is-Must":
There is an objective bridge between "is" and "must":
P: Humans periodically require food to survive.
C: Humans must periodically eat.
This is objective and purely in the realm of description. Once we establish a "must", then any additional "musts" derived from it are falsifiable and not based on preferences. They could be objectively wrong, but not grounded in subjectivity. Rather, they're further specifications of objective functional requirements. Ex:
P1: We require food to survive.
P2: We are extremely short on food.
P3: There appears to be food we can gather in that abandoned building.
P4: There doesn't appear to be food anywhere else.
C: We must investigate that abandoned building.
One or more of the premises, along with the conclusion (which is still a description), could be objectively wrong and falsified, but none of it is grounded in subjectivity; it rests instead on objective functional requirements: "We need to do this whether we want to or not."
Whether someone wants to meet these requirements is up to them. A person can choose not to eat and starve. Yet it's objectively the case that they failed to meet their requirements (which is why they ceased to exist).
A "must" would be a Hypothetical Imperative, but a critical form of it: objectively required for anything to continue to function and exist. "If this building is to continue existing, it must maintain structural integrity." It's not a subjective and unfalsifiable hypothetical such as, "If you want the best life, you should be a Hegelian."
---- Survival Requirement
The root requirement of an ethical system is to protect the existence of its followers, since an ethical system whose followers have all abandoned it or died can't function. From there we can derive additional requirements:
P1: Social species are required to cooperate to survive.
P2: Humans are a social species.
C: Humans must cooperate to survive.
There are flaws in P1, since there are cases in which cooperation is neither required nor productive for survival, such as cooperating with those who seek to destroy us. Yet that's an epistemic error in the premise, not the result of subjectivity.
---- To Justice and Human Rights
Recognizing this flaw, we might refine P1 in ways that lead us to derive a system of justice, along with human rights, as required to protect society. Although we've gone all the way from basic survival to a system of justice and human rights, we're still completely within the realm of specifying objective requirements.
Once we specify a just society as a requirement for the survival of that society, there can still be criminals who subjectively choose to be unjust and violate human rights. Yet they would be objectively unethical if that system of justice and associated human rights truly are required to protect the members of the society and therefore the society itself. This is all within the realm of falsifiability, and completely divorced from preferences and cultural values.
Again, some of it is likely wrong. The system of justice may require revisions, for example, or we might require more or fewer human rights -- but if so, it would be objectively wrong and falsifiable.
---- Objective Moral Facts
This is the path to objective moral facts, starting from the basic, non-negotiable foundational fact that an ethical system requires protecting its followers to continue to function -- in the same way an airplane must not violate the laws of physics or else it'll crash and explode.
The better we meet our requirements, the longer and more effectively an ethical system can continue to function -- as with a building constructed with a solid understanding of structural engineering, which can last for many generations and withstand even earthquakes.
---- Questions
>> You smuggled in an "ought" by assuming that it's a value for a system to survive.
I'm not saying, "One ought have a functional ethical system." I'm defining the parameters required for it to continually function. People are free to prefer that 1+1=3, for example, and determine, "1+1 ought to be 3." I'm just pointing out that they aren't going to function in tasks that require arithmetic, as they've chosen to violate the functional requirements of arithmetic.
>> What if a person's ethical system seeks to only function for a brief period of time?
That's their prerogative, but it'd be objectively wrong to claim it correct, so long as "objectively wrong" holds any practical meaning. If a person's proposed business practices, offered for others to follow, are designed to bankrupt business owners, they are objectively wrong in the same way, and defeat the entire purpose of adopting business practices in the first place.
Much like an architect who designs a building on the faultiest of foundations, causing it to collapse days later at the slightest gust of wind, their design is objectively incorrect because it violates functional requirements. That's what "objectively wrong" means -- violating requirements. If something isn't violating a currently known requirement, then it's only "subjectively unappealing".
>> What about when individuals jeopardize other people's survival in order to optimize their own?
This is why I defined the requirement in terms of the ethical system itself, not individuals. An individual who optimizes their own survival at cost to others can cause civil unrest and pose a cascading threat to the system's integrity. Meanwhile, an individual who voluntarily self-sacrifices for a greater good is likely aiding the system's integrity while being celebrated for their noble deeds. We have to operate within our epistemic limits, and errors will be made.
>> What's the appropriate scale of the system? What happens with conflicts of interests?
We must act practically according to our human limits. This isn't a matter of subjectivity, but rather practical constraints in meeting objective requirements. A parent must prioritize their own child to avoid risking neglect, and a leader of a community must prioritize their own community. Decentralization is an inherent requirement of the system given our human limits, as we aren’t Borgs with a shared hive mind.
Resolving conflicts of interest among groups is complicated, but not absent objective requirements. Building a space shuttle is incredibly complicated -- it requires decentralized collaboration, decentralized prioritization, and a hierarchy to resolve conflicts -- yet it still has objective requirements.
---- Metaethical Criterion
Lacking a metaethical criterion leaves us vulnerable to the threat of visionaries whose only criterion for "right" vs. "wrong" is whether people are following their ethical system -- those who believe the ends justify any means. There's no way to measure whether their system is functioning, except by whether people continue to follow it.
If we recognize the survival of an ethical system's followers as a foundational metaethical requirement -- and derive the ultimate "ought" to act in accordance with that requirement (the one subjective choice required for continual function) -- then our inability to perfectly predict the future should make it clear that, for example, mass starvation resulting from the ethical system is an immediate emergency, threatening its integrity and demanding urgent repairs or changes to the ethical system itself.
In the same way, our epistemic limits and practical risk aversion should prompt us to put out a fire on an airplane immediately, rather than waste time debating whether the plane is even meant to avoid crashing.
After all, if an ethical system's metaethical requirement isn't its continued function, then there can be no practical and functional alternative. Any real-world system -- ethical or otherwise -- requires safeguards to ensure its continued function, and ways to measure its reliability and efficiency, to give us any ability to optimize blatant inefficiencies and repair blatant malfunctions. Otherwise the system is completely aimless, like a truck with a blind driver.
Some further notes from Qs I've received:
>> How did you make the leap from collective survival to human rights? I can imagine many cases where violating human rights would improve survival.
A programmer working on a system with millions of lines of code can't seek to optimize every single function to the nth degree, or else they'll almost certainly produce neither an efficient system nor even a functional one. Especially if the system is mission-critical -- and an ethical system is as mission-critical as imaginable, given that any error could cost human lives -- we are required to optimize only the most blatant inefficiencies and correct the most obvious errors, and even then with the utmost caution.
If the first response to my proposal to protect collective survival as a functional requirement is to conjure images of historical atrocities, it's critical to recognize that no cautious, humble person with an awareness of our epistemic limits would ever try to "optimize" a mission-critical system in such a reckless way.
-- Human Rights
On the contrary, one of the first clear derivatives of my proposal is basic human rights. Given our inherent limitations and the mission-critical nature of an ethical system, the first engineering safeguard and risk-aversion against catastrophic failure is to prevent the system from violating the rights of any followers.
To assume that violating human rights will optimize the system in the long run is like assuming that setting the wings of an aircraft on fire will somehow prevent it from crashing. It’s an absurd assumption given what we can reasonably predict. Even if such an outcome were to somehow occur, none of us would have been qualified to make that assumption. History repeatedly shows that when people have assumed they could predict the long-term benefits of such drastic measures, they were catastrophically wrong. Epistemic hubris -- the overconfidence in our ability to foresee and control complex consequences -- is one of the most dangerous possible character traits when dealing with a mission-critical system.
-- Mission-Critical Engineering Standards
In the same way that engineers working on a complex system take the utmost care to impose cautious safeguards -- even if that means not optimizing every last function -- those building an ethical system must prioritize these same protections. Human rights are therefore one of the most fundamental safeguards against catastrophic failure in an ethical system. This is not an abstract ideal, but the most practical, tried and tested approach to preserving the integrity of a critical system. This is the discipline and compromise required of anyone working in a mission-critical field.
>> Morality is concerned with "oughts", not "requirements".
"1+1 ought to be 2."
---- "Why ought it be 2?"
"Because the rules of arithmetic say so."
---- "Why ought we adhere to the rules of arithmetic?"
"Because we ought to be logical."
---- "Why ought we be logical?"
As an Engineer, it is not my place to tell Architects what they "ought" to do. What I'm telling them is what will happen if they don't: their building will collapse and destroy its inhabitants. When we violate the requirements of mathematics, logic, gravity, or the moral requirements I specified, our society begins to malfunction and work towards its collapse.
I'm not saying "people ought to be logical", or that "people ought to be objectively moral." I'm saying that their chosen moral framework will cease to exist if they aren't, and even if their objectively immoral behaviors are subjectively considered "moral" in their framework.
>> Nothing we build can be expected to last forever. Why should we focus on longevity?
I agree, but my proposal is akin to making sure a teenager doesn't die prematurely of an easily preventable heart attack. All the buildings we construct will eventually collapse, but my proposal is to allow continual repairs to their structural integrity so they don't collapse prematurely or unexpectedly. That is all we can do as human beings to preserve functionality. An ethical system that protects the survival of its followers stands the best chance of lasting for generations, receiving the necessary repairs and changes along the way as required to continually adapt and improve. Nothing lasts forever, but things that maintain their integrity will function much longer -- and that's all we can do in this life.
Again, I am not sneaking in a value judgment here. People are free to design short-lived systems that can only function for a very limited and unpredictable period of time. Yet that's precisely how we generally use the term "malfunctional", as with an airplane that crashes soon after take-off, or a computer whose hardware breaks down only a year after purchase. It is our choice whether to accept or reject the constraints that nature imposed upon us; my job as an engineer is to point out those constraints so that no one unknowingly violates them and risks catastrophic failure.
---- Obviously Objective Moral Facts
It's surprising to me that many, when asked why they shouldn't abuse innocent life, struggle to answer the question without simply citing rules. They could appeal to human rights, but then they're challenged to explain why we have human rights. One could take a Kantian approach -- never treating a person merely as a means, per the Categorical Imperative -- but then they're challenged again. They could try to appeal to our shared empathy, but then we're appealing to emotions.
There's a simple engineering answer that avoids this infinite regress: the reason we must value logic, mathematics, human rights, the Golden Rule, Categorical Imperative, empathy, one or more gods -- whatever we want to throw in -- is that these are perceived requirements for the continued operation of our society. If we were to ethically condone abusing innocent members of our society, every reasonable assumption tells us that we'd be working toward societal collapse, triggering a cascading system failure of the very ethical framework that condones such actions.
If someone disagrees, they must demonstrate that a society can remain stable -- without clear signs of systemic decline or failure risks -- while ethically condoning the abuse of its innocent followers. Since we cannot see into the distant future, the best we can do is heuristically assess patterns of structural integrity over time, and whether we're likely heading towards continual function or failure. The burden of proof is now on them, as the reasonable assumption is that ethically condoning such actions moves a society towards its demise.
-- Self-Defense
Condoning harm against innocent people is like allowing engineers in a mission-critical system to modify lines of code that are neither erroneous nor inefficient. Even if the changes appear harmless or beneficial, they risk instability and catastrophic failure. Harming those who are threatening others in self-defense, however, is another matter -- akin to correcting a line of code that is clearly erroneous.
>> What about wars? What social services should we provide? [Other complex questions]
I don't claim to have anything close to all the answers needed to maintain the structural integrity of an ethical framework. These are extremely complicated matters and we should err towards existing models, but "complicated" doesn't mean absent objective requirements. In the future, I'm sure knowledgeable people will look back on us and realize we made many errors that needlessly jeopardized our system's integrity, as we do today looking back upon the Leaning Tower of Pisa. We do the best we can given what we know today, and discipline in a mission-critical system demands the mindset, "If it ain't broke, don't fix it." We have to tread carefully.
My aim is not to provide the means to answer all questions; it's to provide the means by which we can determine when our answers are likely incorrect. The goal is to provide a criterion for determining whether an ethical system is malfunctioning and in need of change or replacement, and to protect us from zealots pursuing lofty ideals with complete disregard for risks and costs.
>> What about human flourishing and well-being? Aren't those important?
Absolutely, but that's in the currently-subjective realm of Architecture, and not the objective requirements of Engineering. As an Engineer, I cannot tell an Architect how to make a beautiful building that results in the flourishing of its inhabitants, or else I've overstepped my boundaries of determining what's objectively and functionally required for the building to function.
An Architect can design a beautiful building informed by their own philosophy, cultural ideas, and tastes. Yet if it collapses immediately, none of its beauty matters in the real world, and that's why we Engineers deal in the realm of requirements and solely what's required for such beauty to continue to exist.
When it comes to objective ethical requirements, would we want to include a concept as ambitious as "well-being"? That would imply that we're required to optimize each other's well-being, whether we wish it or not. Given that we cannot, as of yet, perfectly specify what's required to optimize the well-being of a society's members, the most effective solution might be to leave us free to make that determination for ourselves and loved ones, and remain the Architects of our own lives.