The moral equation

In the previous posts (Part I, Part II) in this series I proposed that ethics from a global perspective is best seen as an optimization problem, and that the objective is to maximize the individual utility values of all sentient beings. Utility was defined in terms of what each sentient being would prefer for itself if it could somehow choose one out of a number of counterfactual alternatives after having lived through them all.

To close in on a mathematical formulation of the moral optimization problem, I propose that each sentient individual (as defined in Part II) can, at any given time, be assigned a real number which we will call its utility. The higher the utility, the more the individual will be inclined to prefer this particular state of the world over the alternatives. I postulate that it is possible to define utility in this manner so that preferences are transitive, i.e. if outcome A is preferred to outcome B, and B is preferred to C, then A is always preferred to C.
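
In symbols, with \succeq denoting "is at least as preferred as" and u the utility assignment (notation of my choosing, nothing canonical), the postulate reads

A \succeq B \;\wedge\; B \succeq C \;\Rightarrow\; A \succeq C,

and it is exactly what a real-valued utility gives us for free: if u(A) \ge u(B) and u(B) \ge u(C), then u(A) \ge u(C) follows from the ordering of the real numbers.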

Unfortunately, intransitivity of preferences in decision making is routinely observed in both behavioral economics and psychology studies, which makes this assumption questionable at first sight. However, these studies observe how people actually make decisions in real life, and what they seem to show is how sensitive our decision making strategies are to what should be irrelevant situational details. For example, having recently encountered an arbitrary number affects how much money subjects report being willing to pay for goods, such that a higher number makes people willing to pay more, even when it is explicitly stated that the number is random (for more on such quirks of cognition I recommend Daniel Kahneman's seminal book Thinking, Fast and Slow). As such, these experiments do not reveal the actual utility subjects get out of the different options, only how much relative utility the subjects, with their limited foresight, predict they will get out of them. And since decision making is such a hard task, we often make do with quick and dirty heuristics that work well enough most of the time, even though they may not respect transitivity of preferences.

Indeed, we often realize eventually (or at least come to believe) that we made the wrong decision as we bear the consequences. This suggests to me that while we often fail to make the optimal choices, it is probably possible, at least in principle, to order the utility values of counterfactuals. However difficult this is in practice, all we need for our theoretical edifice is that it is possible in principle. Transitivity is thus one of the central postulates of the theory proposed here.

We are now at a point where it is possible to formalize the optimization problem mathematically. As I argued in Part I, ethics is fundamentally a multi-agent problem. The main questions we need to address in order to move from an individual to a global measure of utility are 1) how to assign moral weights to different moral subjects, and 2) how to aggregate their individual utilities into a single global value.

This post will mostly deal with the second question, as the first is complex enough to warrant at least a full post of its own. For the purposes of this discussion, we will simply state (without supporting arguments for now) that every individual can be assigned a number representing its degree of sentience, and that this number is the moral weight of the subject. Conceptually, the degree of sentience encodes the extent to which an individual is aware of, and has a capacity to care about, distinctions in its environment.
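
To fix a minimal notation for what follows (my own shorthand, nothing more), write

w_i = \text{degree of sentience of individual } i \;(\text{its moral weight}), \qquad u_i = \text{utility of individual } i,

with the attainable range of u_i growing with w_i.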

As for how to weigh together individual utilities, there are two obvious suggestions which have been amply discussed throughout the history of utilitarianism, both with clear merits and shortcomings. The first is to simply add up all individual utilities. Each individual has a range of possible utility values, the span of which depends on its degree of sentience. The global utility at a given instant in time is then simply the sum of all individual utility values. In this scheme, a world with many sentient beings is considered morally superior to one with fewer or more dimly sentient beings, which makes a certain amount of sense. After all, a universe devoid of sentience is not a very meaningful place. However, it also tells us that we should put as many children as possible into the world, even if they are expected to live in misery. Clearly this doesn't work as universally as we would like it to.
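
In this notation, the summing scheme amounts to something like

U_{\text{total}} = \sum_{i \in S} u_i,

where S is the set of all sentient beings in existence at the given instant; the sentience weights need not appear explicitly here because the span of each u_i already scales with w_i.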

The other obvious choice is to instead compute the average utility and strive to maximize that. In this scheme we avoid the absurd moral imperative to procreate at any cost, but on the other hand, it tells you to kill anyone whose utility level is below average, which hardly seems like the moral thing to do. So we have two simple schemes, neither of which stands up to scrutiny. Of course we could get creative and try to devise some kind of nonlinear function of the individual utilities, but we don't have complete freedom in choosing such a function. To begin with, we want to preserve the symmetry between individuals, in the sense that nobody is special. We also want the global utility to be a monotonically increasing function of the individual utilities, so that if anyone is better off, all else being equal, the global utility increases. So our hands are pretty much tied. What gives?
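
Again as a sketch, the averaging scheme reads

U_{\text{avg}} = \frac{1}{|S|} \sum_{i \in S} u_i,

where |S| is the number of sentient beings; one could also weight the average by the sentience degrees w_i, a choice I return to in the summary below.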

My solution to this problem is to note that the moral good of a single state in isolation is not meaningful. Ethics is about choosing among possible transitions from the current state to a future one. Because of this, transitions are what our moral value function needs to evaluate. In some transitions, the subtleties described in the previous paragraphs don't apply, since all the stakeholders are present both before and after the transition. In these cases it doesn't matter whether we use an additive or an average measure of utility; both yield the same ordering of possible transitions.
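
The reason is simple: with a fixed set S of stakeholders the two measures differ only by a constant positive factor,

U_{\text{total}} = |S| \cdot U_{\text{avg}},

so they rank all candidate transitions identically (the same argument goes through for weighted versions, with |S| replaced by the total weight).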

The difficult cases discussed above all stem from changing numbers of moral subjects. In assessing transitions where the total amount of sentience in the universe changes, it is important to recognize that the stakeholders that matter are the ones in existence before the transition, as these are the ones who are currently real. We cannot take into account the interests of not yet conceived children, because they do not yet exist; they are only potential. They are not actual persons but a probability distribution over the persons they may become, if born at all. Likewise, in a transition where someone dies, it seems obvious that their interests should have a bearing on the global moral utility, so in this case we are led to choose an average measure of utility.
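
As a sketch in my own notation, a candidate transition T from the current state to a future state s' would then be scored over the set S_{\text{before}} of stakeholders existing before the transition,

U(T) = \frac{\sum_{i \in S_{\text{before}}} w_i \, u_i(s')}{\sum_{i \in S_{\text{before}}} w_i},

so that someone who dies in the transition still contributes their (presumably low) utility in s', while the not yet conceived do not enter the sums at all.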

This resolves the issues with the previous suggestions, but isn't it too simplistic to completely disregard the unborn? What about our moral duty to be good stewards of the world, so that future generations do not suffer for our sins? It would be a quite extreme position to say that we have no responsibility at all toward the unborn. The simple summing scheme told us that it is morally right to produce offspring even into miserable lives, but now it seems their fate is simply morally irrelevant.

Well, not quite. It is not that the unborn don't matter, and they are not unreal in the sense of being fictional. There will certainly be people in the next generation, unless we drive ourselves to extinction in the very near future. We can describe quite well the aggregate interests of, say, the next billion people to be born, because their individual differences, which we cannot predict, will smooth out over the population, and in the big scheme of things, they will be, well, human. Thus this group of people, even though they are not yet sentient beings, can be straightforwardly included in moral considerations about the future; failure to do so is negligence on the part of the moral optimizer. In a sense, then, these billion unborn children are real, even if none of them individually exists yet.

What about the moral implications of bringing a single child into the world? There are of course the effects on the parents and other currently living people, which our model already covers. But what about the interests of the potential new person? In this case the uncertainty is much larger. However, there are some things we can know that somewhat narrow the probability distribution of moral outcomes for the unborn. If the parents are starving in destitute poverty, the prospects of the child are probably not very favorable. If the parents are prosperous, live in an affluent society and yearn for children, its chances of having a life worth living are of course much better.
Following the principle stated in the previous post:

The interest of each individual should be considered in such a way that their moral vote is cast on the alternative that they would prefer above all others if they were to live through all alternatives,

it seems that it is morally good to bring a child into the world if the expectation value of the change in global utility, including the utility of the child, is positive. The child's utility should be evaluated over the probability distribution of possible life trajectories, in each case from the perspective of the distribution of personalities that the child could have. While this definition is hardly actionable in practice, it is theoretically consistent, which is all we need here.
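
In symbols (a sketch only, all notation mine): bringing the child into the world is morally good when

\mathbb{E}\left[ \Delta U \right] = \mathbb{E}\left[ U_{\text{with child}} - U_{\text{without child}} \right] > 0,

where the expectation runs over the joint probability distribution of life trajectories and personalities the child could have, and the global utility with the child counts the child among the stakeholders once it exists.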

Summing up our conclusions from this part of the series: ethics is an optimization problem with the aim of maximizing the global utility of sentient beings, where the global utility is defined as the (weighted) average utility of the currently existing moral stakeholders together with the stakeholders brought into existence during the transition, the latter represented only by a probability distribution over who they will be.

The probability distributions alluded to throughout this post merit some discussion of how they are to be interpreted. These distributions are Bayesian in nature, meaning that they reflect not some carefully measured frequency behavior of a well defined process, like the casting of dice, but rather the beliefs of whoever is doing the moral deliberation. As such they are not objective facts about the world. However, given a set of beliefs about what is likely to happen, the theoretical framework presented here is meant to give you guidance on how to act so as to maximize the expected moral outcome.

There is another point that is always relevant when making decisions under uncertainty. As a lot of very smart people have been credited with saying: it is hard to make predictions, especially about the future. And if the near future is hard to predict, the far future is that much more difficult. The further ahead you try to forecast, the more uncertain your projections become. This has a bearing on rational decision making. Economists have understood this for a long time and encode it in what are called discount rates. The economic concept exists for several reasons, only one of which is uncertainty, but the point here is that the further into the future something is expected to happen, the less weight you should give it in your decision making, simply because you are more uncertain about its relevance. In any actual application of moral optimization, there therefore needs to be some kind of discounting of the future.

This concludes the current discussion, and I would just like to close by repeating the moral equation as we have stated it in words:

Ethics is an optimization problem with the aim of maximizing the global utility of sentient beings, where the global utility is defined as the weighted average utility of the currently existing moral stakeholders together with the stakeholders brought into existence during the transition, the latter represented only by a probability distribution over who they will be and discounted in proportion to their uncertainty.
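
One way, and certainly not the only one, to render this in symbols (all notation here is my own shorthand) is to score a candidate transition T as

U(T) = \frac{\sum_{i \in S_{\text{existing}}} w_i \, \mathbb{E}[u_i] \;+\; \sum_{j \in S_{\text{new}}} \gamma_j \, w_j \, \mathbb{E}[u_j]}{\sum_{i \in S_{\text{existing}}} w_i \;+\; \sum_{j \in S_{\text{new}}} \gamma_j \, w_j},

where S_{\text{existing}} is the set of currently existing stakeholders, S_{\text{new}} the stakeholders the transition brings into existence (described only by probability distributions), the expectations are taken over the Bayesian belief distributions discussed above, and 0 < \gamma_j \le 1 is a discount factor that shrinks the further into the future, and hence the more uncertain, stakeholder j's existence is.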

If you've made it this far into the text, thanks for reading. As always, feel free to upvote, resteem, share on other social media or comment if you found this interesting.
