Saturday, January 16, 2010

Pragmatism, uncertainty and ethical principles.

The essential feature of ethical pragmatism is that actions and conditions (objective (non-minded) states of affairs) do not have direct, intrinsic (deontic) ethical value. Actions and conditions (and the potential for actions and conditions) have effects on our subjective feelings; they therefore have indirect, instrumental (pragmatic) ethical value. Similarly, ethical principles have no objective ethical truth: it is neither objectively true nor false that "we should not kill innocent people". These observations are actually true, whether we like them or not.

A naive view of pragmatism, Bentham's utilitarianism, says that we should predict the effects of various actions and conditions and choose the alternative that maximizes the pragmatic outcome. This seems obviously true: what's the alternative? Should we choose an alternative that we know is worse? But we should always be suspicious of the obvious; it's often an indication that we're missing something important*.

*Astute readers of this blog will notice that I sometimes call various statements obviously true; you should infer either that I myself am being insufficiently skeptical or that I'm offering a subtle clue that I want to examine the issue more deeply.
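
To be concrete about what Bentham's procedure amounts to, here is a minimal sketch in Python; the actions and utility numbers are invented purely for illustration:

```python
# A minimal sketch of naive (Benthamite) utilitarianism: predict the
# pragmatic value of each alternative and choose the maximum. The
# actions and utilities here are hypothetical, purely illustrative.

def naive_utilitarian_choice(predicted_utility):
    """Return the alternative whose predicted utility is highest."""
    return max(predicted_utility, key=predicted_utility.get)

predicted_utility = {
    "do nothing": -5.0,  # invented outcome values
    "act":         3.0,
}

print(naive_utilitarian_choice(predicted_utility))  # -> act
```

The procedure is trivially correct given the numbers; the whole question, as we'll see, is where the numbers come from.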

Bentham's utilitarianism fails to account for uncertainty: lack of knowledge about the future that we can't (or can't yet) quantify as risk, i.e. as a probability distribution calculated or modeled analytically from first principles. We can, for example, calculate the risk that a pair of dice will come up snake eyes, but we are uncertain whether the dice are loaded or fair.
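
To make the distinction concrete, here is a sketch; the loaded-die weights are invented for illustration:

```python
from fractions import Fraction

# Risk: given a model (fair dice), the probability of snake eyes is
# calculable from first principles.
p_snake_eyes_fair = Fraction(1, 6) * Fraction(1, 6)
print(p_snake_eyes_fair)  # 1/36

# Uncertainty: if we don't know whether the dice are loaded, no such
# calculation is available. These weights are invented; the point is
# that without knowing them, the calculation above tells us nothing.
loaded_face_weights = {1: 0.05, 2: 0.10, 3: 0.10, 4: 0.10, 5: 0.15, 6: 0.50}
p_snake_eyes_loaded = loaded_face_weights[1] ** 2
print(p_snake_eyes_loaded)  # ~0.0025, very different from 1/36
```

The first number is risk; whether the second model applies at all is uncertainty, and no amount of analysis of the fair-dice model resolves it.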

Our ethical and political thinking is almost completely dominated by uncertainty. Even with our considerable modern scientific knowledge, there's a lot about the world we just don't know, even probabilistically. We can guess about future uncertainty by looking at the past, but without a model based on first principles, we don't know what elements or conditions might have changed that would render the past irrelevant as a guide to the future.

Even when we do have probabilistic knowledge, human intuition seems extremely poor at calculating actual probabilities. Even professional statisticians have reported to me that they often make fundamental errors in their thinking; becoming convinced that a specific calculation is accurate requires careful, time-consuming study. It's not just a matter of complexity: human (and even animal) neurology has evolved to perform equally complex calculations, for example the ballistic calculations that allow us to play catch or the aerodynamic calculations that allow birds to fly. My guess is that the mental tools we evolved to handle uncertainty also handle the actual risks faced by animals and humans well enough that our difficulty intuiting risk was not selected against.

By definition, we can't rationally and analytically determine uncertainty. But we still have to move around in an uncertain and hostile world. Hence we can conclude that we have evolved mechanisms to deal with uncertainty. Evolution can solve problems that analysis cannot or has not yet solved. (We do not, for instance, have to understand anything about molecular biology to perform artificial selection on animals and plants.) Some of those mechanisms are biological — animals too have to survive in an uncertain world — and some are social.

Understanding uncertainty as a key element of the biological and social evolution of ethics and politics seems to have good explanatory power*. One knotty problem in ethical philosophy, the Trolley Problem, becomes easily comprehensible under the uncertainty paradigm. In the first case, where the agent must choose whether or not to throw a switch to redirect the trolley to the track with fewer people in the way, the uncertainty seems intuitively symmetrical, and we have no problem making the decision analytically. In the second case, however, where the agent must choose whether or not to push the "fat man" in front of the trolley to stop it, the uncertainty is qualitatively asymmetrical: we are uncertain about the consequences of pushing a person in front of a train in a substantively different way than we are uncertain about letting a runaway trolley endanger the people on the tracks.

*Explanatory power is not sufficient proof that a theory is true, but it's a good start.
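
One way to see the asymmetry formally (all numbers below are stipulated placeholders, not measured values): in the fat-man case, the probability p that the push actually stops the trolley is exactly the kind of quantity we have no model for, and the expected-value verdict flips within any honest range of ignorance about p:

```python
# A hypothetical sketch of the fat-man case under uncertainty.
# If the push works, one person (the man) dies; if it fails, six die
# (the man plus the five on the track). Doing nothing costs five lives.
# All numbers are stipulated for illustration.

def expected_deaths_if_push(p_push_works):
    return p_push_works * 1 + (1 - p_push_works) * 6

for p in (0.1, 0.2, 0.5, 0.9):  # an honest range of ignorance about p
    deaths = expected_deaths_if_push(p)
    verdict = "push" if deaths < 5 else "don't push"
    print(f"p = {p}: expected deaths {deaths:.1f} -> {verdict}")
```

The verdict flips just past p = 0.2, well inside the ignorance range, so expected value alone cannot settle the second case; in the switch case, by contrast, both branches admit comparably reliable risk estimates and the calculation goes through.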

Indeed, the vast majority of arguments and experiments in ethical philosophy simply assume certainty* about the consequences of the alternatives. The Trolley Problem, for example, assumes that we are certain that pushing the fat man in front of the train will stop the train, and it assumes that if we don't push him, we are equally certain that five people will die. I'm all for making simplifying assumptions to understand complex systems, but it's possible to simplify away the wrong feature: one cannot, for example, understand anything at all about aircraft design by simplifying away air resistance. Likewise, we cannot understand ethical and political behavior if we simplify away uncertainty. And that does appear to be the case: experiments that explore ethical thinking by assuming certainty yield confused and contradictory results; they either make people look like blithering idiots or require a rococo ethical ontology.

*Or at least a rational understanding of the risk.

Furthermore, given that selection — both biological and social — is a negative process, it makes sense that our evolutionary response to uncertainty is biased towards avoiding catastrophe rather than optimizing outcomes. The catastrophic failure of some variation leads to immediate adverse selection; an optimization appears selected "for" only indirectly, when the optimization alters the environment to select against unoptimized variations. And we do in fact see these characteristics in many cognitive biases.
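
This bias is easy to state as a decision rule: rank alternatives by their worst case (maximin) rather than by their expected value. A sketch, with all payoffs and probabilities invented for illustration:

```python
# Catastrophe-avoidance vs. outcome-optimization as decision rules.
# Payoffs and probabilities are invented for illustration. An
# expected-value maximizer prefers the risky optimization; a worst-case
# (maximin) rule -- the bias selection would plausibly favor -- rejects
# it because of the rare catastrophe.

options = {
    # option: list of (probability, payoff) outcomes
    "risky optimization": [(0.999, 10.0), (0.001, -1000.0)],
    "conservative":       [(1.0, 1.0)],
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

def worst_case(outcomes):
    return min(v for _, v in outcomes)

print(max(options, key=lambda o: expected_value(options[o])))  # risky optimization
print(max(options, key=lambda o: worst_case(options[o])))      # conservative
```

The expected-value rule picks the optimization (EV of about 8.99 vs. 1.0); the maximin rule picks the conservative option, since the optimization's worst case is catastrophic. Selection, being negative, punishes the catastrophe immediately, while a missed optimization is penalized only indirectly.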

We can draw several conclusions from this analysis. First, although ethical principles do not have intrinsic objective truth, we cannot trivially dispense with them in favor of some sort of analytical utilitarianism. On the other hand, just because some ethical principle has not been selected against in the past does not mean it is some sort of eternal verity. Thus a degree of small-c conservatism is warranted — if it ain't broke, don't fix it — but the stronger conservative conclusion — that we must preserve tradition at all costs — is unwarranted. Second, when we do make changes, we must take special precautions against catastrophe, even at the cost of theoretical optimizations. Given this constraint, however, the best we can do rationally is the best we can do: if we are not simply to lobotomize ourselves en masse, then we still must act in the face of uncertainty.

Most importantly, however, we can see that most ethical and political discourse is simply bullshit, consisting mostly of analyses of the intrinsic merits of various principles. But ethical principles do not have intrinsic merits, and we usually cannot even analyze the consequences of changes to ethical principles to justify them directly.

There is only one good conservative argument: we know existing principles have not yet failed catastrophically. There is only one good "liberal" argument: we know the negative effects of these principles, and we must minimize the potential catastrophic consequences of any changes to these principles.

Furthermore, because ethical and political principles are the result of an evolutionary process, to make changes to these principles in a society, we must (somehow) exert negative selection pressure*: we must directly or indirectly deprecate bad ideas. Promoting a good idea ("positive" selection) will work only when adoption of that idea by a minority changes the social and economic environment in such a way that alternative ideas are deprecated.

*Negative selection pressure in the world of ideas does not mean killing people who hold bad ideas.
