thinking makes it so

There is grandeur in this view of life…

Scratch my back


Structural similarities are fairly apparent between Kant’s categorical imperative, the Golden Rule, and the evolved behavioural strategy of reciprocal altruism. This is not to say the Golden Rule or the categorical imperative are identical with or derived from evolved reciprocal altruism. But the similarities certainly seem worth exploring.

This article follows Masters of war in a discussion on The ethics of belief.

Robert Trivers provides a useful and concise definition of reciprocal altruism:

Altruism is suffering a cost to confer a benefit. Reciprocal altruism is the exchange of such acts between individuals so as to produce a net benefit on both sides.1

Reciprocal altruism as a behavioural strategy rests on a few preconditions, which again are well documented in evolutionary theory. For example, individuals must be able to recognise and remember each other as individuals, so as to be able to recognise and remember each other as either past co-operators or past defectors (cheaters). In humans at least, other developments follow which inspire trust and therefore help to distinguish individuals as co-operators. Examples are the protection of reputation and the involuntary expression of emotion.

Much of the explanation of the evolution of reciprocal altruism is based on game theory, which explores the consequences of adopting competing strategies in repeated, two-player, non-zero-sum games like the Prisoner’s Dilemma2. One such strategy is ‘tit for tat’, in which a player (player A) starts out cooperating with a new opponent (player B). If player B has a different strategy (for example ‘always defect’ – ie cheat), and therefore defects, player A will also defect in the next round with player B3. Since A’s strategy stays ‘tit for tat’, A’s opening move with another player (eg player C) will however be to cooperate. A population of individuals who all consistently adopt ‘tit for tat’ will therefore cooperate and succeed indefinitely. But if the starting population is a mixture of ‘tit for tat’ strategists and ‘defector’ strategists, computer models demonstrate that under certain conditions the ‘tit for tat’ strategists thrive at the expense of the ‘defectors’. ‘Thriving’ means here that if success in the game is translated into reproductive success, after a certain number of generations the concentration of ‘tit for tat’ strategists increases at the expense of the defectors.
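To make the mechanics concrete, here is a minimal Python sketch of an iterated Prisoner’s Dilemma. The payoff values are the standard ones from Axelrod’s computer tournaments (3 each for mutual cooperation, 1 each for mutual defection, 5 to a lone defector and 0 to a lone co-operator); the function names and the ten-round match length are illustrative assumptions, not anything taken from the sources cited here.

```python
# A minimal sketch of an iterated Prisoner's Dilemma, assuming the
# standard Axelrod payoffs: both cooperate -> 3 each; both defect ->
# 1 each; lone defector -> 5, lone co-operator -> 0.
# 'C' = cooperate, 'D' = defect.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first; thereafter copy the opponent's last move."""
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    """The 'defector' strategy: cheat every round."""
    return 'D'

def play_match(strategy_a, strategy_b, rounds=10):
    """Play two strategies against each other; return total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play_match(tit_for_tat, tit_for_tat))      # (30, 30)
print(play_match(tit_for_tat, always_defect))    # (9, 14)
print(play_match(always_defect, always_defect))  # (10, 10)
```

Against a fellow ‘tit for tat’ player the strategy prospers; against ‘always defect’ it loses only the opening round; and two defectors together score far less than two co-operators. This is the arithmetic behind the population result: when ‘tit for tat’ strategists mostly meet each other, their accumulated scores – and hence, in the evolutionary translation, their reproductive success – outstrip the defectors’.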

This is a hugely important finding. It has long been recognised that cooperative behaviour describable as reciprocal altruism does exist at many levels in nature – that same nature which in other respects is still ‘red in tooth and claw’. Before game theory explanations came along, reciprocal altruism seemed to defy the evolutionary logic of survival of the fittest. But now there is a firm basis for understanding how ethical behaviour could have evolved, consistent with the rest of scientific knowledge.

We shall now try to look at ‘tit for tat’ in terms of Kant’s categorical imperative:

(i) I ought never to act in such a way that I couldn’t also will that the maxim on which I act should be a universal law.4

Simplifying into a second-person imperative – without the double negative – we get:

(ii) Always act in such a way that you could also will that the maxim on which you act should be a universal law.

‘Tit for tat’ could itself be expressed as a maxim:

(iii) When faced with a situation involving another individual and where the choices open to you are either to cooperate or defect, you should always cooperate unless you know that individual previously defected, in which case you should defect.

Now could maxim (iii) be logically willed to be a universal law? We saw in the discussion of the categorical imperative (Categorically imperative) that characterising an appropriately ‘universal’ domain is not quite as straightforward as Kant seemed to assume. Kant wanted his formula to apply to all free rational beings insofar as they were free and rational – not just to all human beings. But when content is added to generate specific imperatives on eg promising or telling the truth or killing, other attributes then need to be considered, which in humans take specific contingent values.

The issue is that free rational beings could potentially exist who are not human, even if the current world offers little evidence. If the categorical imperative derives (as Kant intended) from the concept of free rational being, then it should cover not only human beings but also free and rational angels, super-robots and extra-terrestrial aliens. And these, unlike humans, could possess immortality and/or omniscience and/or omnipotence and/or hypersensitivity and/or a whole range of other qualities consistent with freedom and rationality. Because of differences like these, it might be counterintuitive to extend to (say) hypersensitive, omniscient and immortal angels a maxim which works perfectly well across all human beings. (See Categorically imperative.)

If the argument above is sound, then we have only two options.

Option 1 is that a number of moral imperatives to do with lying, promising, killing and so forth no longer fit the categorical imperative formula. This would be unfortunate, as it would mean the categorical imperative no longer explains what it was intended to explain: why moral obligation is binding.

Option 2 is that specific imperatives, created by adding content to the formula, can legitimately apply within a specified subdomain (eg across all human beings). But as we saw in Categorically imperative the problem with this is knowing what is special about the subdomain of all human beings rather than (say) the subdomain of all white male human beings, or all human beings of Germanic descent. Applying a maxim across all human beings appears morally superior to applying a maxim across all white males, but the categorical imperative does not provide an explanation without begging the question.

Getting back to the ‘tit for tat’ formula, this question of domain is important because a maxim like (iii) works across some domains but not others. In an ‘ideal’ domain of perfect co-operators (and perfect operators) the maxim works. The default is to cooperate, and since no one encounters a defector, everyone continues to cooperate.

A problem would arise however if (say) the individuals were not all perfect operators. Although they all intend to cooperate, sometimes an individual could defect by mistake – behave as if defecting while intending to cooperate. Pure ‘tit for tat’ is unforgiving:

Round 1: A cooperates (default). B defects (by mistake).

Round 2: A defects (tit for tat). B cooperates (tit for tat).

Round 3: A cooperates (tit for tat). B defects (tit for tat).

…and so on. The relationship between A and B will only get back to its original reciprocal altruism if either makes a compensating mistake – to cooperate by mistake when the intention was to defect. But the opposite mistake could be just as likely – A or B defects by mistake when the intention was to cooperate. This would lock them into reciprocal defection indefinitely – until one of them cooperates by mistake.
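The echo is easy to reproduce. A short sketch, continuing the Python illustration above, injects a single mistaken defection into a match between two pure ‘tit for tat’ players:

```python
# A sketch of the 'echo' above: two pure tit-for-tat players, with a
# single accidental defection injected in round 1. The mistake then
# bounces back and forth indefinitely.

def echo_demo(rounds=6):
    history_a, history_b = [], []
    for round_number in range(1, rounds + 1):
        # Each player copies the other's previous move (default 'C').
        move_a = history_b[-1] if history_b else 'C'
        move_b = history_a[-1] if history_a else 'C'
        if round_number == 1:
            move_b = 'D'  # B intends to cooperate but defects by mistake
        history_a.append(move_a)
        history_b.append(move_b)
        print(f"Round {round_number}: A {move_a}, B {move_b}")

echo_demo()
# Round 1: A C, B D
# Round 2: A D, B C
# Round 3: A C, B D
# ...and so on, exactly as in the rounds listed above.
```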

But this is a special case of a familiar issue in ethics – part of familiar moral grammar, if you like. Moral codes and moral thinking have long distinguished between voluntary and involuntary actions. In general we tend to censure deliberate bad actions but forgive involuntary bad actions. In fact we may often forgive the involuntary bad action itself but censure a voluntary action or omission which may have allowed the involuntary bad action to happen. An obvious example is the distinction between premeditated murder and culpable homicide resulting from voluntary negligence.

Applying this distinction to our test case we now get something like:

(iv) When faced with a situation involving another individual and where the choices open to you are either to cooperate or defect, you should always cooperate unless you know that individual previously deliberately defected, in which case you should defect.

Or perhaps more pragmatically:

(v) When faced with a situation involving another individual and where the choices open to you are either to cooperate or defect, you should always cooperate unless you have good reason to think that individual previously deliberately defected, in which case you should defect.

These formulations add a bit more flesh to the preconditions for moral behaviour. And again it is familiar flesh. A moral agent addressed by (v) needs to be something of a mind reader – at least to the extent of having a functioning ‘theory of mind’5.
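A simulated player cannot of course read minds, but the game theory literature offers a crude computational stand-in for the benefit of the doubt in (v): ‘generous tit for tat’, which forgives a defection with some fixed probability rather than judging intent. A sketch, with the forgiveness probability of one in three chosen purely for illustration:

```python
import random

def generous_tit_for_tat(my_history, their_history, forgiveness=1/3):
    """Copy the opponent's last move, but forgive a defection with
    probability `forgiveness` -- a crude stand-in for giving the
    benefit of the doubt that the defection was an innocent mistake."""
    if not their_history:
        return 'C'  # default: cooperate with strangers
    if their_history[-1] == 'D' and random.random() < forgiveness:
        return 'C'  # forgive, breaking any echo of retaliation
    return their_history[-1]
```

Two such players who suffer an accidental defection will sooner or later forgive it and settle back into mutual cooperation, rather than echoing the mistake indefinitely.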

The last adjustment we will make to our illustrative strategy for now involves the last clause: ‘in which case you should defect’. It sticks out like a sore thumb, almost begging to be seen as evidence that any attempt to base ethics on evolutionary game theory must obviously be misguided.

But we are not claiming ‘good’ ethical behaviour boils down to ‘tit for tat’. Many people would for example consider a strategy like this morally superior to the previous one:

(vi) When faced with a situation involving another individual and where the choices open to you are to cooperate, defect or decline, you should always cooperate unless you have good reason to think that individual previously deliberately defected, in which case you should politely decline to participate – and explain why.

With (vi) we seem to be closer to a maxim which could come under the categorical imperative (ii). Someone acting on maxim (vi) could quite coherently will maxim (vi) to be a universal law. And if a pure ‘tit for tat’ strategy can thrive under natural selection, so can (vi). In fact it has the advantage of protecting the reputation of co-operators.
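What might maxim (vi) look like computationally? Here is a hedged sketch of a toy three-action game in which ‘decline’ yields a small neutral payoff to both sides; the payoff of 1 for declining, and the class and method names, are illustrative assumptions only.

```python
# Toy extension for maxim (vi): moves are cooperate ('C'), defect
# ('D') or politely decline ('N'). If either side declines, both
# receive a small neutral payoff -- the value 1 is purely illustrative.

def payoff(move_a, move_b):
    if 'N' in (move_a, move_b):
        return (1, 1)
    return {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
            ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}[(move_a, move_b)]

class PoliteDecliner:
    """Cooperate with everyone, but decline to play with anyone
    believed to have deliberately defected in the past."""

    def __init__(self):
        self.known_defectors = set()

    def move(self, opponent_id):
        return 'N' if opponent_id in self.known_defectors else 'C'

    def observe(self, opponent_id, their_move):
        if their_move == 'D':
            self.known_defectors.add(opponent_id)
```

Unlike pure ‘tit for tat’, this strategy never itself defects, so its record as a co-operator stays clean – which is the reputational advantage noted above.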

A maxim like (vi) may not seem quite as fundamental as one against killing or lying or breaking a promise, but in the absence of shared principles like (vi) it is hard to see how social life can be maintained.

Of course few if any ethical situations involve games like the Prisoner’s Dilemma. And moral decisions are not the same as behavioural strategies. The point though is that models based on game theory can be used to explain how cooperative and reciprocally-altruistic behaviour could have evolved under natural selection, without needing any external (and particularly any supernatural) injection. With cooperation and reciprocal altruism we have a basis for building increasingly sophisticated levels of morally evaluated (and morally evaluating) social behaviour.

We do not have to prejudge how much of this evolved moral behaviour is present at birth, how much subsequent development takes place irrespective of environmental factors, and how much subsequent development may vary depending on those environmental factors. The theory merely proposes that, for humans, moral behaviour is part of their evolved nature, and outlines possible explanations as to how moral behaviour could have evolved.

If it is part of our evolved nature, then in theory it is supplied to every human individual regardless of that individual’s other characteristics, and regardless of the future progress of his or her life – just as every sparrow is supplied with the potential for flight, even though marauding magpies may prevent some of them from getting to take-off.

If this thinking is at all sound, it suggests a possible link to the universality of both the Golden Rule and Kant’s categorical imperative (and perhaps also to the ‘original position’ in John Rawls’s6 celebrated thought experiment?).

At least some of these questions we will address in the next post.

References

1 RL Trivers, Reciprocal altruism: 30 years later. In: C.P. van Schaik and P.M. Kappeler (eds.) Cooperation in Primates and Humans: Mechanisms and Evolution. Berlin: Springer-Verlag, 2005.

2 RL Trivers, The evolution of reciprocal altruism, in: Quarterly Review of Biology, 46, 1971.

3 RL Trivers, 2005: See 1 above.

4 Immanuel Kant, Groundwork for the Metaphysic of Morals, translated by Jonathan Bennett (July 2005), http://www.earlymoderntexts.com/pdf/kantgw.pdf.

5 Marc D Hauser, Moral minds, Abacus, 2008.

6 John Rawls, A Theory of Justice, Harvard University Press, 1971.

© Chris Lawrence 2009.
