thinking makes it so

There is grandeur in this view of life…

Posts Tagged ‘reciprocal altruism’

Ethics as a product of evolution


[The post below is my draft research proposal for a philosophy PhD at a UK university. Any feedback would be more than welcome!]

The question I want to examine is one which is formally hypothetical, but has more than hypothetical significance.

I am not assuming that human moral sense and behaviour are products of evolution. But I am assuming it is at least possible that they are. If that assumption is unsound, I want to understand why.

Assuming it is sound, I then want to consider what the impact on the branch of philosophy we know as ethics might be if human moral sense and behaviour actually turned out to be products of evolution.



Written by Chris Lawrence

7 November 2010 at 6:02 pm

A secular imperative to love


This post responds to yet another interesting dialogue with Terry Sissons, the author of The Other I.

Karen Armstrong: The case for God

It followed from the last of a series I had written about Karen Armstrong’s new book The case for God: What religion really means.

In a previous conversation, either on Terry’s blog or my own, I’d written – probably in the context of rejecting transcendent spirituality as a foundation for ethics – ‘Love is hard enough. But it is also enough.’ (For clarity, read the second sentence as ‘But love is also enough – of an imperative’.) Then later, as part of the Case for God exchange, Terry asked me why I thought that. Below is an attempt to expand on my original response.

The first step is to state my belief that humans have probably evolved to be the entities they are: sentient, social, interdependent, mortal and so on. I say this not because I particularly want an evolutionary explanation to be true, but because an evolutionary explanation seems sounder, and to require less metaphysical baggage and/or wishful thinking, than any other explanation currently on offer.


Written by Chris Lawrence

24 November 2009 at 11:57 pm

Not I my lord


In this post we introduce further implications of our model of moral behaviour, and finally thread back to William James and William Clifford.

This article follows Just perfect in a discussion on The ethics of belief.

The way life used to be

We have the self-conscious human agent who feels a universal imperative of the form

(viii) Whenever conditions x1…xn obtain, do y

We have said that it is for that self-conscious human agent him or herself to decide what ‘universal’ applies to – ie what group or domain does the agent count him or herself a member of? Is it family, tribe, church, nation, race, generation, species?

The Good Samaritan (Rembrandt)

But we need not assume the agent construes ‘universal’ the same way for ever.

Moral imperatives can be reflexive – an extremely valuable feature. For example in Precious metal rules OK? we showed that the Golden Rule can be applied in the spirit of the Golden Rule. In fact the Golden Rule should be applied in the spirit of the Golden Rule as it is in the spirit of the Golden Rule that the Golden Rule should be applied in the spirit of the Golden Rule…!

So for example y could take the value refine how you construe ‘universal’, or rethink what group you count yourself a member of. This, after all, is one of the lessons of the parable of the Good Samaritan: who is my neighbour?

We have therefore one way at least in which a universal imperative can contain within itself the imperative to improve the imperative itself. This is further evidence that the model can accommodate the concept of moral perfection.

Brightest in the sky

But the concept of moral perfection brings with it another implication which we must deal with. Our model is very explicitly a model without a god to arbitrate what is right or wrong. As such it is exposed to a possible ‘ethical’ objection.

The fall of Satan/Lucifer (Gustave Doré)

The objection is that it is dangerously arrogant for mere humans to think they have it in them to decide how to be perfect or how to live their lives in the best possible way, without something like the concept of a god as a divine ‘not-self’ or ‘other’. It is the sin of pride in its subtlest form: the desire of Satan (or Lucifer) to compete with god.

It is an argument which does not disappear by just excluding god from your ontology. If there is no god then obviously you are not competing with god. But the ethical argument is about the moral danger of trying to be something you are not and cannot be – even if that something does not exist or even cannot exist. In completely secular terms the argument would be that any autonomous attempt to achieve moral perfection would be self-defeating in the long run.

We could juxtapose an ‘equal and opposite’ argument against this: that it is irresponsible to give someone else or something else the job of figuring out what is best, which then allows you – indeed obliges you – to abandon the quest to do it yourself unaided by a god.

A true believer could however respond by saying that with a god there is no abdication of responsibility. You are responsible all the way – for finding out what god wants you to do. But while you are exercising that responsibility, god is acting through you. Only by exercising that responsibility will you carry out god’s will, and god will act through you as you do so. But if you evade the responsibility you are not carrying out god’s will.

If we decide to take this objection seriously we could suggest that y in the universal moral imperative (viii) takes a particular value when conditions x1…xn relate to the recognition of this moral danger of arrogance. That value would be to posit a ‘not-self’ or ‘other’ specifically to avoid any risk of the ‘sin of pride’. We could then label that not-self ‘god’ – or, for reasons I will now explain, ‘god1’.

Better than one?

In making this move we have deliberately not identified that ‘god1’ with, for example, the ‘god2’ which is the answer to questions like ‘why is there something rather than nothing?’ posed by the likes of Thomas Aquinas and Leibniz1.

Since, having reached this point, the only reason for positing ‘god1’ for the model is an ethical one, we would need an ethical reason for identifying ‘god1’ with ‘god2’. And not only is there no obvious ethical reason for doing so, it is also fairly easy to think of an ethical reason for not identifying the two.

For example ‘god2’ could have all sorts of cosmic powers by which it could exert its authority. Which means an agent obeying an imperative imposed by ‘god2’ might well have prudential rather than moral reasons for obeying – for example fear of the personal consequences of disobedience. (In Kantian terms this would be heteronomy rather than autonomy.2) The equivalent objection does not hold against ‘god1’ because we are not, in circular fashion, positing ‘god1’ as necessarily the source of that moral perfection. All we are saying is that our pursuit of moral perfection necessitates that in its pursuit we are carrying out what we posit to be the will of our ‘not-self’ (ie the will of ‘god1’), not (just) our own will.

I think it is fair to say that our conclusion so far is that we are not yet convinced either way about positing a not-self (‘god1’) as a precaution against the self-defeating possibility of pride. But that even if we do admit ‘god1’ we have no reason to identify it with a cosmological/creator ‘god2’, and a very good reason not to.

We could however add a bit more substance to this idea of ‘god1’, by looking at other ways in which imperatives of the form (viii) could be reflexive, still within the evolutionary psychology of reciprocal altruism. We said in the previous post that reciprocal altruism is based on trust, and to a significant extent the domains (groups, communities) agents see themselves as members of are the domains within which they extend their trust. Another perspective is that of reputation: behavioural strategies which enhance an agent’s reputation as a cooperator within that agent’s domain are selected for. But both the ‘domain of trust’ and the ‘domain of reputation’ (ie the current community of fellow agents who are in a position to hold opinions about the agent) can be subject to the same kind of reflexive improvement as the domain of pure ‘belonging’ referred to above.

For example when conditions x1…xn are such as to lead the agent to doubt whether the assumed ‘domain of trust’ (eg the agent’s current immediate community) is 100% worthy of trust, then y could take a value like rethink whom you trust. In a similar way, if conditions x1…xn are such as to lead the agent to see the current ‘domain of reputation’ as problematic, then y could take a value like rethink whose opinions you value.

As with the concept of moral perfection itself, the reflexive logic of improvement could lead to the concept of a ‘perfect recipient of trust’ and a ‘perfect custodian of reputation’ respectively. Both are familiar features of the concept of god in its ethical dimension – ie the concept of ‘god1’.

In case this kind of talk appears far-fetched, bear in mind that we are not conjuring an absolute entity out of nothing, we are constructing an absolute concept (‘god1) out of moral logic. We are doing this by flexing that moral logic to the utmost, because it is a necessary feature of moral logic that it both allows this flexing and requires it. And it is an important part of our claim that, if the evolutionary account of reciprocal altruism is sound, then in the context of self-conscious agents, behavioural imperatives which originally evolved out of the complex non-zero sum economics of social interaction will exhibit precisely that kind of moral logic.

William James

If we now go right back to William James’s assertion of the ‘religious hypothesis’ (see Better believe it), we can see that his argument does seem to rely on equating ‘god1’ with ‘god2’.

James’s first affirmation is:

that the best things are the more eternal things, the overlapping things, the things in the universe that throw the last stone, …and say the final word.3

He further claims that

The more perfect and more eternal aspect of the universe is represented in our religions as having personal form. The universe is no longer a mere It to us, but a Thou, if we are religious; and any relation that may be possible from person to person might be possible here.

The reference to ‘universe’ implies that James has in mind either something nearer to ‘god2’ or something which combines ‘god1’ and ‘god2’.

His second affirmation of religion is

that we are better off even now if we believe [the] first affirmation to be true.

If ‘better off even now’ includes any moral element, then this again suggests ‘god1’ and ‘god2’ are the same.

And as we saw in As if, as if, the whole issue for James is about ethics anyway:

If the action required or inspired by the religious hypothesis is in no way different from that dictated by the naturalistic hypothesis, then religious faith is a pure superfluity, better pruned away… [T]he religious hypothesis gives to the world an expression which specifically determines our reactions, and makes them in a large part unlike what they might be on a purely naturalistic scheme of belief.

This seems to clinch it. James equates the god of ethics (what we have labelled ‘god1) with the god of the universe (what we have labelled ‘god2).

Remember he argued against William Clifford’s dictum that

it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence.4

For James the ‘religious hypothesis was precisely the kind of thing which it was (morally) right to believe even without sufficient evidence, because of the (moral) benefit which accrued from that belief.

I have been trying to show both the strength and the weakness of his argument. Its strength is that, yes, in the ethical domain there could well be a case for acknowledging a ‘leap of faith’ in order to give moral striving the universality and endlessness it needs. But its weakness is its assumption that that ‘leap of faith’ has to be in the direction of some supernatural or cosmic entity whose existence must be taken on trust. A leap towards ‘god1’ is a totally different thing from a leap towards ‘god2’. A leap of faith towards ‘god1’ does not breach Clifford’s dictum. We therefore do not need to breach Clifford’s dictum in order to achieve the benefit James describes as unique to the ‘religious hypothesis’.

The point of articulating a model of moral behaviour in terms of evolutionary psychology was not to presuppose its truth, but purely to demonstrate its feasibility. If we are looking for an explanation as to the origin of moral behaviour and moral consciousness we do not have to consult the supernatural or the metaphysical. Nor do we have to give in to an inexplicable mystery. A model in terms of evolutionary psychology, or something very like it, seems well worth exploring. A model like this will not tell us what is right or wrong, but it might help us understand why we want to know what is right or wrong and why it matters so much to us.

References

1 GW Leibniz, The Principles of Philosophy known as Monadology, translated by Jonathan Bennett, September 2004; amended July 2007. [http://www.earlymoderntexts.com/pdf/leibmon.pdf]

2 Immanuel Kant, Groundwork for the Metaphysic of Morals, translated by Jonathan Bennett, July 2005: http://www.earlymoderntexts.com/pdf/kantgw.pdf.

3 William James, The will to believe, 1896.

4 William Clifford, The ethics of belief, 1877.

© Chris Lawrence 2009.

Just perfect


I ended the previous post with a promise to explain how we get from a decision-making mechanism which arbitrates between different moral options to the concept of moral perfection. Gulp.

This article follows Food for thought in a discussion on The ethics of belief.

We can understand what sort of thing an explanation of cooperative (reciprocally-altruistic) behaviour would be in terms of evolutionary theory.1 We can then understand that conscious agents who have evolved with cooperative behavioural strategies might experience those behavioural strategies as imperatives. (To compare with a non-moral example, a conscious agent who has evolved with a combination of sensations experienced when the blood sugar is low and the stomach is empty, might experience those sensations as, or accompanied by, an imperative to feed.)

Moral conflict: To be or not to be?

A conscious agent faced with two mutually exclusive imperatives will experience conflict. Indefinite indecision could damage survival by wasting resources and opportunities. Selection pressure would therefore favour the evolution of algorithms and/or strategies and/or mechanisms to resolve conflict of this kind. In the social contexts we are considering here, a positive imperative to do x trumped by a stronger positive imperative to do y is the same as seeing y as a morally better option than x.

If the choice is between three or more options, then for example the positive imperative to do x could be trumped by a stronger positive imperative to do y, but y in turn could be trumped by an even stronger positive imperative to do z. In this scenario z is the best of the three. An agent who tends to choose options in accordance with imperatives arising from reciprocal altruism (over, say, options satisfying purely personal happiness, survival or laziness) would be a ‘good’ agent. An agent who chooses these sorts of options relatively more often than another agent would be ‘better’ than that other agent. The agent who chooses in this way more than any other in a group of three or more would be the ‘best’. The agent who for whatever combination of reasons always chooses the best possible option, and perhaps even seeks out the most challenging scenarios for this sort of decision-making, would be one approaching moral ‘perfection’.
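For readers who like to see the mechanics, here is a minimal sketch in Python of the kind of trumping just described. The imperatives and their numerical ‘strengths’ are entirely hypothetical – placeholders for whatever evolved weighting actually resolves the conflict – but even this crude comparison already yields ‘better’ and ‘best’.

    # Purely illustrative: three conflicting imperatives with made-up strengths.
    # The numbers stand in for whatever evolved weighting does the 'trumping'.
    strengths = {"do x": 0.3, "do y": 0.6, "do z": 0.9}

    def better(a, b):
        """'a is morally better than b' modelled as 'a trumps b'."""
        return strengths[a] > strengths[b]

    best = max(strengths, key=strengths.get)                       # 'do z'
    ordering = sorted(strengths, key=strengths.get, reverse=True)  # best, better, good
    print(better("do y", "do x"), best, ordering)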

To go from good/better/best to the concept of perfection or absolute goodness is no more special or bizarre than to go from the relatively straightforward algorithm of counting to the concept of infinity. Infinity is where you project to if you assume that counting can continue without end. The seemingly magical or at least profoundly mysterious aspect of moral behaviour is not the moral behaviour per se, but the moral agent’s consciousness of him or herself as a moral agent. And this is equivalent to the profoundly mysterious (because still unexplained) consciousness of counting which can lead to the conscious appreciation of the concept of infinity. Or indeed the profoundly mysterious – because still unexplained – consciousness of anything.

Count me in

But back to counting. Counting is not just a useful analogy in this context. Although ethically neutral in itself, counting – or something related to counting – could be quite crucial to the model of morality we are exploring here. Which brings us to another of the implications we need to draw out.

To be able to count you have to be able to see (or otherwise perceive) boundaries between things and similarities between things. So for example if I know what dogs and cats are, and I know they are both animals, I could count (say) three dogs and two cats, or I could count five animals. To count things, to see them as a set, or to put them in a set, I need to know what counts as one of the things. Even if I’m counting ‘every other grey blob in a cloud’ I must be able to see a blob as greyer than its surroundings; count that one; find the next and ignore it; and then find the next and count it.

All fairly obvious, but what has it got to do with ethics?

Well, in the two previous posts in this series (Three wise mentalities and Food for thought) we said that an evolved behavioural strategy of the form

(viii) Whenever conditions x1…xn obtain, do y

will appear to a self-conscious human agent as an imperative which is universal in two ways. It is universal as to the conditions (whenever conditions x1…xn obtain); and it is universal as to the agent – the strategy will apply as an imperative to all self-conscious human agents*. (*We will need to qualify this expression soon, but for now we can go with the flow.) As this is a strategy of cooperation (reciprocal altruism) further kinds, or nuances, of universality are contained within those other two. The conditions x1…xn will contain, where relevant, references to that same domain of self-conscious human agents. For example: ‘whenever you see in front of you a person in danger of starving to death and you have food to spare…’. And then since strategy (viii) is universal as to the agent in that it applies to all self-conscious human agents, all those self-conscious human agents understand that it applies to every other self-conscious human agent. This, after all, is what ‘reciprocal’ means.

And this, too, is where counting comes in. It is no coincidence that we use the expression ‘count as’ (in for example ‘I count you as a friend’) to mean the same as ‘see as’. Counting is dealing in sets, and to deal in sets we need criteria to define membership of those sets.

Going back now to the self-conscious human agent who feels a universal imperative of the form (viii), we have to acknowledge that it is for that self-conscious human agent him or herself to construe ‘universal’. This is the qualification to the expression ‘all self-conscious human agents’ mentioned above (*). I understand the imperative of form (viii) applies universally – within the group I count myself a member of.

This issue of whether or not we recognise the ‘other’ as something like ourselves certainly accords with a familiar, and significant, aspect of ethical thought. Reciprocal altruism is based on trust, and to a significant extent the domain(s) I see myself as a member of are the domain(s) within which I extend my trust. Is my domain my family, my tribe, my gender, my church, my faith, my nation or my race? Is it my generation – perhaps to the exclusion of future generations? Is it my species?

We are on the edge of another implication of the model – which again I will postpone to next time.

References

1 See eg: RL Trivers, Reciprocal altruism: 30 years later. In: C.P. van Schaik and P.M. Kappeler (eds.) Cooperation in Primates and Humans: Mechanisms and Evolution. Berlin: Springer-Verlag, 2005.

© Chris Lawrence 2009.

Food for thought


We ended the previous post with a model for human moral behaviour. The model is not proven, of course, but can, I think, claim to be coherent and feasible.

This article follows Three wise mentalities in a discussion on The ethics of belief.

The Good Samaritan (van Gogh)

The model draws its inspiration from significant similarities in structure and content between Kant’s categorical imperative1, evolutionary theory regarding reciprocal altruism2, and principles like the Golden Rule found in many world religions and ethical systems.

Evolutionary theory provides the model’s explanation as to how moral behaviour arose. But the model does not suffer from the naturalistic fallacy. It does not imply a particular behaviour is good because it evolved. It would acknowledge that ‘bad’ behaviour is just as likely to have evolved as ‘good’ behaviour, if not more so. Whether or not something evolved is, ethically speaking, neither here nor there. The important question is whether the evolved feature itself is ethically significant.

But the model does not rely on a purely external criterion of what constitutes ethically significant behaviour either. It is not left to an arbitrary subjective judgment to decide which evolved behaviour is ‘good’ and which is ‘bad’ or ‘neutral’. We get part of the way towards the criterion by saying that ‘good’ evolved behaviour is behaviour arising out of algorithms peculiar to social interaction.

But this is not a definition of ‘good’ or ‘right’. To call an action or decision ‘good’ or ‘right’ is still a conscious judgment – but it is not arbitrary. This is one of the things the algorithmic insight adds to the picture.

We said in the previous post that an evolved behavioural strategy of the form

(viii) Whenever conditions x1…xn obtain, do y

will appear to a self-conscious human agent as an imperative. This imperative is universal in two ways. It is universal as to the conditions (whenever conditions x1…xn obtain); and, assuming it is a strategy which all humans have inherited, it is universal as to the agent – strategy (viii) applies as an imperative to all self-conscious human agents.

This universality is what maps the model to the categorical imperative. The categorical imperative was profoundly correct in linking the conscious agent’s individual quandary to its universal context. But the way Kant construed the categorical imperative is unnecessarily abstract in two respects.

First, in order to give the categorical imperative the metaphysical clout he wanted, Kant derived it from the concepts of freedom and rationality. This would establish it as a synthetic a priori, and therefore as necessarily binding. But as we saw in Categorically imperative, this stipulation has some counter-intuitive consequences, and it does not seem as possible as Kant thought to formulate categorical imperatives which apply across the entire domain of free rational beings. Narrowing our sights just to human beings removes these difficulties.

Second, Kant did not want to confine the categorical imperative to social interactions or contexts which had a social dimension. He insisted there were duties to self, and that these would fall under the categorical imperative just as duties to others would. But again as we saw in Precious metal rules OK? and Three wise mentalities his supporting arguments are less than convincing and rely on an unwarranted teleological assumption.

Both of these could be seen as remnants of a theist perspective. A theist perspective might see humans as contingent creations of a god, distinguished only by their free will and their rationality and whose every other characteristic can be discounted as merely animal. That same perspective might also see human social organisation as ultimately contingent, if for example a person’s most fundamental relationship is with his or her god.

Once we strip these two abstract remnants away, the categorical imperative moves very close to the Golden Rule – particularly the Golden Rule when it is itself applied in the spirit of the Golden Rule. This is the so-called Platinum Rule: Do to others as they would like you to do to them. See Precious metal rules OK?

There is an interesting parallel between the two following dichotomies:

  • The idea of morality as derived from evolved reciprocal altruism (and as therefore a posteriori) versus Kant’s idea of morality grounded in the categorical imperative, which is a synthetic a priori principle.
  • The idea of causality as part of evolved ‘intuitive physics’ (and again as therefore a posteriori)3 versus Kant’s idea of causality as a synthetic a priori category of thought.4

(Could there be another parallel between all of these and Plato‘s doctrine of recollection or anamnesis [άνάμνησις]5 6?)

To repeat: I am not taking it as self-evident that the accounts provided by evolutionary psychology are even partially correct. But I am claiming they are feasible and scientifically verifiable and falsifiable. As such they connect with the rest of scientific knowledge, and stand or fall by the balance of evidence in their favour. But if they do hold some measure of truth they shed a very different light on issues which were hitherto assumed to be purely the domain of metaphysics and/or philosophy of language and/or philosophy of mind.

We can elaborate the model further by drawing out a few of its implications. A good place to start is with moral conflict within the same individual.

One type of conflict is between mutually exclusive selfish and cooperative (reciprocally-altruistic) options available to an individual at the same time.

An agent who was calculating the best thing to do (as in most prudent – not necessarily the best morally) would generally make different decisions depending on whether he or she was factoring in future costs and benefits. An agent ignoring (or not even considering) future consequences would, other things being equal, tend to choose immediate gratification.

Now the algorithmic logic behind evolved reciprocal altruism presupposes the capacity to delay gratification. I choose to cooperate now so as to enjoy the benefits of others cooperating with me in the future – either the specific individual I am cooperating with right now, or others more generally who may learn about my reputation as a cooperator.
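To see in rough arithmetic why delayed gratification can pay, here is a small sketch using hypothetical Prisoner’s Dilemma payoffs. The values, and the probability of meeting again, are invented for illustration; nothing hangs on the particular numbers.

    # Hypothetical Prisoner's Dilemma payoffs: T = temptation to defect,
    # R = reward for mutual cooperation, P = punishment for mutual defection.
    T, R, P = 5.0, 3.0, 1.0
    w = 0.9   # assumed probability that the pair will interact again

    # Defect now against a reciprocating partner: one big payoff, then mutual defection.
    defect_now = T + w * P / (1 - w)

    # Cooperate now (and keep cooperating): a smaller payoff each round, but every round.
    cooperate_now = R / (1 - w)

    print(cooperate_now, defect_now, cooperate_now > defect_now)   # 30.0 14.0 True

With these made-up numbers the deferred benefits of cooperation (30 units) comfortably outweigh the one-off gain from defecting (14 units); shrink w – make future interaction unlikely – and the comparison flips.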

Moral agents (ie agents who do have capacities like delayed gratification and algorithmically derived behavioural strategies, and are therefore able to participate in social interactions involving reciprocal altruism) will now be exposed to a second type of conflict. This is the conflict between one cooperative (reciprocally-altruistic) option and another.

For example the agent could have evolved with both the following strategies (consciously perceived as imperatives):

  • Whenever conditions a1…an obtain, do p.
  • Whenever conditions b1…bm obtain, do q.

What does the agent do in a scenario where it is only possible to do either p or q but not both p and q? (The agent might also be free to do neither, but this would not be a conflicting option in this scenario.) In evolutionary terms the agent would need an algorithm or other mechanism for making a decision, so as to avoid the survival cost of indecision and prevarication.

For example, consider the following pair of options. (Yes, I know strictly speaking the example involves kin selection rather than just reciprocal altruism, but I wanted an example whose point would be readily apparent.)

Option 1: Give your only remaining and indivisible unit of food to your son, who will otherwise starve to death.

Option 2: Give your only remaining and indivisible unit of food to a total stranger, who will otherwise starve to death.

It would be understandable if the agent eventually (but fairly rapidly) chose option 1.

Now consider these two:

Option 2: Give your only remaining and indivisible unit of food to a total stranger, who will otherwise starve to death.

Option 3: Give your only remaining and indivisible unit of food to your son, who will not otherwise starve to death, but who really enjoys that sort of food.

Here it would be odd to say the least for the agent not to choose option 2.

And lastly:

Option 4: Give all your remaining but divisible food to your son, who will starve to death if he receives no food at all, but needs less than half to survive.

Option 5: Give all your remaining but divisible food to the total stranger, who will starve to death if he receives no food at all, but needs less than half to survive.

Option 6: Divide your remaining food 50/50 between your son and the total stranger, who will both starve to death if they receive no food at all, but need less than half each to survive.

Option 7: Divide your remaining food between your son and the total stranger, (who will both starve to death if they receive no food at all), giving the stranger enough to survive, and the rest (the majority) to your son.

Option 8: Divide your remaining food between your son and the total stranger, (who will both starve to death if they receive no food at all), giving your son enough to survive, and the rest (the majority) to the stranger.

Option 9: Divide your remaining food between your son and the total stranger, (who will both starve to death if they receive no food at all), in proportion to your perception of their relative need.

Here the single ‘right’ choice is not so obvious. Both 4 and 5 seem as ‘wrong’ as option 3 in the previous scenario. But options 6 to 9 all have at least some merit – in the sense that one might find oneself imagining under what further conditions (not specified here) each might be better than the other three.

The point of introducing this kind of conflict is that in an entirely straightforward way it brings with it the concept of one option being ‘morally better’ than another. If we have the concept of ‘morally better’ then we have the concept of ‘morally best’. For example, although we may struggle to identify which of options 6 to 9 is the morally best of the four, we do not struggle with the idea that one of them might be.
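To make the idea of a ranking concrete, here is a deliberately crude sketch that scores options 6 to 9 under one set of invented assumptions. The need levels, the scoring rule and its weights are all hypothetical; the point is only that once options can be compared at all, ‘better’ and ‘best’ come along for free.

    # Made-up numbers: the fraction of the remaining food each person needs to survive.
    need = {"son": 0.3, "stranger": 0.4}
    total_need = need["son"] + need["stranger"]

    def score(share_son):
        """Count survivors, plus a small bonus for tracking relative need (arbitrary weighting)."""
        share_stranger = 1.0 - share_son
        survivors = (share_son >= need["son"]) + (share_stranger >= need["stranger"])
        closeness = 1.0 - abs(share_son - need["son"] / total_need)
        return survivors + 0.1 * closeness

    options = {
        "6: split 50/50": 0.5,
        "7: stranger just enough, rest to son": 1.0 - need["stranger"],
        "8: son just enough, rest to stranger": need["son"],
        "9: split in proportion to need": need["son"] / total_need,
    }
    for name, share in sorted(options.items(), key=lambda kv: score(kv[1]), reverse=True):
        print(round(score(share), 3), name)

Under these particular assumptions option 9 comes out on top, and any division in which both survive outscores one in which someone starves; change the assumptions and the ordering changes – which is exactly the situation described above.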

From here, in terms of the logic and semantics of language, it is not that big a step to the concept of moral perfection.

But we’ll leave that to next time!

References

1 Immanuel Kant, Groundwork for the Metaphysic of Morals, translated by Jonathan Bennett (July 2005), http://www.earlymoderntexts.com/pdf/kantgw.pdf.

2 See eg: RL Trivers, Reciprocal altruism: 30 years later. In: C.P. van Schaik and P.M. Kappeler (eds.) Cooperation in Primates and Humans: Mechanisms and Evolution. Berlin: Springer-Verlag, 2005.

3 See eg: Steven Pinker, The stuff of thought, Allen Lane, 2007.

4 Immanuel Kant, Critique of Pure Reason.

5 Plato, Meno.

6 Plato, Phaedo.

© Chris Lawrence 2009.

Three wise mentalities


The ‘structural similarities’ I spoke about in Scratch my back – those between Kant’s categorical imperative, the Golden Rule, and the evolved behavioural strategy of reciprocal altruism – are some really obvious ones.

This article follows Scratch my back in a discussion on The ethics of belief.

All three are principles guiding behaviour. All three are expressed in universal form.

Immanuel Kant

All three are also ‘formal’, in the sense that specific content must be added to flesh them out to derive specific guidelines for specific contexts. For example the categorical imperative does not refer to promising or telling the truth, but can apply to both of these. Again reciprocal altruism is about cooperation in general, and can apply to a variety of scenarios where cooperation (and, significantly, cooperation by default) is an option. Similarly the Golden Rule (or the Silver Rule or the Platinum Rule: see Any fool can make a rule and Precious metal rules OK?) does not talk about specific things individuals may do or not do to each other, but about all of them.

With one interesting exception, they all assume, and address, a context of social interaction. The exception is Kant’s extension of the categorical imperative to cover ‘duties to self’. I mentioned in Precious metal rules OK? how unconvinced I am by his arguments on this subject. Now is perhaps a good time to dig deeper.

Kant presents the categorical imperative as a formula applicable to all free and rational beings, insofar as they are free and rational – full stop. Not insofar as they are humans or social beings of any kind. He seems to take it as given that there are duties to self – because moral traditions, including in particular the Christian tradition, say there are. His intention was after all to explain moral obligation as commonly experienced. And for example many moral, religious and legal codes have declared a ban on suicide, which contravenes the duty of self-preservation.

But this in itself does not independently prove that duties to self exist. His only substantive supporting arguments either refer in the end to the impact on others of obeying or disobeying these supposed duties, or involve unconvincing teleological claims about ‘nature’s purpose for humanity’.1

The categorical imperative cannot presuppose obligation, as obligation is what it is intended to explain. Kant is not saying we ought to obey the categorical imperative. He is saying we cannot but see as binding those imperatives which fit the formula of the categorical imperative, purely because we are free and rational. Anything else would involve a contradiction.

It is worth recapping some examples which fit the categorical imperative explanation.

Promising and telling the truth fit because they both rely on universal acceptance. I cannot have the right to lie because I cannot at the same time will that everyone else has the right to lie when they want to. If everyone could lie, the convention of truth-telling on which communication (and therefore lying itself) depends would be destroyed, or would never be established in the first place. One person’s putative ‘right to lie’ presupposes that people in general do not exercise the right.

Promising is if anything even more clear-cut. The convention of promising can only exist in a context where individuals act on the maxim that they keep their promises. Breaking a promise is only possible within the convention of promising. So again one person’s putative ‘right to break a promise’ presupposes that others do not exercise that right.

Moses and the Ten Commandments (Rembrandt)

But now consider the sixth commandment: Thou shalt not kill. If this also fits the categorical imperative formula, it does so for a different reason than for promising and telling the truth. The argument is that I cannot will it to be a universal law that others can kill whom they want to – including myself and anyone who matters to me. But this is not a logical contradiction, as in promising and truth-telling. My putative right to break a promise presupposes that others do not exercise that right. But my putative right to kill another does not presuppose that others do not exercise the right to kill me.

It is theoretically possible, and logically coherent, for me to act on a maxim giving me the right to kill another and at the same time will it to be a universal law that others have the same right to kill me. If this sounds like the theme of a Spaghetti Western that should be no surprise. To live outside the law you must be honest, sang Bob Dylan.2 Well maybe. But you should certainly take care.

It is hard to see how social life is possible if people give themselves the right to kill each other. So the sixth commandment could be a categorical imperative under Kant’s formula, but only within the domain of social beings. It may not apply to honest and logically consistent outlaws.

So what about duties to self? To make sure we are not really talking about duties to others, we must exclude every possible impact on others of, say, suicide or self-neglect. And once we do that, it is hard to see how a putative duty of self-preservation can be explained as a categorical imperative. I can commit suicide while acting on the maxim that I have a sovereign right over my life, and at the same time will it to be a universal law that everyone else has the same sovereign right over their own lives.

The exception therefore seems to be of the kind that proves the rule. So for practical purposes, despite Kant’s protestations, all three principles (categorical imperative, Golden Rule, and evolved reciprocal altruism) also assume and address a context of social interaction.

The last similarity we should consider is the ‘message’ or ‘advice’ they contain. Now that we can exclude duties to self, the message is effectively that of the Golden Rule itself: treat others as you would like to be treated. The categorical imperative formula from the previous post Scratch my back was:

(ii) Always act in such a way that you could also will that the maxim on which you act should be a universal law.

Continuing the same numbering sequence from Scratch my back, if we confine (ii) to the domain of social interaction it becomes something virtually identical to the Golden Rule:

(vii) Always act towards others in such a way that you could also will that the maxim on which you act should be a universal law.

The only real difference is that Golden Rule formulations typically talk about treating others as you would like to be treated (or in the Platinum version, as they would like to be treated), whereas (vii) talks about treating others as you think everyone would like to be treated – by everyone else.

Reciprocal altruism is a strategy (or a range of strategies) rather than an imperative. But in Scratch my back we managed to morph ‘tit for tat’ into:

(vi) When faced with a situation involving another individual and where the choices open to you are to cooperate, defect or decline, you should always cooperate unless you have good reason to think that individual previously deliberately defected, in which case you should politely decline to participate – and explain why.

On the assumption that you want others to cooperate with you, because of the survival benefits you will enjoy, then strategy – or maxim – (vi) is effectively to treat others as you would like to be treated.

Having laboured all these similarities, we should look at how the three principles differ.

The most obvious difference, and for present purposes the most interesting difference, is that the three come to us from such different sources. The Golden Rule is a guide to behaviour taught by a variety of world religions and traditions, but originated in the Axial Age around twenty-five centuries ago. The categorical imperative comes from the eighteenth-century European Enlightenment. It was intended as a synthetic a priori principle, providing a logical foundation and justification for moral experience. And reciprocal altruism as an evolved strategy comes from twentieth-century biological and game-theoretical models, backed by computer simulations.

Now if all three principles have pretty much the same meaning, and pretty much the same context – human social interaction – then perhaps they are three different views or descriptions of the same thing, but coming at it from three different perspectives? If so, this suggests it might be possible to construct a model reflecting all three.

Assume for the moment that an evolutionary explanation of human ethical behaviour is the correct one. (This is just an assumption, but it is one based on research which has at least established its feasibility.) Assume also that part of this evolutionary explanation is that humans are in some way hard-wired with strategies causing them to behave and decide as if in accordance with general guiding principles or imperatives – for example ‘tit for tat’ or ‘forgiving tit for tat’ or ‘cautious cooperation’: something like (vi). And assume, furthermore, that an evolved behavioural strategy like (vi) will appear to a self-conscious human agent as an imperative.

We have already established one obvious feature of these principles: they are universal in form. For example (vi) is of the form:

(viii) Whenever conditions x1…xn obtain, do y.

This is the universality of the agent’s context. Whenever conditions x1…xn obtain, the agent is to do y. But if the agent is a member of a set or domain – indeed a species – which is ‘hard-wired’ to behave in a particular way, then universality also applies to the agent. In this example, we are saying that an imperative (or maxim, or strategy) like (vi) applies to all humans, because all humans are hard-wired with (vi).

The result is a model of human moral behaviour. It includes assumptions which are at least feasible. It is supported by some scientific research, but is clearly nowhere near proven. It could well be false. But it is both verifiable and falsifiable, and it is consistent with other scientific knowledge.

The model has some interesting implications, which will be explored in the next post.

References

1 Immanuel Kant, Groundwork for the Metaphysic of Morals, translated by Jonathan Bennett (July 2005), http://www.earlymoderntexts.com/pdf/kantgw.pdf.

2 Bob Dylan, Absolutely Sweet Marie, 1966. In: Blonde on blonde, 1966.

© Chris Lawrence 2009.

Scratch my back


Structural similarities are fairly apparent between Kant’s categorical imperative, the Golden Rule, and the evolved behavioural strategy of reciprocal altruism. This is not to say the Golden Rule or the categorical imperative are identical with or derived from evolved reciprocal altruism. But the similarities certainly seem worth exploring.

This article follows Masters of war in a discussion on The ethics of belief.

Robert Trivers provides a useful and concise definition of reciprocal altruism:

Altruism is suffering a cost to confer a benefit. Reciprocal altruism is the exchange of such acts between individuals so as to produce a net benefit on both sides.1

Reciprocal altruism as a behavioural strategy rests on a few preconditions, which again are well documented in evolutionary theory. For example individuals must be able to recognise and remember each other as individuals, so as to be able to recognise and remember each other as either past co-operators or past defectors (cheaters). In humans at least other developments follow so as to inspire trust and therefore distinguish individuals as co-operators. Examples are the protection of reputation and the involuntary expression of emotion.

Much of the explanation of the evolution of reciprocal altruism is based on game theory, which explores the consequences of adopting competing strategies in repeated, two-player, non-zero-sum games like the Prisoner’s Dilemma2. One such strategy is ‘tit for tat’, in which a player (player A) starts out cooperating with a new opponent (player B). If player B has a different strategy (for example ‘always defect’ – ie cheat), and therefore defects, player A will also defect in the next round with player B3. Since A’s strategy stays ‘tit for tat’, A’s opening move with another player (eg player C) will however be to cooperate. A population of individuals who all consistently adopt ‘tit for tat’ will therefore cooperate and succeed indefinitely. But if the starting population is a mixture of ‘tit for tat’ strategists and ‘defector’ strategists, computer models demonstrate that under certain conditions the ‘tit for tat’ strategists thrive at the expense of the ‘defectors’. ‘Thriving’ means here that if success in the game is translated into reproductive success, after a certain number of generations the concentration of ‘tit for tat’ strategists increases at the expense of the defectors.
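For anyone who wants to see the shape of such a computer model, here is a bare-bones sketch in Python: a repeated Prisoner’s Dilemma between ‘tit for tat’ and ‘always defect’, with game payoffs translated into shares of the next generation. The payoff values, round counts and starting proportions are arbitrary; it is meant only to illustrate the kind of simulation being described, not to reproduce any published result.

    # Hypothetical payoffs to the row player: C = cooperate, D = defect.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    ROUNDS, GENERATIONS = 50, 30   # arbitrary settings

    def tit_for_tat(opponent_history):
        return "C" if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return "D"

    def play(strategy_a, strategy_b):
        """Total score for strategy_a over repeated rounds against strategy_b."""
        score, history_a, history_b = 0, [], []
        for _ in range(ROUNDS):
            move_a, move_b = strategy_a(history_b), strategy_b(history_a)
            score += PAYOFF[(move_a, move_b)]
            history_a.append(move_a)
            history_b.append(move_b)
        return score

    strategies = {"tit for tat": tit_for_tat, "always defect": always_defect}
    population = {"tit for tat": 0.3, "always defect": 0.7}   # start with mostly defectors

    for _ in range(GENERATIONS):
        # Expected score of each strategy against a randomly drawn partner,
        # then reproduce in proportion to (current share x expected score).
        expected = {a: sum(population[b] * play(strategies[a], strategies[b])
                           for b in strategies) for a in strategies}
        fitness = {a: population[a] * expected[a] for a in strategies}
        total = sum(fitness.values())
        population = {a: f / total for a, f in fitness.items()}

    print(population)   # under these settings 'tit for tat' ends up with the larger share

Raise the defectors’ starting share high enough, or make pairings too short-lived, and the result can go the other way – which is what ‘under certain conditions’ is doing in the paragraph above.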

This is a hugely important finding. It has long been recognised that cooperative behaviour describable as reciprocal altruism does exist at many levels in nature – that same nature which in other respects is still ‘red in tooth and claw’. Before game theory explanations came along reciprocal altruism seemed to defy the evolutionary logic of survival of the fittest. But now there is a firm basis for understanding how ethical behaviour could have evolved, consistent with the rest of scientific knowledge.

We shall now try to look at ‘tit for tat’ in terms of Kant’s categorical imperative:

[(i)] I ought never to act in such a way that I couldn’t also will that the maxim on which I act should be a universal law.4

Simplifying into a second-person imperative – without the double negative – we get:

(ii) Always act in such a way that you could also will that the maxim on which you act should be a universal law.

‘Tit for tat’ could itself be expressed as a maxim:

(iii) When faced with a situation involving another individual and where the choices open to you are either to cooperate or defect, you should always cooperate unless you know that individual previously defected, in which case you should defect.

Now could maxim (iii) be logically willed to be a universal law? We saw in the discussion of the categorical imperative (Categorically imperative) that characterising an appropriately ‘universal’ domain is not quite as straightforward as Kant seemed to assume. Kant wanted his formula to apply to all free rational beings insofar as they were free and rational – not just to all human beings. But when content is added to generate specific imperatives on eg promising or telling the truth or killing, other attributes then need to be considered, which in humans take specific contingent values.

The issue is that free rational beings could potentially exist who are not human, even if the current world offers little evidence. If the categorical imperative derives (as Kant intended) from the concept of free rational being, then it should cover not only human beings but also free and rational angels, super-robots and extra-terrestrial aliens. And these, unlike humans, could possess immortality and/or omniscience and/or omnipotence and/or hypersensitivity and/or a whole range of other qualities consistent with freedom and rationality. Because of differences like these, it might be counterintuitive to extend to (say) hypersensitive, omniscient and immortal angels a maxim which works perfectly well across all human beings. (See Categorically imperative.)

If the argument above is sound, then we have only two options.

Option 1 is that a number of moral imperatives to do with lying, promising, killing and so forth no longer fit the categorical imperative formula. This would be unfortunate, as it would mean the categorical imperative no longer explains what it was intended to explain: why moral obligation is binding.

Option 2 is that specific imperatives, created by adding content to the formula, can legitimately apply within a specified subdomain (eg across all human beings). But as we saw in Categorically imperative the problem with this is knowing what is special about the subdomain of all human beings rather than (say) the subdomain of all white male human beings, or all human beings of Germanic descent. Applying a maxim across all human beings appears morally superior to applying a maxim across all white males, but the categorical imperative does not provide an explanation without begging the question.

Getting back to the ‘tit for tat’ formula, this question of domain is important because a maxim like (iii) works across some domains but not others. In an ‘ideal’ domain of perfect co-operators (and perfect operators) the maxim works. The default is to cooperate, and since no one encounters a defector, everyone continues to cooperate.

A problem would arise however if (say) the individuals were not all perfect operators. Although they all intend to cooperate, sometimes an individual could defect by mistake – behave as if defecting while intending to cooperate. Pure ‘tit for tat’ is unforgiving:

Round 1: A cooperates (default). B defects (by mistake).

Round 2: A defects (tit for tat). B cooperates (tit for tat).

Round 3: A cooperates (tit for tat). B defects (tit for tat).

…and so on. The relationship between A and B will only get back to its original reciprocal altruism if either makes a compensating mistake – to cooperate by mistake when the intention was to defect. But the opposite mistake could be just as likely – A or B defects by mistake when the intention was to cooperate. This would lock them into reciprocal defection indefinitely – until one of them cooperates by mistake.
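A tiny sketch of this ‘echo’ problem, with an invented error rate: two well-intentioned tit-for-tat players who occasionally play the wrong move spend much of their time out of step, whereas a variant that sometimes forgives an apparent defection (a stand-in for the distinction between deliberate and involuntary defection discussed next) recovers far more of the cooperation.

    import random

    random.seed(1)            # fixed seed for a repeatable illustration
    ERROR = 0.05              # hypothetical chance of playing the wrong move
    FLIP = {"C": "D", "D": "C"}

    def run(forgiveness, rounds=5000):
        """Fraction of rounds in which both players actually cooperate."""
        last_a, last_b = "C", "C"
        both_cooperated = 0
        for _ in range(rounds):
            # Tit for tat, except a 'forgiving' player sometimes overlooks a defection.
            intend_a = "C" if (last_b == "C" or random.random() < forgiveness) else "D"
            intend_b = "C" if (last_a == "C" or random.random() < forgiveness) else "D"
            move_a = FLIP[intend_a] if random.random() < ERROR else intend_a   # mistakes
            move_b = FLIP[intend_b] if random.random() < ERROR else intend_b
            both_cooperated += (move_a == "C" and move_b == "C")
            last_a, last_b = move_a, move_b
        return both_cooperated / rounds

    print(run(forgiveness=0.0))   # strict tit for tat: out of step much of the time
    print(run(forgiveness=0.3))   # a forgiving variant cooperates far more often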

But this is a special case of a familiar issue in ethics – part of familiar moral grammar, if you like. Moral codes and moral thinking have long distinguished between voluntary and involuntary actions. In general we tend to censure deliberate bad actions but forgive involuntary bad actions. In fact we may often forgive the involuntary bad action itself but censure a voluntary action or omission which may have allowed the involuntary bad action to happen. An obvious example is the distinction between premeditated murder and culpable homicide resulting from voluntary negligence.

Applying this distinction to our test case we now get something like:

(iv) When faced with a situation involving another individual and where the choices open to you are either to cooperate or defect, you should always cooperate unless you know that individual previously deliberately defected, in which case you should defect.

Or perhaps more pragmatically:

(v) When faced with a situation involving another individual and where the choices open to you are either to cooperate or defect, you should always cooperate unless you have good reason to think that individual previously deliberately defected, in which case you should defect.

These formulations add a bit more flesh to the preconditions for moral behaviour. And again it is familiar flesh. A moral agent addressed by (v) needs to be something of a mind reader – at least to the extent of having a functioning ‘theory of mind’5.

The last adjustment we will make to our illustrative strategy for now involves the last clause: ‘in which case you should defect’. It sticks out like a sore thumb, almost begging to be seen as evidence that obviously any attempt to base ethics on evolutionary game theory must be misguided.

But we are not claiming ‘good’ ethical behaviour boils down to ‘tit for tat’. Many people would for example consider a strategy like this morally superior to the previous one:

(vi) When faced with a situation involving another individual and where the choices open to you are to cooperate, defect or decline, you should always cooperate unless you have good reason to think that individual previously deliberately defected, in which case you should politely decline to participate – and explain why.

With (vi) we seem to be closer to a maxim which could come under the categorical imperative (ii). Someone acting on maxim (vi) could quite coherently will maxim (vi) to be a universal law. And if a pure ‘tit for tat’ strategy can thrive under natural selection, so can (vi). In fact it has the advantage of protecting the reputation of co-operators.
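As a sketch of how (vi) differs from plain tit for tat in behavioural terms – the ‘beliefs’ dictionary and the threshold for ‘good reason’ are invented for illustration:

    # Illustrative decision rule in the spirit of maxim (vi).
    # 'beliefs' maps known individuals to our confidence (0-1) that they have
    # deliberately defected in the past; the threshold for 'good reason' is arbitrary.
    GOOD_REASON = 0.8

    def respond(partner, beliefs):
        """Cooperate by default; on strong evidence of deliberate defection, decline rather than defect."""
        if beliefs.get(partner, 0.0) >= GOOD_REASON:
            return "decline politely and explain why"
        return "cooperate"

    beliefs = {"known cheat": 0.9, "clumsy but honest": 0.4}   # hypothetical
    print(respond("known cheat", beliefs))         # declines, but never defects
    print(respond("clumsy but honest", beliefs))   # cooperates: no good reason to think it was deliberate
    print(respond("total stranger", beliefs))      # cooperates by default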

A maxim like (vi) may not seem quite as fundamental as one against killing or lying or breaking a promise, but in the absence of shared principles like (vi) it is hard to see how social life can be maintained.

Of course few if any ethical situations involve games like the Prisoner’s Dilemma. And moral decisions are not the same as behavioural strategies. The point though is that models based on game theory can be used to explain how cooperative and reciprocally-altruistic behaviour could have evolved under natural selection, without needing any external (and particularly any supernatural) injection. With cooperation and reciprocal altruism we have a basis for building increasingly sophisticated levels of morally evaluated (and morally evaluating) social behaviour.

We do not have to prejudge how much of this evolved moral behaviour is present at birth, how much subsequent development takes place irrespective of environmental factors, and how much subsequent development may vary depending on those environmental factors. The theory merely proposes that, for humans, moral behaviour is part of their evolved nature, and outlines possible explanations as to how moral behaviour could have evolved.

If it is part of our evolved nature, then in theory it is supplied to every human individual regardless of that individual’s other characteristics, and regardless of the future progress of his or her life – just as every sparrow is supplied with the potential for flight, even though marauding magpies may prevent some of them from getting to take-off.

If this thinking is at all sound, it suggests a possible link to the universality of both the Golden Rule and Kant’s categorical imperative (and perhaps also to the ‘original position’ in John Rawls’s6 celebrated thought experiment?).

Some at least of which we will address in the next post…

References

1 RL Trivers, Reciprocal altruism: 30 years later. In: C.P. van Schaik and P.M. Kappeler (eds.) Cooperation in Primates and Humans: Mechanisms and Evolution. Berlin: Springer-Verlag, 2005.

2 RL Trivers, The evolution of reciprocal altruism, in: Quarterly Review of Biology, 46, 1971.

3 RL Trivers, 2005: See 1 above.

4 Immanuel Kant, Groundwork for the Metaphysic of Morals, translated by Jonathan Bennett (July 2005), http://www.earlymoderntexts.com/pdf/kantgw.pdf.

5 Marc D Hauser, Moral minds, Abacus, 2008.

6 John Rawls, A Theory of Justice, 1971.

© Chris Lawrence 2009.
