Food for thought

We ended the previous post with a model for human moral behaviour. The model is not proven, of course, but it can, I think, claim to be coherent and feasible.

This article follows Three wise mentalities in a discussion on The ethics of belief.

The Good Samaritan (van Gogh)

The model draws its inspiration from significant similarities in structure and content between Kant’s categorical imperative [1], evolutionary theory regarding reciprocal altruism [2], and principles like the Golden Rule found in many world religions and ethical systems.

Evolutionary theory provides the model’s explanation as to how moral behaviour arose. But the model does not suffer from the naturalistic fallacy. It does not imply a particular behaviour is good because it evolved. It would acknowledge that ‘bad’ behaviour is just as likely to have evolved as ‘good’ behaviour, if not more so. Whether or not something evolved is, ethically speaking, neither here nor there. The important question is whether the evolved feature itself is ethically significant.

But the model does not rely on a purely external criterion of what constitutes ethically significant behaviour either. It is not left to an arbitrary subjective judgment to decide which evolved behaviour is ‘good’ and which is ‘bad’ or ‘neutral’. We get part of the way towards the criterion by saying that ‘good’ evolved behaviour is behaviour arising out of algorithms peculiar to social interaction.

But this is not a definition of ‘good’ or ‘right’. To call an action or decision ‘good’ or ‘right’ is still a conscious judgment – but it is not arbitrary. This is one of the things the algorithmic insight adds to the picture.

We said in the previous post that an evolved behavioural strategy of the form

(viii) Whenever conditions x1 … xn obtain, do y

will appear to a self-conscious human agent as an imperative. This imperative is universal in two ways. It is universal as to the conditions (whenever conditions x1 … xn obtain); and, assuming it is a strategy which all humans have inherited, it is universal as to the agent – strategy (viii) applies as an imperative to all self-conscious human agents.
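As a purely illustrative sketch (not part of the original model, with all names and conditions invented for the example), a strategy of this condition–action form can be pictured as a rule that fires for any agent whenever all of its conditions obtain:

```python
# Illustrative sketch only: a strategy of the form
# "whenever conditions x1 ... xn obtain, do y".
# All names and conditions here are hypothetical.

from typing import Callable, Dict, List

class Strategy:
    def __init__(self,
                 conditions: List[Callable[[Dict], bool]],  # the x1 ... xn
                 action: Callable[[Dict], None]):           # the y
        self.conditions = conditions
        self.action = action

    def applies(self, situation: Dict) -> bool:
        # The imperative is triggered only when *all* the conditions obtain.
        return all(condition(situation) for condition in self.conditions)

    def act_if_applicable(self, situation: Dict) -> None:
        if self.applies(situation):
            self.action(situation)

# The same strategy object could be handed to every agent, which is the
# sense in which it is universal as to the agent as well as to the conditions.
strategy_viii = Strategy(
    conditions=[lambda s: s.get("other_in_need", False),
                lambda s: s.get("able_to_help", False)],
    action=lambda s: print("cooperate"),
)
strategy_viii.act_if_applicable({"other_in_need": True, "able_to_help": True})
```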

This universality is what maps the model to the categorical imperative. The categorical imperative was profoundly correct in linking the conscious agent’s individual quandary to its universal context. But the way Kant construed the categorical imperative is unnecessarily abstract in two respects.

First, in order to give the categorical imperative the metaphysical clout he wanted, Kant derived it from the concepts of freedom and rationality. This would establish it as a synthetic a priori, and therefore as necessarily binding. But as we saw in Categorically imperative, this stipulation has some counter-intuitive consequences, and it does not seem as possible as Kant thought to formulate categorical imperatives which apply across the entire domain of free rational beings. Narrowing our sights just to human beings removes these difficulties.

Second, Kant did not want to confine the categorical imperative to social interactions or contexts which had a social dimension. He insisted there were duties to self, and that these would fall under the categorical imperative just as duties to others would. But again, as we saw in Precious metal rules OK? and Three wise mentalities, his supporting arguments are less than convincing and rely on an unwarranted teleological assumption.

Both of these could be seen as remnants of a theist perspective. A theist perspective might see humans as contingent creations of a god, distinguished only by their free will and their rationality, and whose every other characteristic can be discounted as merely animal. That same perspective might also see human social organisation as ultimately contingent, if for example a person’s most fundamental relationship is with his or her god.

Once we strip these two abstract remnants away, the categorical imperative moves very close to the Golden Rule – particularly the Golden Rule when it is itself applied in the spirit of the Golden Rule. This is the so-called Platinum Rule: Do to others as they would like you to do to them. See Precious metal rules OK?

There is an interesting parallel between the two following dichotomies:

  • The idea of morality as derived from evolved reciprocal altruism (and as therefore a posteriori) versus Kant’s idea of morality grounded in the categorical imperative, which is a synthetic a priori principle.
  • The idea of causality as part of evolved ‘intuitive physics’ (and again as therefore a posteriori) [3] versus Kant’s idea of causality as a synthetic a priori category of thought [4].

(Could there be another parallel between all of these and Plato’s doctrine of recollection or anamnesis [ἀνάμνησις] [5][6]?)

To repeat: I am not taking it as self-evident that the accounts provided by evolutionary psychology are even partially correct. But I am claiming they are feasible and scientifically verifiable and falsifiable. As such they connect with the rest of scientific knowledge, and stand or fall by the balance of evidence in their favour. But if they do hold some measure of truth they shed a very different light on issues which were hitherto assumed to be purely the domain of metaphysics and/or philosophy of language and/or philosophy of mind.

We can elaborate the model further by drawing out a few of its implications. A good place to start is with moral conflict within the same individual.

One type of conflict is between mutually exclusive selfish and cooperative (reciprocally-altruistic) options available to an individual at the same time.

An agent calculating the best thing to do (in the sense of the most prudent, not necessarily the morally best) would generally make different decisions depending on whether he or she was factoring in future costs and benefits. An agent ignoring (or not even considering) future consequences would, other things being equal, tend to choose immediate gratification.

Now the algorithmic logic behind evolved reciprocal altruism presupposes the capacity to delay gratification. I choose to cooperate now so as to enjoy the benefits of others cooperating with me in the future – either the specific individual I am cooperating with right now, or others more generally who may learn about my reputation as a cooperator.
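As a rough numerical illustration of that logic (the figures and the discount factor below are invented for the example, not drawn from the post), cooperating now is only ‘worth it’ when the discounted, probability-weighted value of future reciprocation exceeds the immediate cost:

```python
# Toy illustration of the delayed-gratification logic behind reciprocal
# altruism. All numbers are made up for the example.

def worth_cooperating(immediate_cost: float,
                      future_benefit: float,
                      prob_reciprocation: float,
                      discount: float) -> bool:
    """Cooperate now only if the discounted, probability-weighted
    future benefit exceeds the immediate cost of cooperating."""
    expected_future_value = prob_reciprocation * discount * future_benefit
    return expected_future_value > immediate_cost

# An agent that ignores the future entirely (discount = 0) never finds
# cooperation worthwhile; an agent that weighs the future may well do so.
print(worth_cooperating(immediate_cost=1.0, future_benefit=3.0,
                        prob_reciprocation=0.8, discount=0.0))   # False
print(worth_cooperating(immediate_cost=1.0, future_benefit=3.0,
                        prob_reciprocation=0.8, discount=0.9))   # True
```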

Moral agents (i.e. agents who do have capacities like delayed gratification and algorithmically derived behavioural strategies, and are therefore able to participate in social interactions involving reciprocal altruism) will now be exposed to a second type of conflict. This is the conflict between one cooperative (reciprocally-altruistic) option and another.

For example the agent could have evolved with both the following strategies (consciously perceived as imperatives):

  • Whenever conditions a1 … an obtain, do p.
  • Whenever conditions b1 … bm obtain, do q.

What does the agent do in a scenario where it is only possible to do either p or q but not both p and q? (The agent might also be free to do neither, but this would not be a conflicting option in this scenario.) In evolutionary terms the agent would need an algorithm or other mechanism for making a decision, so as to avoid the survival cost of indecision and prevarication.
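One way of picturing such a mechanism (this is a sketch of one possible tie-breaker, with hypothetical weights, not a claim about how the conflict is actually resolved) is a simple weighting rule applied when both imperatives fire at once:

```python
# Sketch of a decision mechanism for when two strategies both apply but
# only one of the actions (p or q) can be performed. The weights are
# hypothetical stand-ins for whatever the evolved tie-breaker might be.

def choose_action(strategies, situation):
    """Return the action of the highest-weighted applicable strategy,
    or None if no strategy applies (the 'do neither' case)."""
    applicable = [s for s in strategies if s["applies"](situation)]
    if not applicable:
        return None
    return max(applicable, key=lambda s: s["weight"])["action"]

strategies = [
    {"applies": lambda s: s["conditions_a_obtain"], "action": "do p", "weight": 2.0},
    {"applies": lambda s: s["conditions_b_obtain"], "action": "do q", "weight": 1.0},
]

# Both sets of conditions obtain, but only one action can be taken.
print(choose_action(strategies, {"conditions_a_obtain": True,
                                 "conditions_b_obtain": True}))   # do p
```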

For example, consider the following pair of options. (Yes, I know that strictly speaking the example involves kin selection rather than just reciprocal altruism, but I wanted an example whose point would be readily apparent.)

Option 1: Give your only remaining and indivisible unit of food to your son, who will otherwise starve to death.

Option 2: Give your only remaining and indivisible unit of food to a total stranger, who will otherwise starve to death.

It would be understandable if the agent eventually (but fairly rapidly) chose option 1.

Now consider these two:

Option 2: Give your only remaining and indivisible unit of food to a total stranger, who will otherwise starve to death.

Option 3: Give your only remaining and indivisible unit of food to your son, who will not otherwise starve to death, but who really enjoys that sort of food.

Here it would be odd to say the least for the agent not to choose option 2.

And lastly:

Option 4: Give all your remaining but divisible food to your son, who will starve to death if he receives no food at all, but needs less than half to survive.

Option 5: Give all your remaining but divisible food to the total stranger, who will starve to death if he receives no food at all, but needs less than half to survive.

Option 6: Divide your remaining food 50/50 between your son and the total stranger, who will both starve to death if they receive no food at all, but need less than half each to survive.

Option 7: Divide your remaining food between your son and the total stranger (who will both starve to death if they receive no food at all), giving the stranger enough to survive, and the rest (the majority) to your son.

Option 8: Divide your remaining food between your son and the total stranger (who will both starve to death if they receive no food at all), giving your son enough to survive, and the rest (the majority) to the stranger.

Option 9: Divide your remaining food between your son and the total stranger (who will both starve to death if they receive no food at all), in proportion to your perception of their relative need.

Here the single ‘right’ choice is not so obvious. Both 4 and 5 seem as ‘wrong’ as option 3 in the previous scenario. But options 6 to 9 all have at least some merit – in the sense that one might find oneself imagining under what further conditions (not specified here) each might be better than the other three.
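To make option 9 a little more concrete, here is a small worked sketch of dividing the food in proportion to perceived relative need (the quantities and the perceived needs below are invented for the illustration; only the ‘needs less than half to survive’ condition comes from the scenario):

```python
# Worked sketch of option 9: divide the remaining food between son and
# stranger in proportion to perceived relative need. The quantities are
# invented for the illustration.

def divide_by_need(total_food: float, need_son: float, need_stranger: float):
    total_need = need_son + need_stranger
    return (total_food * need_son / total_need,
            total_food * need_stranger / total_need)

total = 10.0                                    # units of food
son_share, stranger_share = divide_by_need(total, need_son=4.0, need_stranger=3.0)
print(round(son_share, 2), round(stranger_share, 2))   # 5.71 4.29

# Each needs less than half the total (i.e. under 5.0 units) to survive,
# so on these figures both the son and the stranger survive under option 9.
```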

The point of introducing this kind of conflict is that in an entirely straightforward way it brings with it the concept of one option being ‘morally better’ than another. If we have the concept of ‘morally better’ then we have the concept of ‘morally best’. For example, although we may struggle to identify which of options 6 to 9 is the morally best of the four, we do not struggle with the idea that one of them might be.

From here, in terms of the logic and semantics of language, it is not that big a step to the concept of moral perfection.

But we’ll leave that to next time!

References

[1] Immanuel Kant, Groundwork for the Metaphysic of Morals, translated by Jonathan Bennett (July 2005), http://www.earlymoderntexts.com/pdf/kantgw.pdf.

[2] See e.g. R. L. Trivers, ‘Reciprocal altruism: 30 years later’, in C. P. van Schaik and P. M. Kappeler (eds.), Cooperation in Primates and Humans: Mechanisms and Evolution, Berlin: Springer-Verlag, 2005.

[3] See e.g. Steven Pinker, The Stuff of Thought, Allen Lane, 2007.

[4] Immanuel Kant, Critique of Pure Reason.

[5] Plato, Meno.

[6] Plato, Phaedo.

© Chris Lawrence 2009.
