I ended the previous post with a promise to explain how we get from a decision-making mechanism which arbitrates between different moral options to the concept of moral perfection. Gulp.
We can understand what sort of thing an explanation of cooperative (reciprocally-altruistic) behaviour would be in terms of evolutionary theory.1 We can then understand that conscious agents who have evolved with cooperative behavioural strategies might experience those behavioural strategies as imperatives. (To compare with a non-moral example, a conscious agent who has evolved with a combination of sensations experienced when the blood sugar is low and the stomach is empty might experience those sensations as, or accompanied by, an imperative to feed.)
A conscious agent faced with two mutually exclusive imperatives will experience conflict. Indefinite indecision could damage survival by wasting resources and opportunities. Selection pressure would therefore favour the evolution of algorithms and/or strategies and/or mechanisms to resolve conflict of this kind. In the social contexts we are considering here, for a positive imperative to do x to be trumped by a stronger positive imperative to do y is the same as seeing y as a morally better option than x.
If the choice is between three or more options, then for example the positive imperative to do x could be trumped by a stronger positive imperative to do y, but y in turn could be trumped by an even stronger positive imperative to do z. In this scenario z is the best of the three. An agent who tends to choose options in accordance with imperatives arising from reciprocal altruism (over, say, options satisfying purely personal happiness, survival or laziness) would be a ‘good’ agent. An agent who chooses these sorts of options relatively more often than another agent would be ‘better’ than that other agent. The agent who chooses in this way more than any other in a group of three or more would be the ‘best’. The agent who for whatever combination of reasons always chooses the best possible option, and perhaps even seeks out the most challenging scenarios for this sort of decision-making, would be one approaching moral ‘perfection’.
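The arbitration mechanism described above can be sketched very crudely in code. This is a toy illustration only, not a claim about how real imperatives are represented: the option names and the numeric 'strengths' are invented, and the whole moral vocabulary of good/better/best is being read off from a simple ordering.

```python
def resolve(imperatives):
    """Resolve conflict by acting on the strongest imperative.

    'Better' then just means 'higher in the ordering', and 'best'
    means 'highest in the ordering'.
    """
    return max(imperatives, key=imperatives.get)

# Three mutually exclusive options: y trumps x, and z in turn trumps y.
# The weights are purely illustrative.
imperatives = {"x": 0.4, "y": 0.6, "z": 0.9}

print(resolve(imperatives))  # z is the 'best' of the three
```

On this picture nothing more is needed for comparative moral judgement than a mechanism that imposes an ordering on competing imperatives and acts on the top of it.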
To go from good/better/best to the concept of perfection or absolute goodness is no more special or bizarre than to go from the relatively straightforward algorithm of counting to the concept of infinity. Infinity is where you project to if you assume that counting can continue without end. The seemingly magical or at least profoundly mysterious aspect of moral behaviour is not the moral behaviour per se, but the moral agent’s consciousness of him or herself as a moral agent. And this is equivalent to the profoundly mysterious (because still unexplained) consciousness of counting, which can lead to the conscious appreciation of the concept of infinity. Or indeed the profoundly mysterious – because still unexplained – consciousness of anything.
Count me in
But back to counting. Counting is not just a useful analogy in this context. Although ethically neutral in itself, counting – or something related to counting – could be quite crucial to the model of morality we are exploring here. Which brings us to another of the implications we need to draw out.
To be able to count you have to be able to see (or otherwise perceive) boundaries between things and similarities between things. So for example if I know what dogs and cats are, and I know they are both animals, I could count (say) three dogs and two cats, or I could count five animals. To count things, to see them as a set, or to put them in a set, I need to know what counts as one of the things. Even if I’m counting ‘every other grey blob in a cloud’ I must be able to see a blob as greyer than its surroundings; count that one; find the next and ignore it; and then find the next and count it.
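The point about counting presupposing membership criteria can be made concrete with a small sketch. The animals and the predicates here are illustrative inventions; what matters is only that every act of counting takes a criterion for what counts as one of the things counted.

```python
animals = ["dog", "cat", "dog", "cat", "dog"]

def count(items, counts_as_one):
    """Count the items satisfying a given membership criterion.

    There is no counting without the criterion: change what
    'counts as one', and you change what the count is of.
    """
    return sum(1 for item in items if counts_as_one(item))

dogs = count(animals, lambda a: a == "dog")   # three dogs
cats = count(animals, lambda a: a == "cat")   # two cats
everything = count(animals, lambda a: True)   # five animals
```

The same five things yield the counts three, two or five depending entirely on which set-membership criterion is applied.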
All fairly obvious, but what has it got to do with ethics?
(viii) Whenever conditions x1–xn obtain, do y
will appear to a self-conscious human agent as an imperative which is universal in two ways. It is universal as to the conditions (whenever conditions x1–xn obtain); and it is universal as to the agent – the strategy will apply as an imperative to all self-conscious human agents*. (*We will need to qualify this expression soon, but for now we can go with the flow.) As this is a strategy of cooperation (reciprocal altruism) further kinds, or nuances, of universality are contained within those two. The conditions x1–xn will contain, where relevant, references to that same domain of self-conscious human agents. For example: ‘whenever you see in front of you a person in danger of starving to death and you have food to spare…’. And then since strategy (viii) is universal as to the agent in that it applies to all self-conscious human agents, all those self-conscious human agents understand that it applies to every other self-conscious human agent. This, after all, is what ‘reciprocal’ means.
And this, too, is where counting comes in. It is no coincidence that we use the expression ‘count as’ (in for example ‘I count you as a friend’) to mean the same as ‘see as’. Counting is dealing in sets, and to deal in sets we need criteria to define membership of those sets.
Going back now to the self-conscious human agent who feels a universal imperative of the form (viii), we have to acknowledge that it is for that self-conscious human agent him or herself to construe ‘universal’. This is the qualification to the expression ‘all self-conscious human agents’ mentioned above (*). I understand the imperative of form (viii) applies universally – within the group I count myself a member of.
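An imperative of form (viii), universal only within the group the agent counts him or herself a member of, can be sketched as follows. Everything here is hypothetical shorthand: the group predicate, the condition, and the toy agents are all invented for illustration.

```python
def applies(me, other, in_my_group, conditions_obtain):
    """An imperative of form (viii): 'whenever conditions x1-xn
    obtain, do y' -- but binding only across the set of agents
    the deciding agent counts as members of their own group."""
    return in_my_group(me, other) and conditions_obtain(other)

# One possible (tribe-bounded) construal of 'universal'.
in_my_group = lambda me, you: you["tribe"] == me["tribe"]
conditions = lambda you: you["starving"]  # e.g. 'a person in danger of starving'

me = {"tribe": "A"}
neighbour = {"tribe": "A", "starving": True}
stranger = {"tribe": "B", "starving": True}

print(applies(me, neighbour, in_my_group, conditions))  # True
print(applies(me, stranger, in_my_group, conditions))   # False
```

Widening or narrowing `in_my_group` is exactly the move discussed next: the conditions of the imperative can hold identically for neighbour and stranger alike, yet whether it binds depends on who I count as one of ‘us’.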
This issue of whether or not we recognise the ‘other’ as something like ourselves certainly accords with a familiar, and significant, aspect of ethical thought. Reciprocal altruism is based on trust, and to a significant extent the domain(s) I see myself as a member of are the domain(s) within which I extend my trust. Is my domain my family, my tribe, my gender, my church, my faith, my nation or my race? Is it my generation – perhaps to the exclusion of future generations? Is it my species?
We are on the edge of another implication of the model – which again I will postpone to next time.
© Chris Lawrence 2009.