Reductio ad Professor Plum
I found myself talking last time about a lot of things I don’t really understand: Alan Turing’s proof of the unsolvability of the ‘halting problem’ for Turing machines, and Gödel’s Incompleteness Theorem. All because Stuart Kauffman’s book Reinventing the Sacred leaves me cold and confused.
[Fourth in a series on Stuart A Kauffman’s Reinventing the Sacred which began with Reinventing the sand dune.]
I’m beginning to see we need to dig a bit more into the ‘determinism’ side of Kauffman’s combined ‘reductionism-determinism’, which we started to analyse in Reinventing the sand dune. This was where we also introduced Kauffman’s ‘simplest statement of reductionism’ – from Laplace –
who said that a sufficient intelligence, if given the positions and velocities of all the particles in the universe, could compute the universe’s entire future and past.
Now let’s add the bit I quoted last time:
Let’s be clear about what [Philip W] Anderson’s argument [about the ‘halting problem’ displayed by Turing machines] does not say: it does not say that any laws of physics are violated. Physics is perfectly intact, it is merely that a determined physicist cannot come up with a list of physical events that are jointly necessary and sufficient for the water buckets, the silicon chip, or whatever else to compute the sum of the first 100 digits.
Does that ‘necessary and sufficient’ point back to Laplace’s imaginary ‘intelligence’ which can compute the entire future and past of the universe?
If we take a simple-minded view of causality, we would say that if A causes B, then A is a sufficient condition of B. But that in itself would not rule out other possible causes of B. X or Y could possibly also cause B. So A is a sufficient condition of B but not a necessary condition of B.
In an actual specific event E where B happened, the thing that caused B was whatever actually caused it. So in our example it was A which caused B, not X or Y – even though X or Y could have caused B. But even in that event E, A was not a necessary condition of B. It was the actual condition (cause) of B, but the cause could have been something else – eg X or Y. We just happen to know it wasn’t X or Y.
Now take a tiny (and highly simplified) slice of an actual or imaginary universe where Laplace’s determinism applies. We could see A, and then ‘compute’ forward and predict that B would happen. We watch and see that B does in fact happen. Alternatively we could start at B and ‘compute’ back in time. If all we know is that B happened, then we do not know (and therefore cannot ‘compute’) whether it was A or X or Y which caused B. If we know that only A or X or Y can possibly cause B, then we know that A or X or Y must have happened. Narrowing it further, if – somehow or other – we know that neither X nor Y did happen, then we know that A must have happened.
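The asymmetry in that paragraph can be sketched in a few lines of code. This is a minimal toy model of my own, not anything from Kauffman or Laplace: the state names A, X, Y, B and the transition table are invented purely to make the forward/backward contrast concrete.

```python
# Toy deterministic 'universe': every state maps forward to exactly one
# successor, so computing forward from a known state is trivial. But several
# distinct prior states (A, X, Y) all lead to B, so computing backward from
# B alone is ambiguous. All state names are illustrative.

FORWARD = {"A": "B", "X": "B", "Y": "B", "Z": "C"}

def predict(state):
    """Compute forward: the unique next state."""
    return FORWARD[state]

def retrodict(state):
    """Compute backward: every prior state that could have produced `state`."""
    return sorted(s for s, nxt in FORWARD.items() if nxt == state)

print(predict("A"))     # forward is unique: B
print(retrodict("B"))   # backward is not: ['A', 'X', 'Y']
```

Determinism in the forward direction, in other words, buys us nothing in the backward direction unless the transition map happens to be one-to-one.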
What is the strength of this ‘must’? It is a ‘must’ of logical deduction, as when a detective solves a crime. It must have been Professor Plum in the kitchen with the lead piping, because we have ruled out every other possibility. But Professor Plum had not been driven by necessity to commit the murder. So if A is a necessary (as well as sufficient) condition of B, it is only a necessary condition in the Professor Plum sense. This is not, as far as I know, what we typically mean by ‘necessary condition’.
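The Professor Plum ‘must’ is just set subtraction: eliminate the ruled-out hypotheses and see what survives. A tiny sketch (the suspects and the ruled-out facts are invented for illustration):

```python
# 'Professor Plum' necessity: deduction by elimination, not causal necessity.

possible_causes = {"A", "X", "Y"}   # everything we know could have caused B
ruled_out = {"X", "Y"}              # we happen to know these did not occur

remaining = possible_causes - ruled_out
print(remaining)                    # {'A'} -- so A 'must' have happened...
# ...but only in the sense that it is the sole surviving hypothesis,
# not in the sense that B could not, in principle, have arisen otherwise.
```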
Determinism therefore only seems to need sufficient conditions, not necessary and sufficient conditions – in the familiar sense of necessity. So is it the reductionism part which needs the necessary and sufficient conditions?
Again let’s take a simple example: a chemical phenomenon (the sort of thing we might read about in a chemistry textbook) ‘reducing’ to one or more physical phenomena (the sort of thing we might read about in a physics textbook). For example (in chemistry) the proportions in which calcium and chlorine react to form calcium chloride obey an empirical formula CaCl2. This can be explained (in physics) by the electronic configurations of calcium and chlorine atoms, in terms of the tendency of a calcium atom to lose two electrons and the tendency of a chlorine atom to gain one, resulting in stable Ca2+ and Cl– ions.
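The 2:1 proportion falls straight out of charge balance, which can be checked by back-of-envelope arithmetic. A sketch (the ionic charges are the standard ones; nothing here comes from Kauffman’s text):

```python
# Charge balance: the smallest whole-number ratio of Ca to Cl ions with
# zero net charge gives the empirical formula CaCl2.

from math import gcd

ca_charge = +2   # Ca loses two electrons -> Ca2+
cl_charge = -1   # Cl gains one electron  -> Cl-

common = gcd(abs(ca_charge), abs(cl_charge))
n_ca = abs(cl_charge) // common   # 1 calcium ion
n_cl = abs(ca_charge) // common   # 2 chloride ions

assert n_ca * ca_charge + n_cl * cl_charge == 0   # net charge is zero
print(f"Ca{n_ca if n_ca > 1 else ''}Cl{n_cl if n_cl > 1 else ''}")  # CaCl2
```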
But does this explanation (‘reduction’) rely on language of necessary and sufficient conditions?
Sufficient conditions certainly seem to apply. There is a state of affairs describable as one where (1) atoms of a particular element (Ca) have a particular electronic configuration such that the loss of two outer electrons per atom leads to an increase in stability (Ca2+); and (2) atoms of another particular element (Cl) have another particular electronic configuration such that they are already sharing electrons in pairs (Cl2) because this also results in an increase in stability; and (3) the two elements are brought together so they can react. This aggregate state of affairs (1 + 2 + 3) will be a sufficient condition for the creation of calcium chloride obeying an empirical formula CaCl2.
But is this aggregate state of affairs also a necessary condition for the creation of calcium chloride of empirical formula CaCl2? Compare the example of A being a cause (and therefore sufficient condition) of B, but where X or Y could also have caused B. In the case of calcium chloride, in theory there could be another explanatory state of affairs – eg a wizard weaving a spell which binds chlorine and calcium atoms in the magic proportion 2 to 1. But the state of our overall scientific knowledge is such that we know there is no other explanation.
So if there is a necessity, it seems to be the same kind of necessity as that of Professor Plum in the kitchen with the lead piping. We know of no other explanation, and we understand the explanation and its context well enough to think that any other possible explanation (eg the chemical wizard) would be very unlikely indeed. So in the context of our overall scientific knowledge the aggregate state of affairs (1 + 2 + 3) is a necessaryPP and sufficient condition for the creation of calcium chloride with an empirical formula CaCl2 – where ‘necessaryPP’ means ‘necessary only in the Professor Plum sense’.
In this case at least, the ‘reduction’ of chemistry to physics reduces to something apparently quite straightforward: the chemical phenomenon admits of an explanation in terms of physical phenomena which is coherent and congruent with the sum total of our scientific knowledge.
Now let’s consider Kauffman’s statement about the ‘determined physicist’ unable to
come up with a list of physical events that are jointly necessary and sufficient for the water buckets, the silicon chip, or whatever else to compute the sum of the first 100 digits.
If we compare this with the calcium chloride example it does not seem at all remarkable. The sum total of our scientific knowledge is such that the physical explanation (1 + 2 + 3) does represent an aggregate necessaryPP and sufficient condition for the creation of calcium chloride with an empirical formula CaCl2. The sum total of our scientific knowledge effectively rules out other possible explanations and causations.
The Turing machine is something quite different. It is something more like the plane triangle, which can have an endless number of different physical manifestations. There is no limit to the variety of possible physical implementations of a Turing machine. So there can be a set of sufficient conditions for any device to compute the sum of the first 100 digits of a very large number. In fact there would be an infinite number of possible sets of sufficient conditions. So by definition there cannot be a set of necessaryPP conditions.
A program to add up the first 100 digits of a very large number is one we know will complete in a finite number of steps.
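For concreteness, here is what such a program might look like. This is my own sketch, not Kauffman’s or Anderson’s example; the choice of large number is arbitrary. The point is only that the computation visibly terminates after a fixed, finite number of steps.

```python
# Summing the first 100 decimal digits of a very large number:
# a computation we know in advance will halt.

def sum_first_digits(n, k=100):
    """Sum the first k decimal digits of the integer n."""
    return sum(int(d) for d in str(n)[:k])

big = 7 ** 400   # an arbitrary very large number (339 digits)
print(sum_first_digits(big))
```

And of course the same program could run on any of the infinitely many possible physical implementations of a Turing machine – water buckets, silicon chips, or whatever else.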
But consider a different program, one which we know from its internal logic will never complete. This too can run on any of an infinite number of physically implemented Turing machines. In this case there will be no set of necessaryPP conditions, but no set of sufficient conditions either, for any physically implemented device to complete – because we know it will never complete.
Now consider a program about which we do not know if it will complete or not. There may or may not be a set of sufficient conditions for it to complete. In fact if there is a set of sufficient conditions then there will be an infinite number of sets of sufficient conditions. But while we remain in ignorance as to whether it will complete or not, all we can say is that either there is no set of sufficient conditions or there is an infinite number of sets of sufficient conditions for it to complete. And again, no set of necessaryPP conditions.
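The epistemic situation in these last two paragraphs can be made concrete. A minimal sketch, assuming we model programs as Python generators and simply watch them run for a bounded number of steps (the step bound and the example programs are my own illustrations, not Anderson’s or Kauffman’s):

```python
# Watching a program for finitely many steps settles the halting question
# in only one direction: we can confirm halting, never non-halting.

def halts_within(program, max_steps):
    """Run `program` (a zero-argument generator function) for up to
    max_steps steps. Returns True if it finished, or None if the bound
    was hit -- which is not a verdict either way."""
    it = program()
    for _ in range(max_steps):
        try:
            next(it)
        except StopIteration:
            return True          # definitely halts
    return None                  # might halt later, might run forever

def finite_loop():               # we can see from its logic that it halts
    for i in range(10):
        yield i

def infinite_loop():             # we can see from its logic that it never halts
    while True:
        yield

print(halts_within(finite_loop, 1000))    # True
print(halts_within(infinite_loop, 1000))  # None -- no finite observation
                                          # can certify non-halting
```

Turing’s theorem goes further, of course: no general procedure `halts_within` with an unbounded budget can exist at all. But even this bounded version shows why, for the undecided program, all we can say is ‘no verdict yet’.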
And again: so what? Am I missing something really fundamental about machines which do mathematics and logic? To be continued…
© Chris Lawrence 2011