Monday, December 5, 2011

The Emergence of Law



For many scientists, the notion of a lawful, physical universe is a very attractive one -- it implies that in principle, everything is explicable through appeal to notions (more or less) directly accessible to us via scientific investigation. If the universe were not lawful, then it seems that any attempt at explanation would be futile; if it were not (just) physical, then elements necessary to its explanation may lie in a 'supernatural' realm that is not accessible to us by reliable means. Of course, the universe may be physical and lawful, but just too damn complicated for us to explain -- this is a possibility, but it's not something we can really do anything about.
(I have previously given a plausibility argument that if the universe is computable, then it is in principle also understandable, human minds being capable of universal computation at least in the limit; however, the feasibility of this understanding, of undertaking the necessary computations, is an entirely different question. There are arguments one can make that if the universe is computable, one should expect it to be relatively simple, see for instance this paper by Jürgen Schmidhuber, but a detailed discussion would take us too far afield.)
But first, I want to take a moment to address a (in my opinion, misplaced) concern some may have in proposing 'explanations' for the universe, or perhaps in the desirability thereof: isn't such a thing terribly reductionist? Is it desirable to reduce the universe, and moreover, human experience within the universe, to some cold scientific theory? Doesn't such an explanation miss everything that makes life worth living?
I have already said some words about the apparent divide between those who want to find an explanation for the world, and those who prefer, for lack of a better word, some mystery and magic to sterile facts, in this previous post. Suffice it to say that I believe both groups' wishes can be granted: the world may be fully explicable, and yet full of mystery. The reason for that is that even if some fundamental law is known, it does not fix all facts about the world, or more appropriately, not all facts can be deduced from it: for any sufficiently complex system, there exist undecidable questions about its evolution. Thus, there will always be novelty, always be mystery, and always be a need for creativity. That an underlying explanation for a system's behaviour is known does not cheapen the phenomena it gives rise to; in particular, the value of human experiences lies in the experiences themselves, not in the question of whether they are generated by some algorithmic rule, or are the result of an irreducible mystery.

In the previous discussion, I brought up, as an example of a rule-guided system that nevertheless can give rise to complex and unforeseen phenomena, simple 'games' known as cellular automata. A cellular automaton consists of a grid of cells, and a simple rule that determines what colour to paint each cell, depending on the previous state of the grid -- in the simplest cases, on the state of the cell itself and its immediate neighbours alone.
The most basic such automata are those in which the grid consists of just a one-dimensional array of cells; their evolution is typically depicted as a two-dimensional grid, where each line represents the next 'time-step' in the evolution of the line above it, i.e. one application of the rule. A typical evolution of such a cellular automaton looks like this:
Fig.1: Evolution of Rule 110
(The picture was generated using Wolfram|Alpha; if you fancy playing around with it a little, you can just type in 'rule' followed by a number between 0 and 255 -- there are 256 elementary cellular automata -- and it will show you the rule the automaton follows, and generate an example evolution.)
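To make this concrete, here is a minimal sketch of such an elementary cellular automaton in Python; the wrap-around at the edges of the row is a convenience choice of mine, not part of the definition (Wolfram|Alpha instead pads with white cells):

```python
# Minimal elementary cellular automaton (Wolfram's numbering: the rule
# number's binary digits give the new colour for each of the eight
# possible left/centre/right neighbourhoods).
def step(cells, rule=110):
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 40 + [1] + [0] * 40   # start from a single black cell
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)             # one time-step: one application of the rule
```

Running this with rule 110 produces the kind of triangular structures shown in Fig. 1.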
These automata form the paradigmatic example of what I will call an active or prescriptive law. Their evolution is described by a fixed rule, and every step of their evolution looks the way it does precisely because of that rule; they have no freedom, they could not have done otherwise (though it is possible to soften that condition by introducing a probabilistic law that, say, paints a certain cell black with 30% probability). The law determines their evolution.
When we think about (physical) laws, we usually think about active laws -- the stone fell down because of gravity, unstable atoms emit beta radiation because of the weak force, a massive object does not change its state of motion if no forces act upon it because of the law of inertia. Indeed, most people would perhaps hold that this is the only kind of law, or at least the only kind truly worthy of that name.
But if this were so, the proposal of a lawful, physical universe would face an apparently insurmountable obstacle. The reason for this is that the laws do not explain themselves: any explanation of a physical universe in these terms would be faced with the question, 'Why these laws?', and unable to answer it -- and hence, would be incomplete as an explanation. Stephen Hawking, in A Brief History of Time, laconically posed the question as: "What is it that breathes fire into the equations and makes a universe for them to describe?"
If there is a fundamental law, who or what put it there?
Certain attempts at ameliorating the problem have been made, which pursue broadly opposing directions: one is to insist that ultimately, there is only one set of laws that could possibly give rise to a universe, singled out by some criterion, often mathematical or logical in nature; the other one is to assume that all possible laws lead to universes, but we just happen to inhabit this one, because it is suited to our needs -- i.e. we couldn't exist anywhere else, hence, we exist here (this is subsumed under the umbrella of the 'anthropic principle', the subtleties of which I have no intention of getting into here).
Both paths, in my opinion, face interestingly similar difficulties. The most important is the lack of testability. If, in the first case, there is no way the universe could have been otherwise, we lose the Popperian criterion of falsifiability that is supposed to differentiate good science from mere speculation (though here, too, a bit of discussion is lurking that has to be shelved for the moment), as there is no possible experiment that could falsify the theory -- for if there were one, then there would be a different way the universe could have been, and the theory would not be unique. The second case suffers from the same problem, but here it is due to an embarrassment of riches: for every conceivable experiment, there exists a set of laws consistent with it, so an experiment can only tell us which universe we inhabit, not whether the notion of the existence of a 'multiverse' is true or false.
Fortunately, I believe that there is another possibility.

Passive laws
Not all laws are of the previously-defined active kind; there are also laws that are what I will call passive or descriptive. The clearest examples are those obtained by some sort of 'averaging' procedure over a system's true fundamental dynamics. In certain cases, it may not matter what the system is doing in detail; an approximate, 'coarse-grained' description may be fully appropriate. This is the case, for instance, in statistical physics: if you consider a gas, it is usually immaterial what each and every one of its constituent atoms is doing; rather, we are more interested in the gas' macroscopic properties.
These macroscopic properties are defined by the aggregate dynamics of the gas' microscopic constituents: the temperature is the average kinetic energy of the atoms; the pressure is the average force per unit area exerted on the walls of the container by the atoms colliding with them; etc.
One can find various relationships between these macroscopic properties, the most important of which is the ideal gas law, which states that the product of pressure and volume is proportional to the temperature, with the proportionality constant being related to the amount of gas we are considering.
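In symbols, with p the pressure, V the volume, N the number of gas particles, k Boltzmann's constant, and T the temperature, the ideal gas law reads simply

pV = NkT.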
These relationships have the same form as the rules of the cellular automata we discussed earlier: given some characteristics of the system, it is possible to use them to derive others; in particular, it is possible to describe the time evolution of the system, given the laws.
However, the interpretation of both kinds of laws must be different: while the laws governing the cellular automaton evolution exactly determine that evolution -- i.e. the evolution is a certain way because the laws say so --, laws such as the ideal gas law are the way they are because the evolution of the system happens the way it does; these laws do not prescribe a certain evolution, they merely describe the evolution that occurs. It is not the case that a future state of some gas is the way it is because the ideal gas law says so; rather, the ideal gas law has to be formulated the way it is in order to be able to describe the state of the gas.
The reason for this is that the cellular automaton's rule is a fundamental law, while the ideal gas law isn't -- it can be derived, as a relationship between statistical expectation values, from the dynamics governing the microscopic constituents of the gas -- i.e. the atoms it is made of. This entails the possibility that the ideal gas law may be violated! It is arrived at by 'throwing away' information about the fundamental laws, by going to a statistical description. But, statistics are only right on average; there is a certain probability that things might be different (however, a gas contains so fantastically many atoms, that violations of the statistics are spectacularly unlikely).
So far, we seem only to have managed to lose something, without any apparent gain: the new kind of laws we have found, the passive laws, only provide an approximate description of the system they apply to; it is only the force of probability that makes them hold. Nothing in the world says they can't be violated; they just will tend not to be, for sufficiently great sample sizes.
However, it is interesting to note that one very important law -- some say, the most important one -- is of just this passive kind: namely, the second law of thermodynamics, the law of entropy increase. Recall the discussion in this post: entropy is a measure of the number of microstates that yield the same observable macrostate -- for instance, the number of ways atoms in a volume of gas can be rearranged without the gas looking any different. Macrostates having a greater number of associated microstates are more likely than macrostates that only few microstates give rise to -- for instance, the macrostate in which a gas occupies only half the volume of its container has far fewer ways of arranging the atoms (fewer by a factor of 2^N for N atoms, in fact) than the macrostate that fills the whole volume. Any change in state will typically lead to a more likely state -- simply by force of there being more of those to choose from -- which is a state of higher entropy. Thus, entropy always increases.
However, there is no fundamental law that says a gas can't spontaneously occupy just half the volume of the room -- it just will tend not to.
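Just how strong this tendency is can be put into numbers: since each atom independently has probability 1/2 of being found in, say, the left half of the container, the probability of finding all N atoms there at once is (1/2)^N. For a macroscopic amount of gas, N is of the order of 10^23, so this probability is roughly 1 in 2^(10^23) -- a number so unimaginably small that we should not expect to witness such a fluctuation even once in the lifetime of the universe.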
The interesting thing now is that this is not a law that needs to be built in, but one that will arise spontaneously, no matter the underlying dynamics -- it is thus a law exempt from the question: 'Why this law?'
In being descriptive rather than prescriptive, the second law justifies itself: it does not cause the system to behave the way it does; rather, it emerges out of the behaviour the system shows by itself. Unlike a cellular automaton rule, which has to be built into the automaton by whoever created it, it arises spontaneously out of the system's behaviour -- indeed, even out of a cellular automaton's -- without having been put in beforehand!
That's all well and good, you might say, but underneath it all, there still is the fundamental rule governing the cellular automaton; whether or not we choose to forget about the microscopic laws, they still need to be there for any descriptive laws to emerge, don't they?
Well, actually, and perhaps (hopefully!) quite surprisingly, the answer to that question is: no, they don't! Even in the complete absence of a fundamental law, supervening, descriptive, passive laws can emerge.

Law Without Law
The argument is extremely simple. Consider a perfectly lawless object; any will do. In previous discussions, we have identified lawlessness with randomness: if there is no way to predict the behaviour of a certain system, then it is lawless; and equivalently, it is random (as any prediction can then only be as good as chance allows). So, as a lawless object, we may take a random binary string, i.e. a sequence of 1s and 0s such that, knowing the first (n-1) bits, guessing the nth is only successful with a probability of 50%. We may think of this string as being the record of the evolution of a certain physical system, or of a series of experiments performed on the system, but we can just as well consider it in the abstract -- as you recall, anything can be coded in binary.
Now let's imagine that, instead of seeing the string up close, like this: 101001011110101110100..., we were sensitive only to certain 'macroscopic' properties of the string -- for instance, the total number of 1s versus 0s. Imagine, for instance, that a macroscopic experiment corresponds to a great many microscopic ones -- a realistic model, if you consider that measuring, say, the temperature of a gas corresponds to measuring the kinetic energies of billions and billions of its constituent atoms. Surprisingly, while we can't predict anything on the microscopic bit-level, because the string is indeed utterly lawless at that level, on the macroscopic level we now gain the capacity to make predictions -- if only probabilistic ones. And even more strikingly, that capacity for prediction emerges precisely because of the fundamentally random, i.e. lawless, nature of the string!
Because of this randomness, we know that, at each position, a 1 is as likely to show up as a 0; for a sufficiently long string, then, there will be very nearly as many 1s as there are 0s. We can thus predict that whenever we make our macroscopic experiment, we will receive a string in which 0s and 1s occur in roughly equal proportion, and not a string that consists only of 0s, or in which 1s greatly outnumber the 0s. The more macroscopic our experiment, i.e. the longer the string, the more accurate this prediction will be. This is a law that emerges from fundamental lawlessness -- a law without need for justification. A universe built on such laws may thus be both physical and lawful, and hence, explicable at least in principle.
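This sharpening of the macroscopic prediction is easy to see numerically; here is a small sketch (the string lengths are arbitrary choices of mine):

```python
import random

# Microscopically lawless: every bit is an independent, unbiased coin flip.
# Macroscopically lawful: the fraction of 1s is predictable, and the
# prediction sharpens as the 'experiment' -- the string -- grows longer.
for n in (10, 1_000, 100_000, 1_000_000):
    ones = sum(random.randint(0, 1) for _ in range(n))
    print(f"length {n:>9,}: fraction of 1s = {ones / n:.4f}")
```

No individual bit can be predicted better than chance, yet the printed fractions home in on 1/2 as the strings get longer.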
However, the possibility of predicting the uniformity of a bit string may not sound overly impressive at first. But consider that bits can be used to represent anything. Fundamentally, a bit is nothing but a distinction: between up and down, red and green, round and square. Whatever differs in one characteristic can be used to store one bit of information. Conversely, one bit of information can be used to differentiate two things in one property. Bit strings can thus be thought of as representing all the properties of one object, as distinct from other objects. (This necessity of referring to other objects introduces an interesting, relational aspect into the description: in a set of identical elements, none can be told apart from any other, so there is no point of reference for ascribing to these elements properties of their own; you first need to introduce objects that differ from an element in at least one characteristic in order to meaningfully speak about that characteristic, and in order to be able to represent information using these objects. We will return to this notion at some later point.)
A whole string of bits might, in aggregate, then stand for a macroscopic property; and just as only aggregate properties of the object matter macroscopically, only aggregate properties of the bit string may matter to determine them. So, consider the proposition 'the bit string consists of equally many 0s and 1s' to stand for 'the moon is made of rock', and the proposition 'the bit string consists only of 0s' to stand for 'the moon is made of green cheese'. Whenever you look at the moon, you are fed a new bit string -- the property has no independent existence apart from a measurement context. Nevertheless, with overwhelming probability, you will observe a rocky moon, rather than one made out of green cheese, even though the fundamental laws of the universe don't require it -- even if nobody ever decreed it to be this way, rather than any other.
Or, as a last example, consider a set of bit strings, each of which is determined separately at random. From any state whatsoever -- say, all bit strings start out all 0, or all 1 -- this system will, bit string by bit string, evolve towards a state in which almost all bit strings are composed of equally many 0s and 1s, showing only small fluctuations away from this state. This already comes very close to the thermodynamic phenomenon of equilibration, i.e. of evolution towards a certain state of equilibrium -- say, one of uniform temperature and pressure in the case of a gas.
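Here is a quick numerical sketch of this equilibration, reading 'determined separately' as redrawing one randomly chosen string at a time (the counts and the 10% tolerance are arbitrary choices of mine):

```python
import random

n_strings, length = 1_000, 100
strings = [[0] * length for _ in range(n_strings)]  # start far from equilibrium

def near_balanced(s, tol=0.1):
    # 'equilibrated': the fraction of 1s lies within tol of one half
    return abs(sum(s) / len(s) - 0.5) <= tol

for step in range(5 * n_strings + 1):
    if step % n_strings == 0:
        frac = sum(near_balanced(s) for s in strings) / n_strings
        print(f"after {step:5d} redraws: {frac:.0%} of strings near half 1s")
    # redraw one randomly chosen string, its bits determined separately
    i = random.randrange(n_strings)
    strings[i] = [random.randint(0, 1) for _ in range(length)]
```

Starting from 0% of strings balanced, the population drifts towards a state in which almost all strings are near-balanced, and then merely fluctuates about it.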
This, then, is the main message of this post: law can come from non-law, and not all kinds of law beg the question of their own origin. In the next post(s), I will discuss how quantum mechanics implies that in fact, all of the laws that govern our universe can be considered to be of this passive type -- but first, we'll have to think a little about what quantum mechanics actually is, and come to terms with some of its effects and implications.
