Thursday, 29 December 2011

Untangling Entanglement



What interference was to Feynman (see the previous post), the phenomenon known as entanglement was to Erwin Schrödinger (he of the cat): the 'essence' of quantum mechanics. Entanglement is often portrayed as one of the most outlandish features of quantum mechanics: the seemingly preposterous notion that the outcome of a measurement conducted over here can instantaneously influence the outcome of a measurement carried out way over there.
Indeed, Albert Einstein himself was so taken aback by this consequence of quantum mechanics (a theory which, after all, he helped to create), that he derided it as 'spooky' action at a distance, and never fully accepted it in his lifetime.
However, if we view quantum mechanics as a simple generalization of probability theory, adopted in order to deal with the complementary propositions that arise when not all possible properties of a system are simultaneously decidable, then quantum entanglement may be unmasked as not really that strange after all, but rather as a natural consequence of the limited information content of quantum systems. In brief, quantum entanglement does not qualitatively differ from classical correlation; however, the amount of information carried by the correlation exceeds the bounds imposed by classical probability theory.
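To make that last claim a little more concrete, here is a minimal numerical sketch (my own illustration, not part of the original argument; the measurement settings and the use of NumPy are just convenient choices): for a maximally entangled pair of qubits, the standard CHSH combination of correlations reaches 2√2, whereas any correlation explicable within classical probability theory is bounded by 2.

```python
import numpy as np

# Pauli matrices
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def meas(theta):
    """Spin measurement along an axis at angle theta in the x-z plane (eigenvalues +/-1)."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet (maximally entangled) state of two qubits: (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation <A(a) x B(b)> in the singlet state."""
    return np.real(psi.conj() @ np.kron(meas(a), meas(b)) @ psi)

# CHSH combination of correlations; classical (local) models are bounded by 2.
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(abs(S))   # ~2.828 = 2*sqrt(2), exceeding the classical bound of 2
```

Nothing needs to travel between the two measurement sites for this; the correlations are simply stronger than any classical joint distribution over pre-existing answers can produce.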

Saturday, 17 December 2011

What is Quantum Mechanics?



So far, I've told you a little about where I believe quantum theory comes from. To briefly recap, information-theoretic incompleteness, a feature of every universal system (where 'universal' is to be understood in the sense of 'computationally universal'), introduces the notion of complementarity. This can be interpreted as the impossibility for any physical system to answer more than finitely many questions about its state -- i.e. it furnishes an absolute restriction on the amount of information contained within any given system. From this, one gets to quantum theory via either a deformation of statistical mechanics (more accurately, Liouville mechanics, i.e. statistical mechanics in phase space), or, more abstractly, via introducing the possibility of complementary propositions into logic. In both cases, quantum mechanics emerges as a generalization of ordinary probability theory. Both points of view have their advantages -- the former is more intuitive, relying on little more than an understanding of the notions of position and momentum; while the abstractness of the latter, and especially its independence from the concepts of classical mechanics, highlights the fundamental nature of the theory: it is not merely an empirically adequate description of nature, but a necessary consequence of dealing with arbitrary systems of limited information content. For a third way of telling the story of quantum mechanics as a generalized probability theory see this lecture by Scott Aaronson, writer of the always-interesting Shtetl-Optimized.
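To give a small taste of what 'generalization of ordinary probability theory' means in practice (this is Aaronson's framing; the particular matrices below are just convenient toy examples of mine): classical stochastic evolution preserves the 1-norm of a vector of probabilities, while quantum evolution preserves the 2-norm of a vector of amplitudes, whose squared magnitudes play the role of probabilities.

```python
import numpy as np

# Classical: a stochastic matrix (columns sum to 1) preserves the 1-norm
# of a probability vector.
stochastic = np.array([[0.9, 0.2],
                       [0.1, 0.8]])
p = np.array([0.5, 0.5])
print(np.sum(stochastic @ p))        # still 1.0

# Quantum: a unitary matrix preserves the 2-norm of an amplitude vector;
# probabilities are the squared magnitudes of the amplitudes.
unitary = np.array([[1, 1],
                    [1, -1]]) / np.sqrt(2)   # a Hadamard-like rotation
amp = np.array([1, 0], dtype=complex)
out = unitary @ amp
print(np.sum(np.abs(out) ** 2))      # still 1.0
```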
But now, it's high time I tell you a little something about what, actually, this generalized theory of probability is, how it works, and what it tells us about the world we're living in. First, however, I'll tell you a little about the mathematics of waves, the concept of phase, and the phenomenon of interference.

Monday, 5 December 2011

The Emergence of Law



For many scientists, the notion of a lawful, physical universe is a very attractive one -- it implies that in principle, everything is explicable through appeal to notions (more or less) directly accessible to us via scientific investigation. If the universe were not lawful, then it seems that any attempt at explanation would be futile; if it were not (just) physical, then elements necessary to its explanation may lie in a 'supernatural' realm that is not accessible to us by reliable means. Of course, the universe may be physical and lawful, but just too damn complicated for us to explain -- this is a possibility, but it's not something we can really do anything about.
(I have previously given a plausibility argument that if the universe is computable, then it is in principle also understandable, human minds being capable of universal computation at least in the limit; however, the feasibility of this understanding, of undertaking the necessary computations, is an entirely different question. There are arguments one can make that if the universe is computable, one should expect it to be relatively simple, see for instance this paper by Jürgen Schmidhuber, but a detailed discussion would take us too far afield.)
But first, I want to take a moment to address a concern (in my opinion, a misplaced one) that some may have about proposing 'explanations' for the universe, or about the desirability of doing so: isn't such a thing terribly reductionist? Is it desirable to reduce the universe, and moreover human experience within the universe, to some cold scientific theory? Doesn't such an explanation miss everything that makes life worth living?
I have already said some words about the apparent divide between those who want to find an explanation for the world, and those who prefer, for lack of a better word, some mystery and magic to sterile facts, in this previous post. Suffice it to say that I believe both groups' wishes can be granted: the world may be fully explicable, and yet full of mystery. The reason for that is that even if some fundamental law is known, it does not fix all facts about the world, or more appropriately, not all facts can be deduced from it: for any sufficiently complex system, there exist undecidable questions about its evolution. Thus, there will always be novelty, always be mystery, and always be a need for creativity. That an underlying explanation for a system's behaviour is known does not cheapen the phenomena it gives rise to; in particular, the value of human experiences lies in the experiences themselves, not in the question of whether they are generated by some algorithmic rule, or are the result of an irreducible mystery.

Saturday, 19 November 2011

The Origin of the Quantum, Part III: Deviant Logic and Exotic Probability



Classical logic is a system concerned with certain objects that can attain either of two values (usually interpreted as propositions that may be either true or false, commonly denoted 1 or 0 for short), and ways to connect them. Though its origins can be traced back to antiquity, and to the Stoic philosopher Chrysippus in particular, its modern form was essentially introduced by the English mathematician and philosopher George Boole (and is thus also known under the name Boolean algebra) in his 1854 book An Investigation of the Laws of Thought, and intended by him to represent a formalization of how humans carry out mental operations. In order to do so, Boole introduced certain connectives and operations, intended to capture the ways a human mind connects and operates on propositions in the process of reasoning.
An elementary operation is that of negation. As the name implies, it turns a proposition into its negative, i.e. from 'it is raining today' to 'it is not raining today'. If we write 'it is raining today' for short as p, 'it is not raining today' gets represented as ¬p, '¬' thus being the symbol of negation.
Two propositions, p and q, can be connected to form a third, composite proposition r in various ways. The most elementary and intuitive connectives are the logical and, denoted by ∧, and the logical or, denoted ∨.
These are intended to capture the intuitive notions of 'and' and 'or': a composite proposition r, formed by the 'and' (the conjunction) of two propositions p and q, i.e. r = p ∧ q, is true if both of its constituent propositions are true -- i.e. if p is true and q is true. Similarly, a composite proposition s, formed by the 'or' (the disjunction) of two propositions p and q, i.e. s = p ∨ q, is true if at least one of its constituent propositions is true, i.e. if p is true or q is true. So 'it is raining and I am getting wet' is true if it is both true that it is raining and that you are getting wet, while 'I am wearing a brown shirt or I am wearing black pants' is true if I am wearing either a brown shirt or black pants -- but also if I am wearing both! This is a subtle distinction from the way we usually use the word 'or': typically, we understand 'or' to be used in the so-called exclusive sense, where we distinguish between two alternatives, either of which may be true, but not both; the logical 'or', however, is used in the inclusive sense, where a composite proposition is true also if both of its constituent propositions are true.
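If you like, a computer can spell these definitions out as truth tables; the little sketch below (my own illustration, nothing Boole ever wrote) lists ¬p, p ∧ q and p ∨ q for every assignment of truth values to p and q, with the inclusive reading of 'or' visible in the last line.

```python
from itertools import product

# Truth tables for negation, conjunction (and) and disjunction (or).
# Note the final row: p or q is True even when both p and q are True --
# the inclusive reading of 'or'.
print(" p      q      not p  p and q  p or q")
for p, q in product([False, True], repeat=2):
    print(f" {p!s:<6} {q!s:<6} {(not p)!s:<6} {(p and q)!s:<8} {(p or q)!s}")
```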

Saturday, 12 November 2011

Maxwell's Demon, Physical Information, and Hypercomputation



The second law of thermodynamics is one of the cornerstones of physics. Indeed, even among the most well-tested fundamental scientific principles, it enjoys a somewhat special status, prompting Arthur Eddington to write, rather famously, in his 1928 book The Nature of the Physical World:
The Law that entropy always increases—the second law of thermodynamics—holds, I think, the supreme position among the laws of nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations—then so much the worse for Maxwell's equations. If it is found to be contradicted by observation—well these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.
But what, exactly, is the second law? And what about it justifies Eddington's belief that it holds 'the supreme position among the laws of nature'?
In order to answer these questions, we need to re-examine the concept of entropy. Unfortunately, one often encounters, at least in the popular literature, quite muddled accounts of this elementary (and actually, quite simple) notion. Sometimes, one sees entropy equated with disorder; other times, a more technical route is taken, and entropy is described as a measure of how much of a thermodynamic system's energy is unavailable for doing useful work. It is wholly unclear, at least at first, how the one notion is supposed to relate to the other.
I have tackled this issue in some detail in a previous post; nevertheless, it is an important enough concept to briefly go over again.
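As a small, purely illustrative foretaste of that recap (the system of coins below is my own toy example, not taken from the earlier post): in the statistical picture, the entropy of a macrostate counts, logarithmically, the number of microstates compatible with it, which is where the loose talk of 'disorder' ultimately comes from.

```python
from math import comb, log2

# Toy system: N coins, each heads or tails. The number of microstates with
# exactly k heads is C(N, k); the entropy of that macrostate, in bits, is
# the base-2 logarithm of this count.
N = 100
for k in (0, 10, 25, 50):
    print(f"k = {k:3d}  microstates = {comb(N, k):.3e}  entropy = {log2(comb(N, k)):6.2f} bits")
# The evenly mixed macrostate (k = 50) is compatible with by far the most
# microstates, and hence has the highest entropy.
```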

Monday, 31 October 2011

The Origin of the Quantum, Part II: Incomplete Evidence



In the previous post, we have had a first look at the connections between incompleteness, or logical independence -- roughly, the fact that for any mathematical system, there exist propositions that that system can neither prove nor disprove -- and quantumness. In particular, we saw how quantum mechanics emerges if we consider a quantum system as a system only able to answer finitely many questions about its own state; i.e., as a system that contains a finite amount of information. The state of such a system can be mapped to a special, random number, an Ω-number or halting probability, which has the property that any formal system can only derive finitely many bits of its binary expansion; this is a statement of incompleteness, known as Chaitin's incompleteness theorem, equivalent to the more familiar Gödelian version.
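To get at least a schematic feel for what a halting probability is (the three-'program' language below is entirely made up for illustration; a genuine Ω is defined over a universal prefix-free machine and is uncomputable), Ω sums 2^(-|p|) over all halting programs p:

```python
# A toy 'halting probability' in the spirit of Chaitin's Omega: sum 2^-len(p)
# over the halting programs of a tiny prefix-free language. Here the set of
# halting programs is simply stipulated; for a real universal machine, the
# bits of Omega are algorithmically random, and any fixed formal system can
# derive only finitely many of them.
halting_programs = ['0', '10', '110']   # hypothetical prefix-free programs
omega = sum(2.0 ** -len(p) for p in halting_programs)
print(omega)   # 0.5 + 0.25 + 0.125 = 0.875, a crude lower bound on the toy 'Omega'
```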
In this post, we will exhibit this analogy between incompleteness and quantumness in a more concrete way, explicitly showcasing two remarkable results connecting both notions.
The first example is taken from the paper 'Logical Independence and Quantum Randomness' by Tomasz Paterek et al. Discussing the results obtained therein will comprise the greater part of this post.
The second example can be found in the paper 'Measurement-Based Quantum Computation and Undecidable Logic' by M. Van den Nest and H. J. Briegel; the paper is very interesting and deep, but unfortunately, somewhat more abstract, so I will content myself with just presenting the result, without attempting to explain it very much in-depth.

Tuesday, 11 October 2011

The Origin of the Quantum, Part I: An Incomplete Phase Space Picture


In the last post, we have familiarized ourselves with some basic notions of algorithmic information theory. Most notably, we have seen how randomness emerges when formal systems or computers are pushed to the edges of incompleteness and uncomputability.
In this post, we'll take a look at what happens if we apply these results to the idea that, like computers or formal systems, the physical world is just another example of a universal system -- i.e. a system in which universal computation can be implemented (at least in the limit).
First, recall the idea that information enters the description of the physical world through viewing it as a question-answering process: any physical object can be uniquely identified by the properties it has (and those it doesn't have); any two physical objects that have all the same, and only the same, properties are indistinguishable, and thus identified. We can thus imagine any object as being described by the string of bits giving the answers to the set of questions 'Does the object have property x?' for all properties x; note that absent an enumeration of all possible properties an object may have, this is a rather ill-defined set, but it'll serve as a conceptual guide.
In particular, this means that we can view any 'large' object as being composed of a certain number of 'microscopic', elementary objects, which are those systems that are completely described by the presence or absence of one single property, that may be in either of two states -- having or not having that particular property. Such a system might, for instance, be a ball that may be either red or green, or, perhaps more to the point, either red or not-red. These are the systems that can be used to represent exactly one bit of information, say red = 1, not-red = 0. Call such a system a two-level system or, for short, and putting up with a little ontological inaccuracy, simply a bit.

Sunday, 18 September 2011

Information through the Lens of Computation: Algorithmic Information Theory



Up to now, our discussions about information and computation have been rather informal -- I have neglected to give detailed mathematical introductions to concepts like Shannon entropy, Turing universality, codes etc., for the simple reason that for our purposes, a qualitative understanding has been fully sufficient.
However, if we want to make any quantitative statements about the notions of information and computation, it will benefit us to take another look at the subject in a slightly more formal context.
To this end, the most efficient framework seems to me to be that of algorithmic information theory, introduced and developed mainly by the mathematicians Ray Solomonoff, Andrey Kolmogorov, and Gregory Chaitin in the 1960s. According to Chaitin, AIT is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously"; as such, it will aid us in achieving a better understanding, in a quantitative way, of both subjects.
In a way, AIT is concerned with a concrete realisation of the somewhat abstract-seeming notion of information or information content within the realm of computation. To this end, its central concept is the so-called Kolmogorov complexity, which, loosely speaking, provides a measure of the information content of an object -- say, a string of binary digits -- by considering the shortest program, on a given computer, that outputs this object. This might seem at first like a rather arbitrary definition: different computers generally will have different programs, of different lengths, that generate the string, leading to different complexities for one and the same object. But it turns out that despite this ambiguity, Kolmogorov complexity is useful independently of the concrete computational implementation it is based on: by the invariance theorem, the complexities assigned by any two universal computers differ by at most an additive constant that does not depend on the string under consideration.
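A crude but instructive stand-in is an off-the-shelf compressor: the compressed size of a string is an upper bound on the length of one particular description of it (only an upper bound, and this sketch is my own illustration rather than anything from the AIT literature). A highly patterned string compresses to almost nothing, while a random string of the same length barely compresses at all.

```python
import os
import zlib

# zlib as a poor man's stand-in for Kolmogorov complexity: the compressed
# length upper-bounds the length of one particular description of the string.
regular = b'01' * 5000        # 10,000 bytes of pure pattern
random_ = os.urandom(10000)   # 10,000 bytes that are almost surely incompressible
for name, s in [('patterned', regular), ('random', random_)]:
    print(f"{name:9s}  raw = {len(s)}  compressed = {len(zlib.compress(s, 9))}")
```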

Saturday, 3 September 2011

The Universal Universe, Part IV: Computational Complexity



Let me start this post on a personal note and apologize for the unannounced break -- I was on vacation in Norway, and, due to poor planning on my part, had an exam right afterwards, which kept me from attending to the blog as I planned.
It all turned out for the best in the end, though, since during my stay in Norway, I stumbled across this great essay by Scott Aaronson (discussed on his blog here), which alerted me to something I hadn't paid a lot of attention to, but should have -- the field of computational complexity, and especially its implications for philosophy. Having now had some time to digest the contents, I think there are some important issues for me to think, and write, about.
Of course, I was aware of the field of computational complexity and its basic notions prior to reading that essay, but thought of it as in some way concerning 'merely' problems of practicality -- making the classic philosopher's error (not that I'm a philosopher, but this apparently doesn't keep me from committing their errors) of being content with showing something possible 'in principle', while dismissing the issues involved in making it possible 'in practice' as mere technicalities.
Aaronson, now, has taken up the challenge of breaking this unfortunate habit, and does so with great clarity, showing that often enough, thinking merely about what is possible in principle, or, in more appropriate terms, what is computable, without thinking about the difficulty involved in actually undertaking that computation, misses most of the good stuff. One particularly interesting argument is his resolution of the so-called 'waterfall problem', which essentially posits that any physical process can be interpreted as implementing any computation, thus making the proposal that sentience can come about merely through computation somewhat suspect -- apparently forcing us to conclude that if there is a computation that gives rise to sentience, every (or nearly every) physical system can be viewed as implementing that computation, and hence, giving rise to sentience.

Tuesday, 26 July 2011

Consciousness Explained Anyway



Today, we are going on a slight diversion from the course of this blog so far, in order for me to write down some thoughts on the nature of human consciousness that have been rattling around in my head.
In a way, this is very apropos to the overarching theme of computationalism (which I personally take to be the stance that all of reality can be explained in computable terms, a subset of physicalism) that has pervaded the posts so far (and will continue to do so), because the idea that consciousness can't be reduced to 'mere computation' is often central to supposed rebuttals.
In another way, though, consciousness is far too high-level a property to properly concern ourselves with right now; nevertheless, I wanted to write these things down, in part just to clear my head.
My thoughts on consciousness basically echo those of the American philosopher Daniel Dennett, as laid out in his seminal work Consciousness Explained. However, while what Dennett laid out should perhaps most appropriately be called a theory of mental content (called the Multiple Drafts Model), I will in this (comparatively...) short posting merely attempt to answer one question, which, however, seems to me the defining one: How does subjective experience arise from non-subjective fundamental processes (neuron firings, etc.)? How can the impression of having a point of view -- of being something, someone with a point of view -- come about?

Sunday, 17 July 2011

The Universal Universe, Part III: An Answer to Wigner



Eugene Wigner was a Hungarian-American physicist and mathematician who played a pivotal part in recognizing and cementing the role of symmetries in quantum physics. However, this is not the role in which we meet him today.
Rather, I want to talk about an essay he wrote, probably his most well-known and influential work outside of more technical physics publications. The essay bears the title The Unreasonable Effectiveness of Mathematics in the Natural Sciences [1], and it is a brilliant musing on the way we come to understand, model, and even predict the behaviour of physical systems using the language of mathematics, and the fundamental mystery that lies in the (apparently) singular appropriateness of that language.
Wigner's wonder is two-pronged: one, mathematics developed in one particular context often turns out to have applications in conceptually far removed areas -- he provides the example of π (or τ if you're hip), the ratio of a circle's circumference to its diameter, popping up in unexpected places, such as a statistical analysis of population trends, which seems to have little to do with circles; two, given that there is this odd 'popping up' of concepts originally alien to a certain context in the theory supposedly explaining that very context, how can we know that there is not some other, equally valid (i.e. equally powerful as an explanation) theory, making use of completely different concepts?
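Wigner's π example is easy to replay numerically. Here is a minimal sketch, assuming only that the quantity in question is roughly normally distributed (the 'population' below is simulated, not real data): for a normal distribution the mean absolute deviation equals σ·√(2/π), so π can be recovered from purely statistical summary quantities, with no circle anywhere in sight.

```python
import numpy as np

# For a normal distribution, E|X - mu| = sigma * sqrt(2/pi), so
# 2 * (sigma / mean_abs_dev)^2 should come out close to pi.
rng = np.random.default_rng(0)
population = rng.normal(loc=170.0, scale=10.0, size=1_000_000)  # simulated 'heights'
mad = np.mean(np.abs(population - population.mean()))
sigma = population.std()
print(2 * (sigma / mad) ** 2)   # ~3.14
```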

Tuesday, 12 July 2011

The Universal Universe, Part II: ...but is it?



I have ended the previous post with the encouraging observation that if the universe is computable, then it should be in principle possible for human minds to understand it -- the reasoning essentially being that each universal system can emulate any other. But the question now presents itself: is the universe actually computable?
At first sight, there does not seem to be any necessity for it to be -- after all, computation and computational universality may be nothing but human-derived concepts, without importance for the universe as it 'really is'. However, we know that it must be at least computationally universal, as universal systems can indeed be built (you're sitting in front of one right now) -- the universe can 'emulate' universal systems, and thus must be universal itself (here, I am using the term universal in the somewhat loose sense of 'able to perform every calculation that can be performed by a universal Turing machine, if given access to unlimited resources'). Thus, the only alternative would be that the universe might be more than universal, i.e. that the notion of computation does not suffice to exhaust its phenomenology.
And indeed, it is probably the more widespread notion at present that the universe contains entities that do not fall within the realm of the computable. The discussion is sometimes framed (a little naively, in my opinion), as the FQXi did recently in its annual essay contest, in the form of the question: "Is reality digital or analog?"

Friday, 8 July 2011

The Universal Universe, Part I: Turing machines



In the mid-1930s, English mathematician Alan Turing concerned himself with the question: Can mathematical reasoning be subsumed by a mechanical process? In other words, is it possible to build a device which, if any human mathematician can carry out some computation, can carry out that computation as well?
To this end, he proposed the concept of the automatic or a-machine, now known more widely as the Turing machine. A Turing machine is an abstract device consisting of an infinite tape partitioned into distinct cells, and a read/write head. On the tape, certain symbols may be stored, which the head may read, erase, or write. The symbol currently under the head is called the scanned symbol; it determines (at least partially) the machine's behaviour. The tape can be moved back and forth through the machine.
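To make this a little less abstract, here is a minimal sketch of such a machine in code (the particular transition table is a toy example of my own, not one of Turing's): the current state and the scanned symbol index into a table that says what to write, which way to move, and which state to enter next; this particular table adds 1 to a binary number written on the tape.

```python
def run(tape, transitions, state='start', blank='_', steps=10_000):
    """Run a one-tape Turing machine for at most `steps` steps."""
    cells = dict(enumerate(tape))        # tape as a sparse map: position -> symbol
    head = len(tape) - 1                 # start on the rightmost (least significant) digit
    for _ in range(steps):
        if state == 'halt':
            break
        scanned = cells.get(head, blank)             # the scanned symbol drives everything
        state, write, move = transitions[(state, scanned)]
        cells[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(cells[i] for i in sorted(cells)).strip(blank)

# Transition table for binary increment: flip trailing 1s to 0s while moving
# left, then turn the first 0 (or a blank past the left end) into a 1.
increment = {
    ('start', '1'): ('start', '0', 'L'),
    ('start', '0'): ('halt',  '1', 'L'),
    ('start', '_'): ('halt',  '1', 'L'),
}
print(run('1011', increment))   # prints 1100, i.e. 11 + 1 = 12
```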
It is at first not obvious that any interesting mathematics at all can be carried out by such a simplistic device. However, it can be shown that at least all mathematics that can be carried out using symbol manipulation can be carried out by a Turing machine. Here, by 'symbol manipulation' I mean roughly the following: a mathematical problem is presented as some string of symbols, like (a + b)². Now, one can invoke certain rules to act on these symbols, transforming the string into a new one; it is important to realize that the meaning of the symbols does not play any role at all.
One rule, invoked by the superscript 2, might be that anything that is 'under' it can be rewritten as follows: x² = x·x, where the symbol '=' just means 'can be replaced by'; another rule says that anything within brackets is to be regarded as a single entity. This allows us to rewrite the original string in the form (a + b)·(a + b), proving the identity (a + b)² = (a + b)·(a + b).
You can see where this goes: a new rule pertaining to products of brackets -- or, on the symbol level, to things written within '(' and ')', separated by '·' -- comes into effect, allowing a rewrite to a·a + b·a + a·b + b·b; then rules saying that 'x·y = y·x' and 'x + x = 2·x', together with the first rule (x² = x·x) applied in reverse, allow us to rewrite this as a² + 2·a·b + b², finally proving the identity (a + b)² = a² + 2·a·b + b², known far and wide as the first binomial formula.
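This kind of meaning-blind rule application is exactly what a computer algebra system automates; as a quick check (using sympy here merely as a convenient stand-in for the hand-written rules above):

```python
from sympy import symbols, expand

# Mechanical rewriting of the symbol string (a + b)**2, with no appeal to
# what a and b 'mean'.
a, b = symbols('a b')
print(expand((a + b)**2))   # a**2 + 2*a*b + b**2
```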

Monday, 4 July 2011

A Difference to Make a Difference, Part II: Information and Physics



The way I have introduced it, information is carried by distinguishing properties, i.e. properties that enable you to tell one thing from another. Thus, whenever you have two things you can tell apart by one characteristic, you can use this difference to represent one bit of information. Consequently, objects different in more than one way can be used to represent correspondingly more information. Think spheres that can be red, blue, green, big, small, smooth, coarse, heavy, light, and so on. One can in this way define a set of properties for any given object, the complete list of which determines the object uniquely. And similarly to how messages can be viewed as a question-answering game (see the previous post), this list of properties, and hence an object's identity, can be viewed that way, too. Again, think of the game 'twenty questions'.
Consider drawing up a list of possible properties an object can have, and marking each with 1 or 0 -- yes or no -- depending on whether or not the object actually has it. This defines the two sides of a code -- on one side, a set of properties, the characterisation of an object; on the other side, a bit string representing the information this object contains. (I should point out, however, that in principle a bit string is not any more related to the abstract notion of information than the list of properties is; in other words, it's wrong to think of something like '11001001' as 'being' information -- rather, it represents information, and since one side of a code represents the other, so does the list of properties, or any entry on it.)
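Here is a minimal sketch of such a code (the list of possible properties and the example object are made up purely for illustration): once the list of questions is fixed, translating between the two sides is trivial in either direction.

```python
# One side of the code: an agreed-upon list of possible properties.
PROPERTIES = ['red', 'big', 'smooth', 'heavy']

def encode(has):
    """From the set of properties an object has to the bit string representing it."""
    return ''.join('1' if p in has else '0' for p in PROPERTIES)

def decode(bits):
    """From a bit string back to the set of properties it stands for."""
    return {p for p, bit in zip(PROPERTIES, bits) if bit == '1'}

bits = encode({'red', 'heavy'})
print(bits)           # 1001
print(decode(bits))   # {'red', 'heavy'} (set order may vary)
```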

Friday, 1 July 2011

A Difference to Make a Difference, Part I: Introducing Information



Picture a world in which there are only two things, and they're both identical -- let's say two uniform spheres of the same size and color, with no other distinguishable properties.
Now, ask yourself: How do you know there are two of them? (Apart from me telling you there are, that is.)
Most people will probably answer that they can just count the spheres, or perhaps that there's one 'over there', while the other's 'right here' -- but that already depends on the introduction of extra structure, something that allows you to say: "This is sphere number 1, while that is sphere number 2". Spatial separation, or the notion of position, is such extra structure: each sphere, in addition to being of some size and color, now also has a definite position -- a new property. But we said previously that the spheres don't have any properties beyond size and color. So, obeying this restriction, can you tell how many spheres there are?
The answer is, somewhat surprisingly, that you can't. In fact, you can't even distinguish between universes in which there is only one sphere, two identical ones, three identical ones etc. There is no fact of the matter differentiating between the cases where there are one, two, three, etc. spheres -- all identical spheres are thus essentially one and the same sphere.
This is what Leibniz (him again!) calls the identity of indiscernibles: whenever two objects share all the same properties, they are in fact the same object.
Now consider the same two-sphere universe, but one sphere has been painted black. Suddenly, the task of determining how many spheres there are becomes trivial! There's two: the one that's been painted black, and the one that hasn't. But how has this simple trick upgraded the solution of this problem from impossible to child's play?
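For what it's worth, the same point can be made with a toy model in code (the property tuples below are of course made up): if an object is nothing over and above its list of properties, then a collection that identifies indiscernibles holds only one sphere, however many 'copies' we try to put in; painting one black immediately yields two.

```python
# An object modelled as nothing but its tuple of properties.
sphere = ('uniform', 'one size', 'one color')

universe = {sphere, sphere}    # 'two' indiscernible spheres...
print(len(universe))           # ...collapse to 1: no fact tells them apart

universe = {sphere + ('unpainted',), sphere + ('painted black',)}
print(len(universe))           # 2 -- one extra distinguishing property suffices
```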

Sunday, 26 June 2011

Leibniz' Dream

Gottfried Wilhelm Leibniz, who holds the distinction of being the only philosopher a biscuit was named after, once expressed the hope that every philosophical dispute might be settled by calculation:
"The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate [calculemus], without further ado, to see who is right." [1]
To this end, he proposed two concepts, which were to be employed in this calculation: the characteristica universalis, which was to be a kind of conceptual language, able to symbolically represent concepts of mathematics, science, and metaphysics alike, consisting of what he called real characters, capable of directly corresponding to an idea, embodying it in the way a number embodies the idea of quantity, as opposed to merely referring to it, as words do; and the calculus ratiocinator, a system or device used to perform logical deductions within the framework set by the characteristica.
It is not precisely clear whether Leibniz intended for the ratiocinator to be an actual machine -- after all, Leibniz was one of the pioneers of mechanical calculation machines with the construction of the Stepped Reckoner -- or merely an abstract calculus, a forerunner to modern symbolic logic -- whether it was software or hardware, so to speak.