Fact and Fiction

Thoughts about a funny old world, and what is real, and what is not. Comments are welcome, but please keep them on topic.

Sunday, October 30, 2005

What do you care what other people think?

In this week's New Scientist there is a Creativity special, Looking for inspiration, which discusses the issue of how creativity emerges in the human brain, and why some individuals have so much more of it than others.

The last word was given to various luminaries. One comment caught my eye because it was so close to my own viewpoint:

Lee Smolin (theoretical physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario): "The main ingredients in science are intensive immersion in a problem, fanatical desire to solve it (big problems are rarely solved by accident), familiarity with previous attempts leading to an original critique of where they went wrong, reckless disregard for what other experts think, and the courage to overcome your own doubts and hesitations, which are much scarier than anything anyone else can say because you know best how vulnerable your new idea is."

I think the most important point made above is to have a "reckless disregard for what other experts think". Too much respect for other people's ideas causes you to do their research for them, rather than doing your own research for yourself. You must follow your own nose, but remember to be honest with yourself so you don't fool yourself into thinking that things are going well when they are not.

Lastly, my apologies to Richard Feynman for stealing his book title for this posting.

Quantum computers can't be backed up

In this week's New Scientist there is an article entitled Attack of the quantum worms in which the problem of defending a quantum computer against malicious software attack is discussed. Even the leading quantum computer theorist David Deutsch says that he hadn't anticipated this problem. Frankly, I am amazed that he hadn't foreseen this possibility; maybe he has never suffered an attack on his computer.

One of the key parts of your defence strategy is backing up your software, so that if something gets corrupted by an attack then it can be repaired afterwards. This is where quantum mechanics is not very helpful to you, because it is fundamentally impossible to make an independent copy of a QM state, so you can't do a safe backup. This is such an important property of QM that it has been elevated to the status of being called the No-cloning theorem.

This sounds crazy! How is it possible that QM should prevent you from making backups?

I have already discussed in an earlier posting Quantum mechanics is not weird the reason why many people think that QM seems to be crazy. It all boils down to people insisting that the everyday intuition that they have built up through exposure to the world through their senses will also work in situations where their senses are blind. One such example is QM, which exercises its effects in places that we don't directly see with our senses.

This problem of everyday intuition being inappropriately applied also gets in the way of understanding why QM prevents backups from being made. It is tempting to imagine that you can just grab the data and make a copy of it to keep somewhere else. The problem here is that in QM the implementation of the words "data", "grab", and "copy" have to be defined precisely. I already did something like this in my earlier posting Spooky action at a distance?, but the situation is much simpler here.

Suppose that a quantum computer contains only one particle (1 qubit), which is represented as A↑ (spin pointing up) or A↓ (spin pointing down). The power of a quantum computer comes from the fact that its state can simultaneously hold A↑ and A↓, so it can do truly parallel computations. In the intuitively comprehensible classical (i.e. non-quantum) computer these states are mutually exclusive possibilities, so classical computers can do only one computation at a time. It is this reality of doing parallel computations in quantum computers that gives them their enormous (a factor of 2^N for an N-particle quantum computer) speed advantage over classical computers.
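To make the 2^N bookkeeping concrete, here is a minimal numpy sketch (the amplitudes a and b are hypothetical, chosen here as an equal superposition): the state of an N-qubit register is the tensor product of the single-qubit states, so it carries 2^N complex amplitudes.

```python
import numpy as np

# A single qubit's state is a 2-component complex vector: a*|up> + b*|down>.
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)  # hypothetical amplitudes (equal superposition)
psi = a * up + b * down

# An N-qubit register is the Kronecker (tensor) product of single-qubit
# states, so its state vector has 2**N amplitudes -- the 2^N scaling above.
N = 10
register = psi
for _ in range(N - 1):
    register = np.kron(register, psi)

assert register.shape == (2 ** N,)                     # 1024 amplitudes
assert abs(np.linalg.norm(register) - 1.0) < 1e-9     # still normalised
```

The sketch only shows the bookkeeping, of course; the speed advantage comes from a quantum computer holding and processing all 2^N amplitudes physically at once, whereas the numpy array above is processed one number at a time.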

Assume that the backup store is also a single particle, which is part of a QM backup system that is denoted as U. The spin-up and spin-down states of the backup particle in U will be denoted as U↑ and U↓, respectively.

  1. The initial state of the quantum computer and backup store is then U (a A↑ + b A↓), where a and b are the amplitudes of the two possible states of the quantum computer.
  2. The state of the quantum computer and backup store after an attempt has been made to do a backup is then a U↑ A↑ + b U↓ A↓. Arrows have now been attached to U because interactions have occurred between A and U that cause the backup particle in U to become correlated with the state of the quantum computer.
OK, so that's it. We have apparently done a backup; A↑ has been copied as U↑, and A↓ has been copied as U↓. However, U is not a safe backup of A, because in QM the A and U particles are still connected to each other. A malicious software attack on A will thus propagate to U, and will thus be an attack on both A and U.
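The two steps above can be sketched numerically. This is a toy model, not real quantum-computer code: the "backup" interaction is modelled here as a CNOT gate (a standard way of correlating two qubits), and the amplitudes a and b are hypothetical. The Schmidt rank of the resulting state shows that A and U have become entangled rather than independent copies:

```python
import numpy as np

a, b = 0.6, 0.8            # hypothetical amplitudes with a**2 + b**2 = 1
A = np.array([a, b])       # computer qubit: a*|up> + b*|down>
U0 = np.array([1.0, 0.0])  # backup qubit starts in |up>

# Joint state |A U> via the Kronecker product (basis |00>,|01>,|10>,|11>),
# then a CNOT (control A, target U) that tries to "copy" A into U.
joint = np.kron(A, U0)
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
after = cnot @ joint       # = a*|up,up> + b*|down,down>

# A genuinely independent backup would be a product state. Entanglement
# shows up as Schmidt rank > 1 (two non-zero singular values).
schmidt = np.linalg.svd(after.reshape(2, 2), compute_uv=False)
rank = int(np.sum(schmidt > 1e-12))
assert rank == 2           # A and U are entangled, not independent copies
```

The point of the Schmidt-rank check is that for any non-trivial a and b (both non-zero) the state a U↑ A↑ + b U↓ A↓ cannot be factored into (state of A) × (state of U), which is exactly the sense in which U fails to be a safe, independent backup.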

This is exactly the same effect that appeared in Spooky action at a distance?, where separating the particles does not destroy the connection between them. That means that if you start with a U↑ A↑ + b U↓ A↓, and then you separate the A and U particles you get something that can be represented as a U↑ ••• A↑ + b U↓ ••• A↓, where the ••• remind us of the fact that the QM connection between the particles is unchanged by separating them. This means that A and U behave as if they were the same particle, which was what Einstein called "Spooky action at a distance".

It is this sameness that destroys any pretence that U can be a safe backup of A, because effectively U is A, rather than U is a copy of A.

So the No-cloning theorem prevents backups from being made in a quantum computer. The only defence against attacks by malicious software is to ensure that the connection between the quantum computer and the outside world is switched on for only a negligible fraction of the time, and a further countermeasure is to choose the on-times randomly. This slows down the communication between the quantum computer and the outside world by a fixed fraction, but it does not affect the internal speed of quantum computation.

If intelligent design is science then so is astrology

I have posted before on End of the Enlightenment, and I can't resist returning to the theme. In this week's New Scientist editorial the focus is on the court case on whether Intelligent Design pseudo-science should be taught alongside evolution science in the classroom. The leading article God goes to court in all but name contains a gem on how science is defined, which I quote (who is on which side is clear from the context):

The packed courtroom came alive for Behe's cross-examination. Eric Rothschild, an attorney for the plaintiffs, sparked a heated debate about the definition of a scientific theory. The National Academy of Sciences says it is, "a well-substantiated explanation of some aspect of the natural world that can incorporate facts, laws, inferences, and tested hypotheses".

In court Behe accepted that ID fails to pass muster, but argued that in practice scientists use the word more broadly. He offered an alternative: "A scientific theory is a proposed explanation which points to physical data and logical inferences."

Rothschild saw his opportunity to move in for the kill. "But you are clear, under your definition, the definition that sweeps in intelligent design, astrology is also a scientific theory, correct?"

"Yes, that's correct," Behe said, as the court erupted in laughter.

"You've got to admire the guy," comments Robert Slade, a local retiree and science enthusiast. "It's Daniel in the lions' den."

As I read it, the distinction that is being drawn here is between the following two different definitions of "science":
  1. NAS definition: "well-substantiated explanation".
  2. ID definition: "proposed explanation".

The key point about the NAS definition is that your explanation must be backed up by experimental observations, whereas this isn't the case with the ID definition.

This distinction was discussed at length in the excellent book The Fellowship by John Gribbin, which describes the dawn of western science, when "natural philosophy" (i.e. explanations unsupported by experiments) was replaced by "natural science" (i.e. explanations supported by experiments).

Thursday, October 27, 2005

Quantum mechanics is not weird

In my two previous postings State vector collapse? and Spooky action at a distance? I have talked (ranted?) at some length about commonplace misunderstandings of quantum mechanics. I find the awe in which QM is held to be quite annoying. It is described using words like "spooky" or "weird" or "mysterious", which are used by journalists and scientists alike. Remember the parrot cartoon? Yes, this is another example that is very aptly described by that cartoon.

QM has been around since 1925. How long does it have to be around before people accept it as is? Why should anything about the universe be called "weird"? The only explanation for this behaviour is that we start with a prior prejudice that those phenomena that are directly accessible to us via our senses are representative of all phenomena in the universe. When we unearth something that is not directly accessible to our senses, we therefore register surprise if it does not fit into our "standard" intuitive understanding that serves us so well for those phenomena that are directly accessible to our senses.

One of the benefits of a scientific education is that it extends one's standard intuition into areas that were not previously accessible. QM is just one example where one's intuition needs to be built up from almost no prior intuitive understanding of QM. At first, QM will seem weird because it behaves in ways that are very different from standard intuition. But once one has understood that one's standard intuition must be limited in scope, it is easy to open one's mind up to the novel features of QM, and thus to achieve an "extended" (i.e. "standard" plus the extra bits needed to incorporate QM) intuition.

Of course, there are also people who love things that don't fit into their standard intuition, because they are then things to be worshipped rather than to be understood. QM is a perfect candidate for these people, because they see QM as having a mystical flavour that eludes direct comprehension. Certainly QM eludes standard intuition, but that doesn't mean that QM is mystical.

So, QM is not weird, provided that you are humble enough to acknowledge that the standard intuition (e.g. common sense) that you develop using your standard senses (e.g. eyes, ears, etc) is necessarily limited to the sorts of phenomena that they can sense. You need to use extended sensing apparatus (e.g. laboratory apparatus) in order to build an extended intuition (e.g. feeling for QM).

Sunday, October 23, 2005

The artilect war

I posted before on Human life: The next generation by Ray Kurzweil, which suggests that the rate of technological advance is such that it won't be long before we significantly upgrade humans to a better model. I disagree with that prediction, or at least I think that the time scale for things of that sort to happen will be quite long (i.e. many human lifetimes, at least).

In this week's New Scientist there is a letter Cosy Kurzweil whose author Hugo de Garis paints a far less rosy picture of our technological future than Kurzweil. de Garis has written a book The Artilect War (precis), about ARTIficial intelLECTs that have massive intellectual powers, which could reasonably be expected to be created (or create themselves from earlier prototypes?) sometime during the 21st century. [Update on 30 December 2008: The links in this paragraph are now dead. A PDF of The Artilect War is here. Hugo de Garis has a web page here.]

I strongly urge you to read The Artilect War. I find myself in a similar quandary to de Garis, who is working on building artificial brains yet is worried about the possible consequences of his research.

One course of action that would be incredibly naive would be to unilaterally abandon research in the area of artificial brains. That would be as stupid as unilaterally abandoning defence research, where your vulnerability to the hostile actions of others would soon be your undoing. Similarly, in artificial brain research, you at least have to understand what the potentialities of the technology are in order to protect yourself against hostile actions using such technology against you. The only moral course of action is to continue the research.

Here is a worst case, and a best case:

  1. Worst case: Building artilects may turn out to be just another step in evolution, and that ultimately we (i.e. humans) would then be "viewed" (or whatever artilects do when they are "thinking") as just a stepping stone along their prior evolutionary path. In this scenario, the human species (as we know it) does not ultimately survive the appearance of artilects. Naturally, being humans, we don't exactly relish this prospect.
  2. Best case: An artilect would be an artificial brain-like prosthesis that would greatly enhance the abilities of its human "wearer". To use a present-day analogy, imagine what it would be like to have direct brain access to the internet, rather than having to type with your fingers at a keyboard and receive results through your eyes. Assuming the interface was designed so that you could use it efficiently just by thinking, you would be phenomenally knowledgeable. Now imagine upgrading this direct brain-internet access to include the ability to do massively more intelligent thinking (let's call it a "brain graft"), to add to the massively greater amount of information that you already had from your brain-internet access. What if you were able to program this "brain graft" just by thinking about it, so that you could offload some of the more tedious things that you now do laboriously with your existing biological brain (e.g. a trivial example would be mental arithmetic)? The possibilities of what you can do with a "brain graft" are endless.

Naturally, my vision is for a future that is something like the second case above, provided that the technology is used sympathetically. None of us wants to be like the Borg.

However, like de Garis, I am not optimistic about how different groups of people, using different levels of artilect technology, would smoothly interact with each other. This will be a big problem, which is discussed by de Garis as the "Terrans" versus "Cosmists" issue in The Artilect War. For instance, if they wished, groups of people could opt to not use this technology, much as some people currently opt to live low-technology lives in tipis, but the tipi dwellers don't always have a smooth time with their technology-using neighbours. This type of problem could be exacerbated by many orders of magnitude by an artilect-based technology.

What do we do to get from where we now are to the vision of the good future described above? The only sensible course of action is to continue research in the area of artificial brains, and to ensure that whatever technology is created is integrated sympathetically into our human framework. We have to always be on the look-out for potential instabilities, where small groups of people can create dangerous versions of the technology, and to protect ourselves against this. A contemporary example of this problem (on a trivial scale compared to artilect technology) would be the fight against writers of assorted malware (e.g. software viruses, etc). The "arms escalation" that exists between the "good" and the "bad" guys ends up making the good guys much stronger, provided that they recognise early on that they are in a fight for survival.

Spooky action at a distance?

Einstein said of quantum mechanics that it involved a "spooky action at a distance". He wrote a scientific paper with two colleagues (Podolsky and Rosen) on what has become known as the EPR paradox. E, P & R genuinely believed that they had discovered something paradoxical in QM (that's why they wrote the paper), and that therefore QM had to be wrong. What they had actually done (although they didn't realise it) was to show that the universe behaves in stranger ways than they were prepared to believe.

What EPR had stumbled on was one of the consequences of what we now call "quantum entanglement". This entanglement is an obvious consequence of QM, assuming you have an Everett-like interpretation of QM, which I discussed in my earlier posting State vector collapse?.

So, why does EPR annoy me? It is because EPR has become a wrong part of QM folklore. Some people think that EPR were right, and not that they were wrong. This manifests itself in various ways, one of which is that people believe that QM somehow allows faster-than-light (or even instantaneous) communication.

This is complete rubbish. Let me tell you why. This description is quite long and detailed, but it has a very simple logic.

I will describe the basics of EPR from the correct point of view, rather than the incorrect point of view that EPR themselves used in their EPR paradox paper. I want to do it this way because I see no point in perpetuating a misunderstanding by presenting the wrong argument first. This means that I change lots of details in order to tell the story the way I want to. Note that I am not going to discuss technical issues relating to particle statistics, because they don't affect the basic "quantum entanglement" result.

Here is the correct version of EPR:

  1. Create a pair of identical particles (call them A and B) in such a way that their spins in the up/down direction point in opposite directions. This physical state is represented as A↑ B↓ + A↓ B↑, where the spin-arrows ↑ and ↓ are used to denote the direction of spin. Because the particles are identical, both ways of assigning spin to the particles (i.e. A↑ B↓ and A↓ B↑) are equally valid, and both possibilities actually and simultaneously occur in practice, so the real physical situation is correctly represented as the sum A↑ B↓ + A↓ B↑, rather than only one or other of the pieces A↑ B↓ and A↓ B↑.
  2. Pull the particles apart until they are separated by an enormous distance, but make sure that you don't mess up their spin directions whilst separating them. You could represent this physical state as A↑ ••• B↓ + A↓ ••• B↑, where the ••• indicate the physical separation between A and B.
  3. Introduce two observers U and V who are tasked with observing A and observing B, respectively. The real physical situation is now represented as U (A↑ ••• B↓ + A↓ ••• B↑) V, where I have placed U at the left and V at the right to indicate where they are located (i.e. near to A and near to B, respectively).
  4. The two observers U and V now observe A and B to see what their spins are. The word "observe" here means that an observer interacts with a particle, in such a way that the state of their brain becomes correlated with the state of the particle (this will become clearer below). There are two possible outcomes of this experiment. The brains of U and V become correlated with A↑ ••• B↓ to create the state U↑ A↑ ••• B↓ V↓, or become correlated with A↓ ••• B↑ to create the state U↓ A↓ ••• B↑ V↑ (a spin-arrow ↑ or ↓ written next to U or V means that the observer's brain has observed the corresponding spin). The real physical situation is the sum of these two, which is U↑ A↑ ••• B↓ V↓ + U↓ A↓ ••• B↑ V↑.
  5. The net effect of the observation above is to transform the state from U (A↑ ••• B↓ + A↓ ••• B↑) V to U↑ A↑ ••• B↓ V↓ + U↓ A↓ ••• B↑ V↑. The process that leads to this transformation is defined in detail by the dynamical equations of QM. Any other conjectured transformation must bring in assumptions from outside the dynamical equations of QM.
  6. These results show that either (U observes A↑ and V observes B↓) or (U observes A↓ and V observes B↑), which means that what U observes and what V observes are deterministically associated with each other. Even though the particles are separated by an enormous distance when they are observed, they nevertheless produce observations in which A↑ is associated with B↓, and A↓ is associated with B↑.
  7. This is the bit that Einstein said was "spooky action at a distance" because he maintained a distinction between the particles being observed, and the observers themselves. He would not accept that the observers were also a part of the whole QM state, so he never accepted that U↑ A↑ ••• B↓ V↓ + U↓ A↓ ••• B↑ V↑ described a real physical situation. His view was (in a QM style of notation) that the real physical situation was described by (U↑ A↑ or U↓ A↓) and (B↑ V↑ or B↓ V↓), where (X or Y) allows only one of X or Y to occur (this is actually an exclusive-or), and (X and Y) requires that both of X and Y occur. This prescription (i.e. figment of Einstein's imagination, if you want) is an example of a conjecture that is brought in from outside QM, as described in step 5 above.
  8. Thus Einstein thought that the outcome of observing A was a random result that was either A↑ or A↓, and similarly the outcome of observing B was an independent random result that was either B↑ or B↓. He therefore thought that there was no reason why the results for A and B should be correlated with each other, provided that A and B were so far apart that there was no possibility of some other means of communication between them that might cause the results of the observations to be correlated.

In summary, we have an advantage over Einstein, because we know that after the observations have been made the (correct) real physical situation is actually described by U↑ A↑ ••• B↓ V↓ + U↓ A↓ ••• B↑ V↑, whereas Einstein simply refused to believe that this was what reality was doing, and insisted that the (incorrect) real physical situation was described by (U↑ A↑ or U↓ A↓) and (B↑ V↑ or B↓ V↓). The correct description of reality makes it obvious that the QM associations were set up when the particles were originally close together, and were then preserved as the particles were pulled apart. The incorrect description of reality has been plucked from thin air, based on a prior prejudice about how the universe works, rather than being derived scientifically from QM. No wonder Einstein wrongly thought that QM was paradoxical.
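The correlation in step 6 can be checked with a toy simulation. This is a sketch under stated assumptions: each run of the experiment samples one joint outcome from the probabilities given by the state A↑ B↓ + A↓ B↑, which is what the correlated records of U and V look like from the inside; the sample size and random seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# State (A up, B down) + (A down, B up) with equal amplitudes, normalised.
# Basis ordering |A B>: |00>,|01>,|10>,|11>, with up = 0 and down = 1.
state = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
probs = np.abs(state) ** 2   # = [0, 0.5, 0.5, 0]

# Each "run" of the experiment yields one joint outcome for U and V,
# drawn with these probabilities.
outcomes = rng.choice(4, size=1000, p=probs)
a_bits = outcomes // 2       # what U observes at A (0 = up, 1 = down)
b_bits = outcomes % 2        # what V observes at B

# U and V always record opposite spins: the association was fixed when
# the particles were created together, so no signalling is needed later.
assert np.all(a_bits != b_bits)
```

Note what the sketch does not contain: there is no communication step between A and B at observation time. The anticorrelation is built into the joint state from step 1, which is precisely the point of the argument above.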

The diagram below summarises the steps in the above argument.

  1. A: Initial state of the particles A↑ B↓ + A↓ B↑.
  2. B: State of the particles after being pulled apart A↑ ••• B↓ + A↓ ••• B↑.
  3. C: Show the observers tasked with observing A and B as yellow squares, which together with the particles describes the state U (A↑ ••• B↓ + A↓ ••• B↑) V before the observations have been made.
  4. D: Show the observers and the particles after the observations have been made. This describes the state U↑ A↑ ••• B↓ V↓ + U↓ A↓ ••• B↑ V↑.

Can we do faster than light (or instantaneous) communication between the A and B particles (which are separated by an enormous distance) in the above description?

  1. If you think like Einstein (who never accepted the reality of states like U↑ A↑ ••• B↓ V↓ + U↓ A↓ ••• B↑ V↑ in QM) you would say "yes", because you would have no other way of understanding how the observations of A and B came to be deterministically interrelated, and therefore arrive at a paradox (assuming you believe that faster than light travel is paradoxical!), so you would deduce that QM must be wrong because it is what is allowing this faster than light communication to occur.
  2. If you do accept the reality of states like U↑ A↑ ••• B↓ V↓ + U↓ A↓ ••• B↑ V↑ in QM then you have no problem in saying that the communication between A and B occurred whilst they were still close together, and that the consequences of this communication are preserved as the particles are pulled apart, and are then "observed" (i.e. correlated with the brain states of U and V).

I used the suggestive "•••" notation to indicate the separation between A and B, because it also suggests that A and B are linked together no matter how far apart they are. This linking is also called "quantum entanglement".

Spooky action at a distance? No way!

Saturday, October 22, 2005

State vector collapse?

This is a story of setbacks and revelations along my route to understanding quantum mechanics properly. The lesson that I have learnt along this route (and elsewhere) is never to accept what people tell you without first checking it all for yourself. If you don't have the resources to do these checks, then you must "label" the information as being potentially unsound.

Why did my undergraduate physics teachers insist that QM states collapse when you observe them? They did it because that's what they were taught themselves. They then went on to describe "paradoxes" in QM, with whimsical names like Schrödinger's Cat, Wigner's Friend, etc. Of course, as an innocent physics undergraduate I ignored the "paradoxes" and concentrated on doing QM calculations so that I could get the answers to come out right. As Richard Feynman said "Shut up and calculate" (or maybe it wasn't Feynman - see here), so that's what I did, and it worked pretty well for me.

The trouble came later when I had more time to think about QM. By then I had forgotten about the "paradoxes", but nevertheless on deep reflection I realised that something was not quite right about QM. I turned it over in my head for most of the time that I was doing my PhD on quantum chromodynamics, and eventually came to the conclusion that some of what my QM teachers had been teaching me was rubbish. What they had taught me was an "effective theory" (i.e. something that works, but which you shouldn't look at too closely) and not a "fundamental theory" of QM, yet they had given me (and everyone else, including themselves) the impression that they were teaching a fundamental theory of QM.

If you are told that something is fundamental then you tend to attribute to it an exalted status, where you are supposed to be able to derive everything from it. It takes on the role that axioms have for mathematicians; fundamental and immutable (actually, nothing is immutable in science). Unfortunately, just as you can write down contradictory axioms, you can also write down contradictory QM. What is the evidence for this? The above mentioned QM "paradoxes", of course!

How do we fix this problem of the QM "paradoxes"? In my musings during my PhD I rebuilt my understanding of what QM was about (this took a long time with many false starts), and the one part that didn't fit naturally was the so-called state vector collapse, where observing a physical system caused its state to collapse from a linear combination of alternatives into a single one of the alternative physically permitted possibilities. The QM equations simply didn't specify how this collapse occurred (or even that it occurred at all), so why were we taught that it did occur? I came to the conclusion that it was mainly for calculational convenience (i.e. an effective theory), and that it simply did not happen that way in practice. In fact, I found out later on that the interpretation of QM that I had derived for myself was already well-known as the Everett interpretation of QM (see The Everett FAQ), but because I had been conducting my QM musings in secret (at the physics laboratory where I did my PhD it was thought to be distinctly unsound to be questioning the foundations of QM) I knew nothing of this prior work. Later on, as I mused deeper and deeper about QM, I refined my viewpoint further, but it still has a distinctly Everett-like flavour. The details are too technical to be repeated here.

It took a long time for me to flush out the errors that my QM teachers had taught me. All attempts at discussion about this with other physicists met with blank stares and uneasy behaviour. The implication was that they thought that I was a crackpot, which didn't form a good basis for building confidence in the correctness of my ideas. Anyway, over the following years it gradually became clear that I had been right all along. For instance, I took instantly to quantum computation, which was so self-evident to me (given my Everett-like view of QM) that I wondered what all the fuss was about. A very good exponent of these quantum computation ideas is David Deutsch, who has written an excellent book on the subject called The Fabric of Reality.

Of course, I can't say that state vectors do not collapse, but just that it is not necessary to assume that they do, and there is nothing at all in the QM equations of motion that says anything about collapse. If there is ever any experimental evidence for collapse, I would be interested to see how the underlying dynamics of collapse is then added into the QM equations of motion.

Unfortunately, QM still appears to be taught in the same way that I was taught it, producing hordes of people who "shut up and calculate". There will be a few of them who will go through the same rediscovery process that I went through. I hope it is easier for them than it was for me.

State vector collapse? No way!

Wednesday, October 19, 2005

The beauty of branes

This month Scientific American has an article on The Beauty of Branes, which describes what Lisa Randall has been doing on the theoretical physics of higher dimensions, and all that sort of thing.

What I find amazing is what is written on her blackboard in the photo that accompanies the article. Actually, what is more to the point is what is not written on the blackboard. The blackboard is totally clean apart from one fragment of maths, which looks like a bit of standard Dirac algebra when I look at it through my magnifying glass.

This is totally unprecedented for a theoretical physicist!

What right does Lisa Randall have to call herself a card-carrying theoretical physicist if she doesn't even have the appropriate blackboard credentials? At the very least, I would expect to have to solve a complicated inverse problem (of the type that geologists regularly solve) just to deinterleave the various layers of equations that had been deposited on the blackboard over time.

Lisa! Please get a grip!

Update: I just noticed that the photo in the contents section of the print edition of Scientific American shows Lisa Randall standing in front of a blackboard that is suitably encrusted with chalky equations. So that's alright then!

Sunday, October 16, 2005

Why time keeps going forwards

This week New Scientist has an article entitled Why time keeps going forwards, in which the reason why time flows in one direction (i.e. from past to future) is explained.

Why is this a puzzle? The problem is that the basic laws of physics, which are so successful at explaining experimental results, are written in a way that makes no distinction between time going forwards and time going backwards.

What does that mean? It means that if the laws of physics allow something to occur, then they also allow the time-reversed version of that "something" to occur (technically it's a bit more complicated than just time-reversal, but the essence is right).

But that's crazy! If a tea cup falls on the floor then it breaks, but we never see the reverse chain of events occurring. So the laws of physics must be wrong.

Not so! The key point to realise is that the following two items are not the same thing:
  1. Equations that describe how things behave in general.
  2. Solutions of those equations that describe specific instances of how things behave.

The solutions must be consistent with the equations, and they must also be consistent with any additional conditions that are imposed on them.

In the case of the flow of time, the asymmetry between the forwards and backwards flow directions is caused by the imposition of an initial condition, in which the universe initially has a highly ordered state. Of all the possible solutions to the equations of the whole universe, only an extremely small number of solutions that respect the highly ordered initial condition are allowed. These are exactly the solutions that we (i.e. our brains) interpret as having a forwards flow of time.

The phrase "flow of time" is a subjective quantity that we can use to summarise the asymmetric behaviour of systems generally. At the level of the whole universe the flow of time is a reliable quantity that goes on and on seemingly for ever, because human time scales are much shorter than the age of the universe. At the level of a small system that we prepare ourselves in a highly ordered initial state and then allow it to evolve (i.e. rather like a mini universe), such as gas atoms in a box with an initial condition that they are all in one corner of the box, there is a clear flow in one direction away from the initial condition as the gas atoms spread out inside the box. However, the small system behaves differently from the whole universe, because very soon the gas atoms come into equilibrium and uniformly fill the box, after which the gas behaves the same way whether you look at it running forwards or running backwards in time, so the flow of time as defined by the behaviour of the gas atoms no longer has a clearly defined direction. From the point of view of a gas atom there is no flow of time when the gas is in equilibrium; that is why "flow of time" is subjective.

Flying spaghetti monster relics discovered

The deity known as the Flying Spaghetti Monster has consolidated his hold over the minds of his worshippers by guiding a few of his disciples (the "chosen ones") to discover some of his ancient relics.

This week New Scientist reports that:

Ancient noodle rewrites history

Who invented the noodle is a hotly contested topic - with the Chinese, Italians and Arabs all staking a claim.

But the discovery of a pot of thin yellow noodles preserved for 4000 years in Yellow river silt may have tipped the bowl in China's favour. It suggests that people were eating noodles at least 1000 years earlier than previously thought, and many centuries before such dishes were documented in Europe.

"These are undoubtedly the oldest noodles ever found," says Houyuan Lu at China's Institute of Geology and Geophysics in Beijing. His team found the noodles buried 3 metres deep in flood-plain sediment at Lajia in northeastern China after lifting out an upturned bowl. The "spaghetti-like" noodles, up to 50 centimetres long, sat atop a mound of silt which had sealed them in the bowl following a major earthquake and flood.

Lu's team report in Nature (vol 437, p 967) that the noodles came from two species of millet grass grown in north-eastern China at that time. They identified the species by examining starch grains in the noodles and phytoliths, silica particles formed in seed husks while plants are alive but which survive as fossils.

They believe the noodles were made by pulling dough into long strands before boiling.

Naturally, being religious relics, I don't expect this noodly experimental result to be reproducible.

Kate Bush video

Channel 4 screened a "video exclusive" last night, in which we saw the new video for "King of the Mountain" by Kate Bush (I already mentioned KOTM here). The video was intriguing (to say the least!), and caused lots of puzzlement amongst her many fans on the main KB news forum (see here).

The video features Kate dressed in what looks like a trench coat, singing whilst swaying back and forth to the music, whilst being filmed from a point somewhere above her. A theme that runs through the whole video is Elvis Presley's clothes (minus Elvis himself) moving around (walking, flying, etc). Elvis is the target of the song's lyrics (see here). However, because this is all coming from the mind of Kate, it will have a deeply layered meaning, and maybe Elvis is a metaphor for someone else. Nothing KB does is as it superficially seems, and we can all have lots of fun trying to guess what the truth is.

Update: Judging by the Opinions on video thread on the Kate Bush discussion group her fans don't know what to make of the KOTM video. There is a lot of drivel written by people who have watched the video only once; that's a big mistake with KB material. Equally, there are overly protective people who want to silence the critics, whilst fondly imagining that KB is reading their postings. I think KB is doing nothing of the sort; her long-time track record is to ignore trends and opinions.

Ig Nobel prize

The Ig Nobel prizes are awarded each year for "achievements that cannot or should not be reproduced" (see here). They are an accidental by-product of the self-questioning system known as "science", because when you allow any question to be asked, then inevitably some very silly questions get asked. These questions lead to experiments being done in order to obtain the answers to the questions, and the results are duly reported in the scientific literature.

Science, by its very nature, encourages repeated asking of the same question, the aim of which is to reproduce the experimental results in order to increase confidence in the correctness of the results. The Ig Nobel prize specifically suggests that this rule should be waived for certain "achievements that cannot or should not be reproduced".

To get an idea of what this is about here is an example of a prize-winning study:

The 2005 winner in the Fluid Dynamics section is Pressures produced when penguins pooh - calculations on avian defaecation (270kB PDF file), Polar Biology, vol. 27, 2003, pp. 56-8.

Sunday, October 09, 2005

Heisenberg's uncertainty principle

I keep seeing Heisenberg's uncertainty principle described in popular journalese as allowing a temporary violation of the law of energy conservation (or of the law of momentum conservation). The argument goes that HUP allows you to lend or borrow energy as long as settlement is made very soon, and that this arrangement represents a temporary violation of the law of energy conservation.

The truth is that there is no violation of the law of energy conservation.

The lending or borrowing of energy (and momentum) is done in a way that always respects energy (and momentum) conservation. How can this be? Just as with financial transactions where you lend to someone or borrow from someone, with energy transactions you lend to something or borrow from something. What is that something? The details depend on the precise circumstances, but I can illustrate one of the possibilities with the diagrams below.

Walk through these diagrams:
  1. A: This shows a particle going along all by itself conserving energy (and momentum) as per usual.
  2. B: This shows a particle that goes along as in A, but then it emits another type of particle (shown as the dashed line), after which the energy (and momentum) of the original particle have changed. Call these particles of type 1 (solid line) and type 2 (dashed line).
  3. C: This shows particles of types 1 and 2 going along, but then the type 2 particle is absorbed by the type 1 particle, after which the energy (and momentum) of the type 1 particle have changed.
  4. D: This shows diagrams B and C combined. The type 1 particle goes along all by itself, emits a type 2 particle, later on it reabsorbs the type 2 particle, and then goes along all by itself again.

That is the basic structure of how particles behave in physics. Energy (and momentum) conservation are always respected, so moving along each of the lines in the above diagrams each particle conserves its energy (and momentum), and at each of the vertices where a particle is emitted or absorbed the sum of the energies (or momenta) over all particles coming into the vertex is the same as the sum going out of the vertex.

Thus in diagram B the incoming type 1 particle's energy (and momentum) are shared between the outgoing type 1 and type 2 particles. This sharing between the type 1 and type 2 particles has to respect only the fact that the sum of the energies (or momenta) have to add up to the energy (or momentum) of the incoming type 1 particle. That means that one of the particles can have a negative energy and the other a positive energy, as long as the sum has the correct value.
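In symbols (a standard textbook statement, not taken from the diagrams themselves), the bookkeeping at the emission vertex in diagram B is:

```latex
% Conservation at the emission vertex in diagram B:
E_{\text{in}} = E_1 + E_2, \qquad \vec{p}_{\text{in}} = \vec{p}_1 + \vec{p}_2 .
% Only the sums are constrained, so e.g. E_2 < 0 is allowed,
% provided E_1 = E_{\text{in}} - E_2 compensates.
```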

The notion of a negative energy is counterintuitive. What does it mean? The physicist's definition of "energy" is the "frequency of oscillation" of the wave that is associated with the particle. Frequency can have any value (positive or negative) just as the rate of advance of a clock can be anything (forwards or backwards), so analogously energy can have any value.

Conservation of energy and momentum at the vertex where a particle is emitted or absorbed means that the particles don't have complete freedom to choose whatever energy and momentum they want to have, because their sum is constrained to be the same as whatever it was at the start. When a particle is going along all by itself (as in diagram A) there is a definite relationship between its energy and its momentum. After the particle has emitted another particle (as in diagram B), although the total energy and momentum are conserved, the individual particles have energies and momenta that do not have the harmonious relationship that exists when a particle is going along all by itself. The physical consequence of this conflict is that each particle cannot travel very far before it has to get its energy and momentum back into a harmonious relationship, and this requires the further emission or absorption of a second particle (as in diagram C). Diagram D brings it all together: energy (and momentum) are conserved everywhere in the diagram (i.e. along each line, and passing through each vertex), and the conflict between each particle's energy and momentum in the "loop" part of the diagram means that the loop cannot be very large.

I think that this conflict between the energy and momentum of each particle, which is caused by the exact energy (and momentum) conservation everywhere in diagram D, is what is wrongly referred to (in popular journalese) as a temporary violation of energy (and momentum) conservation.

The relationship between the size of the conflict between each particle's energy and momentum and extent of the loop in diagram D is given by Heisenberg's uncertainty principle. The greater the conflict the smaller the loop. A particle going along all by itself has perfect consistency between its energy and momentum, so it is not part of a finite-sized loop.
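The standard textbook way of writing all of this (my notation, not anything specific to the diagrams above) is:

```latex
% Energy is the frequency of the associated wave, momentum its wavenumber:
E = \hbar\omega, \qquad \vec{p} = \hbar\vec{k} .

% A particle "going along all by itself" satisfies the mass-shell relation:
E^2 = |\vec{p}|^2 c^2 + m^2 c^4 .

% Inside the loop this relation fails by some amount \Delta E, and
% Heisenberg's uncertainty principle bounds how long the mismatch lasts:
\Delta E \, \Delta t \gtrsim \hbar
\quad\Longrightarrow\quad
\Delta t \lesssim \frac{\hbar}{\Delta E} .
```

The bigger the mismatch $\Delta E$, the shorter the time $\Delta t$, which is exactly the "greater the conflict, the smaller the loop" statement above.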

End of the Enlightenment

New Scientist has a rather worrying article titled End of the Enlightenment, which has the tagline "Why is so much of the world bent on rejecting reason, tolerance and freedom of thought?". It discusses the relationship between religious fundamentalism and secularisation. Should we build our understanding of the world based on empirical evidence from experiments or should we base it on faith and the reading of scriptures? The worrying part about the article is that religious fundamentalism appears to be gaining in popularity, thus risking everything that has been gained during the age of Enlightenment.

These two approaches can be summarised as follows:

  1. The "Enlightenment" (or science) is an intellectual revolutiom that consists of asking questions about how the world works (i.e doing experiments), and based on that asking more questions, and so on. Gradually this accumulates to lead to a consistent understanding of the way the world works. This framework is open to revision in the light of new experimental observations.
  2. Religion offers a stable framework based on sources called "scriptures". The stability of religious fundamentalism creates a consistent framework within which people can live their lives in relation to the world. This framework is not open to revision, although in non-fundamentalist religions the scriptures are frequently reinterpreted in the light of unforeseen circumstances.

A nasty problem arises when people try to apply the above two "principles" to the same area of life. This is a big mistake. "Religion" is a set of rules that can be used to help people to live in harmony, and "science" is a set of rules that can be used to help people to understand how the world around them works. These two rule systems have completely different areas of application.

The words "science" and "religion" frequently manifest themselves informally as follows :

  1. Science commonly makes an appearance as "know-how", which is the general common sense that is used by intelligent people who have not been exposed to science. This does not involve the study of science as practiced by professional scientists.
  2. Religion commonly makes an appearance as a "moral code", which is the general common sense that is used by intelligent people who have not been exposed to religion. This does not involve any fundamentalist reading of scriptures.

The above informal type of practitioner of science and/or religion is usually a well-adjusted individual who is pleasant to know.

Why should some people feel threatened (as the New Scientist article observes) by secularisation? In essence, the aim of science is only to provide a concise framework for inter-relating the meter readings that you get in different experiments. Despite pronouncements by various "scientific" luminaries, science does not make specific assertions about the way the world actually works. The most it can say is that things seem to behave this or that way, because we can't find any counter-examples that show otherwise. This doesn't sound very threatening to me.

Now back to the tagline "Why is so much of the world bent on rejecting reason, tolerance and freedom of thought?". I presume that it is because some people prefer the security of a stable religious framework to the ever changing face of a self-questioning scientific framework, and they wrongly think that the two frameworks are competing for the same territory so they must be in conflict.

Friday, October 07, 2005

Mobile phone radiation

Is the electromagnetic radiation from mobile phones and mobile phone transmitters damaging to your health?

Uncontrolled experiments on myself reveal a tingling sensation in the general area of the ear to which I am pressing the mobile phone. Is this an effect caused by mobile phone radiation? Is it because I am getting cramp from clutching my phone too tightly? Or is it merely my over-fertile imagination? That's what controlled experiments are supposed to disentangle, so you know which effects occur because of which causes. Nevertheless, the tingling sensation is sufficiently strong that I now always use an earpiece for long calls on my mobile phone.

I also hear about electromagnetic standing waves on the wire leading to the earpiece, but I ignore these stories, which has thus created an impregnable defence against any ill effects that such waves might have on my brain.

I would have thought that any form of electromagnetic radiation could potentially have bad (or good) effects on us, simply because everything in our body acts as an electrolyte which can therefore respond to electromagnetic radiation.

The frequencies used by mobile phones are in the 1-2 GHz range, which corresponds to a wavelength range of 15-30 cm. This is in the same ballpark as the size of a human head, so surely it is possible for some sort of resonance to occur?
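The wavelength figures follow directly from lambda = c / f:

```python
# Wavelength of mobile-phone radiation: lambda = c / f.
c = 3.0e8  # speed of light in m/s (rounded)
for f_ghz in (1.0, 2.0):
    wavelength_cm = c / (f_ghz * 1e9) * 100
    print(f"{f_ghz} GHz -> {wavelength_cm:.0f} cm")
# 1 GHz -> 30 cm, 2 GHz -> 15 cm
```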

The mobile phone vested interests are strong. I know someone who has lost around £500k unsuccessfully fighting a court case to have a mobile phone transmitter mast removed from their building.

A key problem is that in science you can't prove a negative. Absence of evidence is not evidence of absence. No-one can prove that an effect does not exist, because people can always claim that the effect in fact does exist, but people were just looking for it in the wrong places. So when people claim that mobile phone radiation is not a problem they are being economical with the truth. What they mean to say is that they haven't yet seen evidence that it is a problem. Usually that means that they have turned a blind eye to things that might be positive evidence, just so they can say they haven't seen any such evidence. I'm being cynical? No, I don't think so.

Human life: The next generation

New Scientist has an article entitled Human life: The next generation by Ray Kurzweil, which suggests that the rate of technological advance is such that it won't be long before we significantly upgrade humans to a better model.

The key to Kurzweil's argument is Kurzweil's Law (aka "the law of accelerating returns"), which says that future advances will give us an exponential growth of technology, rather than merely linear growth. Potentially, that could mean enormous advances over a human lifetime. Past experience shows that this may indeed be true.
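To see why the distinction between linear and exponential growth matters so much, here is a toy comparison (the growth factor of 1.5 per year is entirely made up, purely for illustration):

```python
# Linear growth adds a fixed increment each "year"; exponential growth
# multiplies by a fixed factor. Over 30 years the gap becomes enormous.
linear, exponential = 1.0, 1.0
for year in range(30):
    linear += 1.0        # linear: +1 per year
    exponential *= 1.5   # exponential: x1.5 per year (illustrative rate)
print(linear)       # 31.0
print(exponential)  # about 191751, roughly 6000 times the linear value
```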

The problem is to predict in what direction technological growth will occur. You know that the technology is going to be mind-bogglingly more advanced than current technology, but you don't know where these advances are going to manifest themselves.

Kurzweil says that:

"...information technologies will grow at an explosive rate. And information technology is the technology that we need to consider. Ultimately everything of value will become an information technology: our biology, our thoughts and thinking processes, manufacturing nd many other fields..."

I agree with all of that.

We are way past the stage where owning a steam engine was a good way of investing your money. Useful engines move bits nowadays.

Kurzweil also says that:

"...By the 2020s, nanotechnology will enable us to create almost any physical product we want from inexpensive materials, using information processes."

That is mostly rubbish.

It is certainly true that nanotechnology holds the potential for doing this. However, the design of a nanotechnological "factory" may require far more information than we can accumulate in the few years between now and 2020.

An example is manufacturing a human being, or a fish, or an ant or whatever. Assuming you start from nothing I don't think you will be able to manufacture any of these by 2020. We can already make a virus starting from nothing, but the process is rather hands-on.

Surely, Kurzweil is not suggesting that the same hands-on approach can be used for making more complicated objects? I assume not.

The production process has to be automated somehow. The obvious way is to first of all design a set of simple "tools", that then take over to do the next stage of designing more complex "tools", and so on up the scale of complexity, until you arrive at the object you wanted to manufacture in the first place. This sounds very much like the sort of solution that evolution discovered over a rather long period of random shuffling about and selection by the environment for "fitness".

How exactly is this evolutionary type of process going to be compressed into the time remaining between now and 2020? Er ... it's not. The reason is that Kurzweil's Law would need an exponential growth rate far faster than anything we have observed for it to achieve everything it needs to between now and 2020.

Nevertheless, Kurzweil's Law is probably broadly correct, and we will be able to drive evolution forwards at an increasing rate, so his future with nanotechnological "factories" making almost anything we want is much closer than we think it is. The trick will be not to specify in advance which particular complex objects should be made, but to wait and see which complex objects are feasible to make, and then to try to match these objects up with applications that we find useful.

Having said that, we'll still have some really cool technology in 2020, and some of it will be built by nanotechnological "factories".

Update: There is an interesting book called Accelerando that follows up the consequences of Kurzweil's Law. This was pointed out by John Baez here.