Fact and Fiction

Thoughts about a funny old world, and what is real, and what is not. Comments are welcome, but please keep them on topic.

Sunday, February 04, 2007

Bottom-up design of high energy physics theories

In last week's New Scientist there was an article entitled The Large Hadron Collider: Bring it on!, which discusses how physicists will set about interpreting the flood of data that will emerge from the Large Hadron Collider (LHC) at CERN. This grabbed my attention because the bottom-up method of constructing physical theories described in the article is closely related to the way self-organising networks are designed.

The standard approach is to start with the various candidate theories about how particles interact when they collide inside the LHC, and then to work out what each of these theories predicts should be found in the products of such collisions. This allows you to discover which theory gives predictions that correspond most accurately with what you actually observe in the LHC data, and if the correspondence is exact (or nearly so) then this theory becomes an accepted law of physics.
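To make that selection step concrete, here is a minimal sketch in Python: each candidate theory predicts event counts in a few measurement bins, and the theory whose predictions best fit the observed counts is kept. All of the theory names, bins and numbers are invented purely for illustration.

```python
# Hypothetical binned event counts observed in some measurement.
observed = [120, 85, 55, 30]

# Invented predictions from three candidate theories for the same bins.
predictions = {
    "Standard Model only":   [118, 90, 38, 10],
    "SM + heavy Z' boson":   [118, 90, 52, 28],
    "SM + extra dimensions": [140, 60, 45, 20],
}

def chi_square(expected, data):
    """Pearson chi-square between predicted and observed counts."""
    return sum((o - e) ** 2 / e for o, e in zip(data, expected))

scores = {name: chi_square(pred, observed) for name, pred in predictions.items()}
best = min(scores, key=scores.get)
print(scores)
print("Best-fitting candidate:", best)
```

A real analysis would use likelihoods and fold in systematic uncertainties, but the shape of the procedure is the same: predict, compare, keep the best fit.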

There are lots of other constraints that moderate this process. For instance, it is necessary but not sufficient that the newly accepted theory agrees with what is observed in the LHC data. It must also agree with everything else that we have observed or will observe, so the process of testing the theory goes on in perpetuity as repeated attempts to falsify it continue to be made.

Also it may turn out that none of the candidate theories fits the data, in which case you need to think up some new theories. Or it may turn out that more than one of the candidate theories fits the data, in which case you need to think up some new experiments that might discriminate between the remaining candidate theories.

That's the standard way of finding the right theory: the scientific method.

The article in New Scientist describes an interesting alternative approach to finding the right theory. It was introduced in a paper by Bruce Knuteson and Stephen Mrenna entitled Bard: Interpreting New Frontier Energy Collider Physics (www.arxiv.org/hep-th/0602101), which describes a way of building physical theories from scratch out of the experimental data. Actually, the method described there is not so much an alternative to the scientific method as an attempt to automate part of it.

The approach goes roughly like this. You start with a lot of raw experimental data, and based on the incoming and outgoing particles you see in each collision you hypothesise the existence of particle interactions that can give rise to the observed experimental data. That alone would not achieve very much, because it just says that you see what you see. However, if you go further and impose constraints on what sorts of particle interactions you allow yourself to use, then it is not usually possible to explain all of the experimental data directly, and you are forced to build composite interactions that consist of more than one of the allowed basic interactions joined together in various ways (i.e. you build Feynman diagrams out of elementary vertices joined together by propagators). The hope is that fairly simple composite interactions suffice to explain each set of experimental data, and if they don't then you enlarge your set of allowed basic interactions until you can explain the data.
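As a rough illustration of what "joining basic interactions together" could look like in code, here is a toy search in Python that tries to explain an observed final state as a chain of allowed vertices. The particle names, the vertex list, and the idea of treating an explanation as a simple chain are all simplifying assumptions of mine, not the actual machinery used in the Bard paper.

```python
from collections import Counter

# Toy "vertices": each maps a multiset of incoming particles to a multiset
# of outgoing particles.  These are illustrative stand-ins, not a real model.
BASIC_VERTICES = [
    (Counter({"e-": 1, "e+": 1}), Counter({"Z": 1})),       # e+ e- -> Z
    (Counter({"Z": 1}), Counter({"mu-": 1, "mu+": 1})),     # Z -> mu+ mu-
    (Counter({"e-": 1, "e+": 1}), Counter({"gamma": 1})),   # e+ e- -> photon
    (Counter({"gamma": 1}), Counter({"q": 1, "qbar": 1})),  # photon -> q qbar
]

def explain(initial, final, vertices, max_steps=4):
    """Breadth-first search for a chain of basic vertices that turns the
    observed initial state into the observed final state.  Returns the list
    of vertices used, or None if no explanation exists within max_steps
    (the cue to enlarge the allowed set of basic interactions)."""
    target = Counter(final)
    frontier = [(Counter(initial), [])]
    for _ in range(max_steps):
        next_frontier = []
        for state, history in frontier:
            for inputs, outputs in vertices:
                if all(state[p] >= n for p, n in inputs.items()):
                    new_state = state - inputs + outputs
                    new_history = history + [(inputs, outputs)]
                    if new_state == target:
                        return new_history
                    next_frontier.append((new_state, new_history))
        frontier = next_frontier
    return None

# A two-vertex composite explanation of an e+ e- -> mu+ mu- event.
print(explain({"e-": 1, "e+": 1}, {"mu-": 1, "mu+": 1}, BASIC_VERTICES))
```

Real Feynman diagrams branch rather than chain, of course, but the search-then-enlarge loop is the same in spirit.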

In order for this bottom-up approach to designing physical theories (or models) from observed experimental data to work successfully, you need strict rules to control what interactions you can add to your allowed set of basic interactions. In effect, you need to emulate the refined sense of judgement that theoretical physicists use when crafting new theories. This consists of a combination of two rather different abilities: (1) the inspiration needed to invent a radically new class of models (this is hard to automate), and (2) the patience and stamina to check out the individual members of a class of models (this is relatively easy to automate). The bottom-up approach can readily be applied to type-2 modelling, but it becomes progressively harder as you penetrate type-1 modelling territory. Nevertheless, provided you can write a "template" for each class of models, it is possible in principle to automate the search over model space to find the candidate that best fits the data.
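Here is what a "template" could look like in the simplest possible terms: a class of models with one free parameter, scanned automatically to find the member that best fits the data. The parameter (a hypothetical new particle mass), the prediction function, and all the numbers are inventions for the sake of the sketch.

```python
# Hypothetical observed counts, with an excess in the last bin.
observed = [118, 90, 38, 30]

def template_prediction(mass):
    """One template = one class of models: a Standard-Model-like background
    plus a bump whose position depends on a single mass parameter (GeV).
    Entirely made up for illustration."""
    background = [118, 90, 38, 10]
    bump_bin = min(3, int(mass // 250))   # which bin the bump lands in
    prediction = list(background)
    prediction[bump_bin] += 20
    return prediction

def chi_square(expected, data):
    return sum((o - e) ** 2 / e for o, e in zip(data, expected))

# The type-2 part of the job: a mechanical scan over the model space
# defined by the template.
masses = range(100, 1001, 50)
best_mass = min(masses, key=lambda m: chi_square(template_prediction(m), observed))
print("Best-fitting member of this template:", best_mass, "GeV")
```

Inventing the template in the first place (the type-1 part) is where the hard-to-automate inspiration comes in.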

A big advantage of the bottom-up approach is that it is relatively unprejudiced: it treats each of the candidate models in an even-handed way, so there is no possibility of prejudice making you overlook a viable candidate. Nevertheless, the candidates are limited to those classes of models for which templates were defined at the outset, so a global form of prejudice is hard-wired into the bottom-up approach.

What really caught my eye about this bottom-up approach to the design of physical theories is its "equivalence" to the data-driven design of self-organising networks. In both cases you try to "explain" the structure of the data by first of all attempting to fit the data with the allowed building blocks: basic interactions in physics, or links in networks. If (or usually, when) this fails to give an accurate fit to the data, you then move to a composite explanation in which you introduce another layer of explanation to explore a larger space of models, within which you hope that a good fit to the data can be found.

In physical theories this larger space of models would allow additional types of basic interaction, or it might be just a refined version of the basic interactions that you were already using (e.g. using more loops). In self-organising networks this larger space is usually (but not invariably) constructed by adding another layer of nodes to the network, which has the effect of introducing indirect links between the nodes in the previously existing network layer(s). In both physical theories and self-organising networks the effect of enlarging the space of models is essentially the same; progressively more indirect explanations of the data become candidates to be considered.
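The network side of the analogy can be made concrete with a toy example. Below is a sketch in Python (using numpy) in which direct links between inputs and an output cannot reproduce an XOR-like data set, but adding one layer of hidden nodes, and hence indirect links, fits it exactly. The hidden nodes are hand-picked threshold units, an assumption made purely to keep the sketch short; a self-organising network would of course discover its own.

```python
import numpy as np

# Target data: an XOR-like pattern, which no set of direct (linear) links
# between inputs and output can reproduce exactly.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# First attempt: direct links only (a linear least-squares fit).
X1 = np.hstack([X, np.ones((4, 1))])              # add a bias column
w_direct, *_ = np.linalg.lstsq(X1, y, rcond=None)
err_direct = np.abs(X1 @ w_direct - y).max()

# Enlarged model space: one extra layer of hidden nodes, giving the output
# indirect links to the inputs.
hidden = np.column_stack([
    (X.sum(axis=1) >= 1).astype(float),           # "at least one input on"
    (X.sum(axis=1) >= 2).astype(float),           # "both inputs on"
    np.ones(4),                                   # bias
])
w_hidden, *_ = np.linalg.lstsq(hidden, y, rcond=None)
err_hidden = np.abs(hidden @ w_hidden - y).max()

print("worst-case error, direct links only:", round(err_direct, 3))
print("worst-case error, with hidden layer:", round(err_hidden, 3))
```

The direct fit gets stuck at an error of 0.5 on every data point, whereas the indirect explanation through the hidden layer reproduces the data exactly, which is the "enlarge the model space until it fits" move in miniature.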

I wonder to what extent self-organising networks (of whatever type) might be used to automate the discovery of physical theories.
