Fact and Fiction

Thoughts about a funny old world, and what is real, and what is not. Comments are welcome, but please keep them on topic.

Wednesday, May 09, 2012

Where have all my images gone?

Very funny, Google! All of the images have disappeared from this blog.

Saturday, August 25, 2007

This blog is now closed

Please go to The Spline for future blog postings.

Sunday, February 25, 2007

Widescreen laptop computers

My veteran laptop PC is a (less than 1 GHz) Pentium 3 powered Compaq Presario 1800, which has a mere 320MB of RAM and a 30GB hard disk, and even needs an expansion card to talk to a wireless network. The real reason that I bought it around 5 years ago was the quality and size of its LCD screen: a 15 inch panel that comfortably runs at 1400 by 1050 pixels (16 bits per pixel).

The LCD screen makes heavy-duty technical word processing relatively painless, especially as I use Publicon for my technical writing. So, despite being an old and underpowered laptop PC, it produces worthwhile results because it is well matched to the needs of the software that I run on it.

However, it would be nice to upgrade my laptop PC now that its processor is two generations out of date, having been overtaken by the Pentium 4 and the Core 2 Duo. A new laptop PC would also have much more RAM and a much larger hard disk, which would make the computer more generally useful to me.

So off I went to PC World to do some window-shopping, and I was really disappointed with what I discovered there (and at various other places that I also visited). Every laptop PC on display had a widescreen format. That would be fine if the height of the screen were as good as what I already have on my 5 year old laptop PC, and some extra width had been added to give it a widescreen format. However, none of the screens used the full height that was available in the clamshell lid housing. Instead, they had a thick plastic border area both above and below the screen to act as a "filler" for a missing area of screen, so that the overall effect was to make the screen have a widescreen format (i.e. more like a letter box than a window).

It seems that the laptop PC manufacturers think that the aspect ratio of the screen itself is more important than fitting the largest possible screen in the clamshell lid housing, even in their top-of-the-range laptop PCs. The only reason that I can think of for doing this is that it is fashionable to have a widescreen format LCD screen, and that the laptop PC will sell only if its LCD screen satisfies this criterion, even if there is room in the clamshell lid to fit a larger (i.e. higher) LCD screen.

I will never buy a laptop PC that doesn't allow me to do heavy-duty technical word processing with maximum facility. Currently, I have 1050 pixels of screen height on my 5 year old laptop PC, and I will not settle for fewer pixels than this. There was not a single laptop PC on display at PC World that satisfied this criterion; I also looked in various other places with a similar lack of success, so this comment is not a criticism of PC World. Later, I checked on-line and found a few models of laptop PC that were OK for my purposes, but they were in a tiny minority.

Another thing I noticed at PC World was that all of the laptop PCs on display had highly reflective LCD screens, whereas I am used to using LCD screens that are not very reflective. I checked how easy it would be to use these LCD screens when there was a lot of light coming from behind me. My conclusion is that this sort of highly reflective LCD screen is unusable unless the lighting conditions are of the sort that you would get in an ergonomically designed office. You certainly couldn't use it if there was a significant amount of light coming from behind you, and it would be hopeless outdoors on a sunny day, especially if you were wearing a light-coloured shirt.

I'm glad that I went on my window-shopping expedition to PC World. My interest in upgrading my laptop PC has now definitely been put on hold until the manufacturers get the ergonomics of their laptop PC designs sorted out.

I didn't even try to put to serious use any of the laptop PCs that were on display; no doubt I would have found other things to moan about if I had tried them out. I'll leave it for 6 months before I do another window-shopping expedition, and hope that things have improved by then.

Update: 6 weeks have now passed, and I could not resist trying out Windows Vista on some of the laptop PCs - the ones with 2 GB of RAM. Superficially, there is a lot of "eye candy", but I hoped that I would find it more interesting underneath. Sadly, I didn't get very far, because I was astounded at how slow Windows Vista is, even on a fairly powerful laptop PC (e.g. the ones costing around £1000 from Sony and HP). I deliberately booted from cold to see how long it took to start up, and I thought something had gone wrong because nothing seemed to happen for a long time. The whole booting process took at least a couple of minutes! This sluggishness was not limited to booting the PC; the whole user experience was one of being held back by a PC that was unable to keep up with you. I now have even more reasons (see my complaints above about the latest types of LCD display) to stay with my old 2001 vintage 1GHz Pentium 3 laptop running Windows XP in a paltry 320MB of RAM.

Friday, February 09, 2007

Enigma variations

Am I the only one, or has anyone else noticed how a lot of the New Scientist brainteasers (in the Enigma section) appear to have been constructed so that they can be solved by brute force? In fact, brute force makes it very easy to construct a "brain" teaser in the first place, because all you need to do is to describe a largish (but not too large) ensemble of potential solutions (e.g. all possible n-by-n grids of digits), then state a set of conditions that a unique member of the ensemble has to satisfy, then ask the reader to find that unique member, and submit it as their solution to the "brain" teaser.

Enigma 1428 in the most recent New Scientist appears to belong to this category of "brain" teaser. Here it is:

Foursight. Enigma No. 1428

If you place a 1, 2, 3 or 4 in each of the sixteen places in this grid, with no digit repeated in any row or column, then you end up with a “Latin square”. Then you can read eight four-figure numbers in the grid, namely across each of the four rows and down each of the four columns. Your task today is to construct such a Latin square in which those eight numbers are all different, none is a prime, no two are reverses of each other, and:

(1st column × 3rd column) / (2nd row × 3rd row) is greater than 4.


The relatively small size of this problem, and the fact that it looks numerically messy, immediately suggested to me that it was designed on a computer, and that the quickest way to get the correct result was a brute force computer attack. I got out my trusty Mathematica and did the following steps (inputs in bold), where I made absolutely no attempt to streamline the code, I generated each input by copying/pasting/modifying earlier inputs, and the goal was to get to the answer as quickly as possible. The code is not really intended to be read except by masochists; it is a special write-only style that I use for problems like this.

How many candidate Latin squares with all digits different in each single row?

4!^4

331776

Construct all possible single rows.

perms = Permutations[{1, 2, 3, 4}]

{{1,2,3,4},{1,2,4,3},{1,3,2,4},{1,3,4,2},{1,4,2,3},{1,4,3,2},{2,1,3,4},{2,1,4,3},{2,3,1,4},{2,3,4,1},{2,4,1,3},{2,4,3,1},{3,1,2,4},{3,1,4,2},{3,2,1,4},{3,2,4,1},{3,4,1,2},{3,4,2,1},{4,1,2,3},{4,1,3,2},{4,2,1,3},{4,2,3,1},{4,3,1,2},{4,3,2,1}}

Join together in all possible ways to make all possible candidate Latin squares.

perms2 = Flatten[Table[{perms[[i]], perms[[j]], perms[[k]], perms[[l]]}, {i, 24}, {j, 24}, {k, 24}, {l, 24}], 3];
Length[perms2]

331776

Keep only the ones in which each single column has all digits different. The rows already satisfy this condition.

perms3=Select[perms2,Apply[And,Map[Length[Union[#]]==4&,Transpose[#]]]&];
Length[perms3]

576

Keep only the ones in which all 8 rows and columns are different numbers.

perms4=Select[perms3,Length[Union[Map[FromDigits,Join[#,Transpose[#]]&[#]]]]==8&];
Length[perms4]

480

Keep only the ones in which none of the 8 rows and columns is prime.

perms5=Select[perms4,!Apply[Or,Map[PrimeQ[FromDigits[#]]&,Join[#,Transpose[#]]&[#]]]&];
Length[perms5]

88

Keep only the ones in which none of the 8 rows and columns is the reverse of another.

perms6=Select[perms5,Length[Union[Map[FromDigits,Join[#,Transpose[#],Map[Reverse,#],Map[Reverse,Transpose[#]]&[#]]]]]==16&];
Length[perms6]

24

Keep only the ones in which col 1 * col 3 / (row 2 * row 3) > 4.

perms7=Select[perms6,#[[5]]#[[7]]/(#[[2]]#[[3]])>4&[Map[FromDigits,Join[#,Transpose[#]]&[#]]]&];
Length[perms7]

1

The unique answer.

perms7[[1]]

{{3,2,4,1},{1,4,3,2},{2,3,1,4},{4,1,2,3}}
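
As a quick sanity check (a direct re-statement of the puzzle's conditions, separate from the search above), the grid can be verified explicitly:

grid = {{3, 2, 4, 1}, {1, 4, 3, 2}, {2, 3, 1, 4}, {4, 1, 2, 3}};
nums = Map[FromDigits, Join[grid, Transpose[grid]]];
revs = Map[FromDigits, Map[Reverse, Join[grid, Transpose[grid]]]];
{Length[Union[nums]] == 8, !Apply[Or, Map[PrimeQ, nums]], Length[Union[Join[nums, revs]]] == 16, nums[[5]] nums[[7]]/(nums[[2]] nums[[3]]) > 4}

{True, True, True, True}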

Winter has come

Here is a photo showing the view from my house this afternoon. On a clear day you would see hills/woods/fields in the middle distance of the photo. But yesterday and today the snow monster visited instead.


I live way up on the side of the Malvern Hills facing the prevailing weather, so I get more than my fair share of the weather when it comes. The depth of the snow is nearly twice what it is in regions neighbouring my microclimate.

I was going to go out for a snowy walk on the Malvern Hills, but decided not to when I saw how deep the snow was. So I have now spent a couple of days holed up in my house, because my car is notoriously difficult to drive in snow (it is a heavy BMW with an automatic gearbox) and I would never get it back up the hill driving from work to home. I have put my time to good use creating some tutorial material to help people at my place of work understand all about the research that I have been doing for the past 20 years. Somehow I think the effort will not be as worthwhile as I had initially hoped it would be.

I must get out and play in the snow before it disappears, which the weather forecast says will happen over the weekend.

Sunday, February 04, 2007

Bottom-up design of high energy physics theories

In last week's New Scientist there was an article entitled The Large Hadron Collider: Bring it on! which discusses how physicists are going to set about interpreting the flood of data that will emerge from the Large Hadron Collider (LHC) at CERN. This grabbed my attention because the method of bottom-up construction of physical theories that is described in the article is closely related to how self-organising networks are designed.

The standard approach is to start with the various candidate theories about how particles interact when they hit each other inside the LHC, and then make predictions about what each of these theories expects to find in the products of such collisions. This allows you to discover which theory gives you predictions that correspond most accurately with what you actually observe in the LHC data, and if the correspondence is exact (or nearly so) then this theory becomes an accepted law of physics.
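
Purely as a cartoon of that selection step (the channels and event counts below are invented, and have nothing to do with real LHC data), you could score each candidate theory against the observed counts with a crude chi-squared measure and keep whichever theory fits best:

observed = {120, 45, 9};  (* invented event counts in three channels *)
predictions = {"Theory A" -> {118., 50., 8.}, "Theory B" -> {100., 60., 20.}};
chiSquared[pred_] := Total[(observed - pred)^2/pred];
Sort[Map[{chiSquared[Last[#]], First[#]} &, predictions]]  (* smallest chi-squared first; here "Theory A" wins *)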

There are lots of other constraints that moderate this process. For instance, it is necessary but not sufficient that the newly accepted theory agrees with what is observed in the LHC data. It must also agree with everything else that we have observed or will observe, so the process of testing the theory goes on in perpetuity as repeated attempts to falsify it continue to be made.

Also it may turn out that none of the candidate theories fits the data, in which case you need to think up some new theories. Or it may turn out that more than one of the candidate theories fits the data, in which case you need to think up some new experiments that might discriminate between the remaining candidate theories.

That's the standard way of finding the right theory: the scientific method.

The article in New Scientist describes an interesting alternative approach to finding the right theory. It was introduced in a paper by Bruce Knuteson and Stephen Mrenna entitled Bard: Interpreting New Frontier Energy Collider Physics (www.arxiv.org/hep-th/0602101), which describes a way of building physical theories from scratch from the experimental data. Actually, the method described there is not an alternative to the scientific method, but rather it is an attempt to automate part of the scientific method.

The approach goes roughly like this. You start with a lot of raw experimental data, and based on what incoming and outgoing particles you see in each collision you hypothesise the existence of particle interactions that can give rise to the observed experimental data. That alone would not achieve very much because it just says that you see what you see. However, if you go further than this by imposing constraints on what sorts of particle interactions you allow yourself to use, then it is not usually possible to explain all of the experimental data directly, and you are forced to build composite interactions that consist of more than one of the allowed basic interactions joined together in various ways (i.e. you build Feynman diagrams out of elementary vertices joined together by propagators). The hope is that you can use fairly simple composite interactions to successfully explain each set of experimental data, and if you can't do this then you enlarge your set of allowed basic interactions until you can explain the data.
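
Here is a toy sketch of that idea (the particle names and vertices are invented, states are treated as plain sets rather than multisets, and this is nothing like the actual machinery of the Bard paper): given a set of allowed basic vertices, it does a breadth-first search for a chain of vertices (i.e. a composite interaction) that connects a given initial state to a given final state.

vertices = {{"e-", "e+"} -> {"photon"}, {"photon"} -> {"mu-", "mu+"}, {"photon"} -> {"q", "qbar"}};

(* apply every applicable vertex to a state: remove its incoming particles and add its outgoing ones *)
step[state_] := Union[Map[Sort[Join[Complement[state, First[#]], Last[#]]] &, Select[vertices, Complement[First[#], state] === {} &]]];

(* search over composite interactions built from at most maxSteps basic vertices *)
reachableQ[initial_, final_, maxSteps_] := Module[{frontier = {Sort[initial]}, seen = {Sort[initial]}, k = 0},
  While[k < maxSteps && !MemberQ[seen, Sort[final]],
   frontier = Complement[Union[Flatten[Map[step, frontier], 1]], seen];
   seen = Union[seen, frontier]; k++];
  MemberQ[seen, Sort[final]]];

{reachableQ[{"e-", "e+"}, {"q", "qbar"}, 3], reachableQ[{"e-", "e+"}, {"e-", "mu+"}, 3]}

{True, False}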

In order for this bottom-up approach to designing physical theories (or models) from observed experimental data to work successfully, you need to have strict rules to control what interactions you can add to your allowed set of basic interactions. In effect, you need to emulate the refined sense of judgement that theoretical physicists use when crafting new theories. This consists of a combination of two rather different abilities: (1) the inspiration needed to invent a radically new class of models (this is hard to automate), (2) the patience and stamina to check out the individual members of a class of models (this is relatively easy to automate). The bottom-up approach can readily be applied to type-2 modelling, but it becomes progressively harder as you penetrate type-1 modelling territory. Nevertheless, provided you can write a "template" for each class of models, it is possible in principle to automate the search over model space to find the candidate that is the best fit to the data.
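
As a cartoon of the automatable type-2 part (the "templates" below are just parameterised families of curves, and the data points are invented), the search amounts to fitting every template to the data and keeping whichever instance fits best:

data = {{1, 2.2}, {2, 4.9}, {3, 9.1}, {4, 16.3}, {5, 24.8}};  (* invented observations *)
templates = {{a + b x, {a, b}}, {a x^2 + b x + c, {a, b, c}}, {a x^3 + b, {a, b}}};  (* each template defines a class of models *)
fits = Map[FindFit[data, First[#], Last[#], x] &, templates];
residual[expr_, rules_] := Total[Map[((expr /. rules /. x -> First[#]) - Last[#])^2 &, data]];
scores = MapThread[residual[First[#1], #2] &, {templates, fits}];
First[Sort[Transpose[{scores, Map[First, templates], fits}]]]  (* the best-fitting class, with its fitted parameters *)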

A big advantage of the bottom-up approach is that it is relatively unprejudiced, because it treats each of the candidate models in an even-handed way, so there is no possibility of prejudice making you overlook a viable candidate. Nevertheless, the candidates are limited to only those classes of models for which templates have been defined at the outset, so there is a global form of prejudice hard-wired into the bottom-up approach.

What really caught my eye about this bottom-up approach to the design of physical theories is its "equivalence" to the data-driven design of self-organising networks. In both cases you try to "explain" the structure of the data by first of all attempting to fit the data with the allowed building blocks: basic interactions in physics, or links in networks. If (or usually, when) this fails to give an accurate fit to the data, you then move to a composite explanation in which you introduce another layer of explanation to explore a larger space of models, within which you hope that a good fit to the data can be found.

In physical theories this larger space of models would allow additional types of basic interaction, or it might be just a refined version of the basic interactions that you were already using (e.g. using more loops). In self-organising networks this larger space is usually (but not invariably) constructed by adding another layer of nodes to the network, which has the effect of introducing indirect links between the nodes in the previously existing network layer(s). In both physical theories and self-organising networks the effect of enlarging the space of models is essentially the same; progressively more indirect explanations of the data become candidates to be considered.
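
As a toy illustration of that last point (hand-rolled, with invented one-dimensional data, and not tied to any particular kind of self-organising network), here is the same data fitted first by a single direct linear link and then by a model with three hidden nodes, whose indirect links enlarge the space of models:

data = Table[{x, Sin[3 x]}, {x, -1., 1., 0.1}];  (* invented one-dimensional data *)

(* model space 1: one direct link, y = w x + b *)
err1 = Total[Map[(w First[#] + b - Last[#])^2 &, data]];
fit1 = FindMinimum[err1, {w, b}];

(* model space 2: three hidden nodes introduce indirect links, y = v1 Tanh[u1 x + c1] + v2 Tanh[u2 x + c2] + v3 Tanh[u3 x + c3] *)
model2[x_] := v1 Tanh[u1 x + c1] + v2 Tanh[u2 x + c2] + v3 Tanh[u3 x + c3];
err2 = Total[Map[(model2[First[#]] - Last[#])^2 &, data]];
fit2 = FindMinimum[err2, {{v1, 1}, {v2, -1}, {v3, 0.5}, {u1, 1}, {u2, 2}, {u3, 3}, {c1, 0}, {c2, 1}, {c3, -1}}];

{First[fit1], First[fit2]}  (* the enlarged model space should reach a smaller fitting error *)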

I wonder to what extent self-organising networks (of whatever type) might be used to automate the discovery of physical theories.

Saturday, February 03, 2007

Bell Labs: Over and out

In this week's New Scientist there is an article entitled Bell Labs: Over and out, which is about the decline and fall of Bell Labs. As the article puts it, Bell Labs was "formerly the world's premier industrial research laboratory". So, what went wrong?

The article says

What, then, was the key to its success? A large part of it was the way it encouraged its employees to strive for great ideas and tackle the toughest problems. The company trained technical managers to inspire staff with ideas rather than meddle with details, and could afford to have multiple teams try different approaches at once. No doubt it also benefited from the security of working for a regulated monopoly insulated from the whims of the marketplace.

and also

Eventually Bell's success ended too. After years of litigation, AT&T spun off its regional telephone service as seven separate companies in 1984, ending the decades of cosy monopoly. A dozen years later, it spun off most of Bell Labs along with its equipment division as Lucent Technologies, which initially prospered but then stumbled badly, shrinking from a peak of 160,000 employees to 30,500 before merging with Alcatel ... It will be missed - it already is. The greatest loss is not so much Bell's vaunted basic research, but its unique ability to marshal teams of top technologists to transform bright ideas into effective technology.

Bell Labs was a laboratory that did great research because it employed top-notch researchers, because it was protected from the marketplace, and because its managers could operate in a hands-off mode rather than micromanaging everything, which allowed its researchers to get on with doing basic (read "long-term") research. When these preconditions (i.e. protection from the marketplace, and hands-off management) are removed, the structure of the organisation begins to change irreversibly, e.g. basic research ceases to be done.

I'm not so sure that I agree that "the greatest loss is not so much Bell's vaunted basic research", because basic research provides the source material for future technology. Even if you employ the best people in the world for turning the results of basic research into usable technology, you can get away without doing basic research for only so long before the cellar full of fine wines laid down in earlier years runs dry.

I particularly liked the phrase "transform bright ideas into effective technology" that was used in the article, because it sounds like the sort of "mission statement" that could be used by any organisation that wanted to plunder its cellar to convert its past basic research results into technology.

Because I do a lot of basic research myself, I have an interest in seeing the freedom to do basic research granted to individuals who have a flair for this sort of activity (not many people, in my experience); I have commented on this in an earlier posting here. It seems that wherever I look, conditions are changing in ways that are hostile to this civilisation-creating activity; see here for my earlier posting on this.

Tuesday, January 30, 2007

Marvin Minsky bashes neuroscience

From KurzweilAI.net I learn that Marvin Minsky has given an interview to Discover magazine here. Minsky is one of the pioneers of artificial intelligence, and he is a very articulate and outspoken character. In the interview he comments on the activities of neuroscientists.

Q (Discover). Neuroscientists' quest to understand consciousness is a hot topic right now, yet you often pose things via psychology, which seems to be taken less seriously. Are you behind the curve?

A (Minsky). I don't see neuroscience as serious. What they have are nutty little theories, and they do elaborate experiments to confirm them and don't know what to do if they don't work. This book [The Emotion Machine] presents a very elaborate theory of consciousness. Consciousness is a word that confuses possibly 16 different processes. Most neurologists think everything is either conscious or not. But even Freud had several grades of consciousness. When you talk to neuroscientists, they seem so unsophisticated; they major in biology and know about potassium and calcium channels, but they don't have sophisticated psychological ideas. Neuroscientists should be asking: What phenomenon should I try to explain? Can I make a theory of it? Then, can I design an experiment to see if one of those theories is better than the others? If you don't have two theories, then you can't do an experiment. And they usually don't even have one.

I'm sure the activities of neuroscientists are well-intentioned, as they adopt a reductionist approach to the analysis of a highly complex system (i.e. the brain) by working upwards from the detailed behaviour of individual neurons. However, neuroscientists' theorising about AI is bound to be wildly off-target, since AI lives at a much higher level than the relatively low level where they are working. Tracing the detailed neural circuitry of small parts of the brain (or even the entire brain) will not lead to AI; discovering the underlying principles of AI (whatever those turn out to be) will lead to AI, and it will not necessarily need biological neurons to "live" in.

In the early 1980's I jumped on the "neural network" bandwagon that had restarted around that time. There was a lot of hype back then that this was the rigorous answer to understanding how the brain worked, and it took me a few years to convince myself that this claim was rubbish; the "neural network" bandwagon was based solely on some neat mathematical tricks that emerged around that time (e.g. back-propagation for training multi-layer networks), rather than on better insight into information processing or even AI. My rather belated response was to "rebadge" my research programme by avoiding the phrase "neural networks" and instead using phrases like "adaptive networks"; I wasn't alone in using this tactical response.

Q (Discover). So as you see it, artificial intelligence is the lens through which to look at the mind and unlock the secrets of how it works?

A (Minsky). Yes, through the lens of building a simulation. If a theory is very simple, you can use mathematics to predict what it'll do. If it's very complicated, you have to do a simulation. It seems to me that for anything as complicated as the mind or brain, the only way to test a theory is to simulate it and see what it does. One problem is that often researchers won't tell us what a simulation didn't do. Right now the most popular approach in artificial intelligence is making probabilistic models. The researchers say, "Oh, we got our machine to recognize handwritten characters with a reliability of 79 percent." They don't tell us what didn't work.

This caricature of the cargo-cult science that passes itself off as genuine science made me laugh. As it happens, I use (a variant of) the probabilistic models that Minsky alludes to, and I find the literature on the subject unbelievably frustrating to read. A typical paper will contain an introduction, some theory, a computer simulation to illustrate an application of the theory, and a pathetically inadequate interpretation of what it all means. The most important part of a paper (the "take home message", if you wish) is the interpretation of the results that it reports; this comprises the new conceptual tools that I want to take away with me to apply elsewhere. Unfortunately, the emphasis is usually on presenting results from a wide variety of computer simulations and comparisons with competing techniques, which certainly fills up the journal pages, but it doesn't do much to advance our understanding of what is going on.

Where are the conceptual tools? This is like doing "butterfly" collecting rather than doing science. We need some rigorous organisational principles to help us gain a better understanding of our large collection of "butterflies", rather than taking the easy option of simply catching more "butterflies".

It seems to me that the situation in AI is analogous to, but much more difficult than, the situation in high energy physics during the 1950's and 1960's, when the "zoo" of strongly interacting particles grew to alarming proportions, and we explained what was going on only when the eightfold way and the quark model of hadrons were proposed. I wonder if there are elementary degrees of freedom (DOF) underlying AI that are analogous to the quark (and gluon) DOF in hadrons.

I'll bet that the "elementary" DOF of AI involve the complicated (strong?) mutual interaction of many neurons, just as the "elementary" DOF in strong interactions are not actually elementary quarks but are composite entities built out of quarks (and gluons). I'll also bet that we won't guess what the "elementary" DOF of AI are by observing the behaviour of individual neurons (or even small sets of neurons), but we will postdict (rather than predict) these DOF after someone (luckily) observes interesting information processing happening in the collective behaviour of large sets of neurons, or if someone (even more luckily) has a deep insight into the theory of information processing in large networks of interacting processing units.