Fact and Fiction

Thoughts about a funny old world, and what is real, and what is not. Comments are welcome, but please keep them on topic.

Tuesday, January 30, 2007

Marvin Minsky bashes neuroscience

From KurzweilAI.net I learn that Marvin Minsky has given an interview to Discover magazine here. Minsky is one of the pioneers of artificial intelligence, and he is a very articulate and outspoken character. In the interview he comments on the activities of neuroscientists.

Q (Discover). Neuroscientists' quest to understand consciousness is a hot topic right now, yet you often pose things via psychology, which seems to be taken less seriously. Are you behind the curve?

A (Minsky). I don't see neuroscience as serious. What they have are nutty little theories, and they do elaborate experiments to confirm them and don't know what to do if they don't work. This book [The Emotion Machine] presents a very elaborate theory of consciousness. Consciousness is a word that confuses possibly 16 different processes. Most neurologists think everything is either conscious or not. But even Freud had several grades of consciousness. When you talk to neuroscientists, they seem so unsophisticated; they major in biology and know about potassium and calcium channels, but they don't have sophisticated psychological ideas. Neuroscientists should be asking: What phenomenon should I try to explain? Can I make a theory of it? Then, can I design an experiment to see if one of those theories is better than the others? If you don't have two theories, then you can't do an experiment. And they usually don't even have one.

I'm sure the activities of neuroscientists are well-intentioned, as they adopt a reductionist approach to the analysis of a highly complex system (i.e. the brain) by working upwards from the detailed behaviour of individual neurons. However, neuroscientists' theorising about AI is bound to be wildly off-target, since AI lives at a much higher level than the relatively low level where they are working. Tracing the detailed neural circuitry of small parts of the brain (or even the entire brain) will not lead to AI; discovering the underlying principles of AI (whatever those turn out to be) will lead to AI, and it will not necessarily need biological neurons to "live" in.

In the early 1980s I jumped on the "neural network" bandwagon that had restarted around that time. There was a lot of hype back then that this was the rigorous answer to understanding how the brain worked, and it took me a few years to convince myself that this claim was rubbish; the "neural network" bandwagon was driven by some neat mathematical tricks that emerged around that time (e.g. back-propagation for training multi-layer networks), rather than by any better insight into information processing or AI. My rather belated response was to "rebadge" my research programme by avoiding the phrase "neural networks" in favour of phrases like "adaptive networks"; I wasn't alone in using this tactical response.
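For readers who never caught that particular bandwagon: back-propagation is just gradient descent on a layered network's error, with the error signal pushed backwards through the layers via the chain rule. Here is a minimal sketch in Python that trains a tiny network on the XOR problem; the network size (2-3-1), learning rate, epoch count and random seed are all illustrative choices of mine, not anything taken from the literature of the time.

```python
import math
import random

# A minimal multi-layer network trained by back-propagation on XOR.
# Architecture (2 inputs -> 3 hidden -> 1 output), learning rate and
# seed are illustrative choices, not from any particular paper.

random.seed(0)

N_HIDDEN = 3
LR = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Each weight row carries a bias as its last element.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(N_HIDDEN)]
w_o = [random.uniform(-1, 1) for _ in range(N_HIDDEN + 1)]

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(sum(w_o[i] * h[i] for i in range(N_HIDDEN)) + w_o[-1])
    return h, o

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

e0 = total_error()

for _ in range(10000):
    for x, t in data:
        h, o = forward(x)
        # Output-layer delta: error times the sigmoid derivative o * (1 - o).
        d_o = (o - t) * o * (1 - o)
        # Back-propagate the delta to each hidden unit via the chain rule.
        d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(N_HIDDEN)]
        # Gradient-descent weight updates.
        for i in range(N_HIDDEN):
            w_o[i] -= LR * d_o * h[i]
        w_o[-1] -= LR * d_o
        for i in range(N_HIDDEN):
            for j in range(2):
                w_h[i][j] -= LR * d_h[i] * x[j]
            w_h[i][2] -= LR * d_h[i]

e1 = total_error()
print(round(e0, 3), round(e1, 3))
```

The trick's appeal was entirely practical: it made training *multi-layer* networks tractable for the first time, which is precisely why it generated so much excitement without saying anything new about brains.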

Q (Discover). So as you see it, artificial intelligence is the lens through which to look at the mind and unlock the secrets of how it works?

A (Minsky). Yes, through the lens of building a simulation. If a theory is very simple, you can use mathematics to predict what it'll do. If it's very complicated, you have to do a simulation. It seems to me that for anything as complicated as the mind or brain, the only way to test a theory is to simulate it and see what it does. One problem is that often researchers won't tell us what a simulation didn't do. Right now the most popular approach in artificial intelligence is making probabilistic models. The researchers say, "Oh, we got our machine to recognize handwritten characters with a reliability of 79 percent." They don't tell us what didn't work.

This caricature of the cargo-cult science that passes itself off as genuine science made me laugh. As it happens, I use (a variant of) the probabilistic models that Minsky alludes to, and I find the literature on the subject unbelievably frustrating to read. A typical paper will contain an introduction, some theory, a computer simulation to illustrate an application of the theory, and a pathetically inadequate interpretation of what it all means. The most important part of a paper (the "take home message", if you wish) is the interpretation of the results that it reports; this comprises the new conceptual tools that I want to take away with me to apply elsewhere. Unfortunately, the emphasis is usually on presenting results from a wide variety of computer simulations and comparisons with competing techniques, which certainly fills up the journal pages, but it doesn't do much to advance our understanding of what is going on.

Where are the conceptual tools? This is like "butterfly" collecting rather than science. We need some rigorous organisational principles to help us gain a better understanding of our large collection of "butterflies", rather than taking the easy option of simply catching more "butterflies".

It seems to me that the situation in AI is analogous to, but much more difficult than, the situation in high energy physics during the 1950s and 1960s, when the "zoo" of strongly interacting particles grew to alarming proportions, and we explained what was going on only when the eightfold way and the quark model of hadrons were proposed. I wonder if there are elementary degrees of freedom underlying AI that are analogous to the quark (and gluon) DOF in hadrons.

I'll bet that the "elementary" DOF of AI involve the complicated (strong?) mutual interaction of many neurons, just as the "elementary" DOF in strong interactions are not actually elementary quarks but are composite entities built out of quarks (and gluons). I'll also bet that we won't guess what the "elementary" DOF of AI are by observing the behaviour of individual neurons (or even small sets of neurons), but we will postdict (rather than predict) these DOF after someone (luckily) observes interesting information processing happening in the collective behaviour of large sets of neurons, or if someone (even more luckily) has a deep insight into the theory of information processing in large networks of interacting processing units.

Sunday, January 07, 2007

Who is in control?

New Scientist ran a New Year competition in which you were invited to imagine that you were an alien who had recently arrived on Earth, and you had to send a short text message home describing what you found there. The winners have now been announced here, and my two favourites are:
  1. Arr. Earth. Dominant species "car". Colourful exoskeleton and bizarre reproduction via slave biped species. Aggressive but predictable. Intelligence uncertain. (from David Armstrong)
  2. Parallel evolution of intelligent life. One carbon based, one silicon based. Carbon form domesticated by silicon form to feed it with all its needs. (from Dennis Fox)

As you can see, my two winners have a common theme because they both ask "who is in control?".

I suspect that the message about the carbon/silicon hybrid is going to get a lot more serious as time goes on. There are people (such as Ray Kurzweil and Nick Bostrom) who make entire careers out of predicting where this sort of symbiotic man/machine hybrid will go.

Here is an entertaining little exercise. I was told about it many years ago, so I don't know its origin, but I have certainly seen a science fiction film (title unknown) in which a spaceship full of troopers is subjected to this "experiment". Think about what happens if your biological brain cells (i.e. neurons) are replaced one at a time by functionally equivalent artificial brain cells. At the start of the process you have your original biological intelligence, and at the end you have a functionally equivalent artificial intelligence.

It is tempting to say that the AI version of you isn't really you; after all, it is only a load of silicon (or whatever). However, the AI version is reached by a long series of tiny steps, in which only one neuron at a time is transformed. What would your subjective feeling be as each neuron was transformed in this way? By definition, there should be no subjective change, because each biological neuron is replaced by a functionally equivalent artificial neuron, so whatever it is that each neuron does, it does the same thing before and after the transformation into an artificial neuron. Thus, artificial you = biological you.

Of course, I slipped an assumption past you in the above "proof"; I assumed that "you" and "brain" are one and the same thing. This is the assumption made in Francis Crick's book The Astonishing Hypothesis, in which Crick claims "You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules". I think that until we have actually done the biological-to-artificial transformation experiment (or something like it) we cannot know for sure that there is no subjective difference between our biological and artificial intelligences.

I will not be offering myself for this experiment (even if we had the technology to do it), because there is too much to lose if (for some as yet unknown reason) our functionally equivalent artificial neurons are not actually functionally equivalent. Absence of evidence is not the same as evidence of absence, so just because we haven't observed something doesn't mean that that something does not exist. Neurons may (and probably do) communicate in ways that we do not yet suspect, and there may also be lots of things other than neurons involved in our "biological" intelligence. Mother Nature is always more imaginative than we are.

Anyway, none of that changes the truth contained in the message about the carbon/silicon hybrid. We are already carbon/silicon hybrids, because the everyday lives of a significant fraction of people on the planet depend on computer-based things going on in the background (and this relationship is reciprocal). This dependence is going to become more and more direct and intimate as time goes on.

Who is in control?

