Fact and Fiction

Thoughts about a funny old world, and what is real, and what is not. Comments are welcome, but please keep them on topic.

Sunday, January 07, 2007

Who is in control?

New Scientist ran a New Year competition in which you were invited to imagine that you were an alien who had recently arrived on Earth, and you had to send a short text message home describing what you found there. The winners have now been announced here, and my two favourites are:
  1. Arr. Earth. Dominant species "car". Colourful exoskeleton and bizarre reproduction via slave biped species. Aggressive but predictable. Intelligence uncertain. (from David Armstrong)
  2. Parallel evolution of intelligent life. One carbon based, one silicon based. Carbon form domesticated by silicon form to feed it with all its needs. (from Dennis Fox)

As you can see, my two winners have a common theme because they both ask "who is in control?".

I suspect that the message about the carbon/silicon hybrid is going to get a lot more serious as time goes on. There are people (such as Ray Kurzweil and Nick Bostrom) who make entire careers out of predicting where this sort of symbiotic man/machine hybrid will go.

Here is an entertaining little exercise. I was told about it many years ago, so I don't know its origin, but I have certainly seen a science fiction film (title unknown) in which a spaceship full of troopers is subjected to this "experiment". Think about what happens if your biological brain cells (i.e. neurons) are replaced one at a time by functionally equivalent artificial brain cells. At the start of this process you have your original biological intelligence, and at the end you have a functionally equivalent artificial intelligence.

It is tempting to say that the AI version of you isn't really you; after all, it is only a load of silicon (or whatever). However, the AI version is reached by a long series of tiny steps, in each of which only one neuron is transformed. What would your subjective feeling be as each neuron was transformed in this way? By definition, there should be no subjective change, because each biological neuron is replaced by a functionally equivalent artificial neuron, so whatever it is that each neuron does, it does the same thing before and after the transformation into an artificial neuron. Thus, artificial you = biological you.

Of course, I slipped an assumption past you in the above "proof": I assumed that "you" and "brain" are one and the same thing. This is the assumption made in Francis Crick's book The Astonishing Hypothesis, in which Crick claims "You, your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules". I think that until we have actually done the biological-to-artificial transformation experiment (or something like it) we cannot know for sure that there is no subjective difference between our biological and artificial intelligences.

I will not be offering myself for this experiment (even if we had the technology to do it), because there is too much to lose if (for some as yet unknown reason) our functionally equivalent artificial neurons are not actually functionally equivalent. Absence of evidence is not the same as evidence of absence: just because we haven't observed something doesn't mean that it does not exist. Neurons may (and probably do) communicate in ways that we do not yet suspect, and there may also be lots of things other than neurons involved in our "biological" intelligence. Mother Nature is always more imaginative than we are.

Anyway, none of that changes the truth contained in the message about the carbon/silicon hybrid. We are already carbon/silicon hybrids, because the everyday lives of a significant fraction of people on the planet depend on computer-based things going on in the background (and this relationship is reciprocal). This dependence is going to become more and more direct and intimate as time goes on.

Who is in control?

Indeed!

Who is in control?

4 Comments:

At 9 January 2007 at 20:34, Blogger jj mollo said...

I believe it has been reliably shown that consciousness is just software. PET scans have shown that the consciousness functions in the brain are not triggered until after the supposedly "conscious" actions or choices have already taken place.

The problem with the neuron replacement scheme you have described is that the human brain was created by evolution. The mechanisms that make it work the way it does are the result of a billion years of broad-spectrum natural selection, not of engineering. For one thing, can a mechanical system be designed to change in the same way as a dynamic organic neural network?

It is likely that every level, from the atomic and molecular up to physical movement, is involved somehow in the sense of self and in thought processing. Unlike an engineer, who believes that simplicity/modularity is a virtue in and of itself, Evolution does not care whether a process is simple or complex. It just cares about performance. It may turn out that part of the thinking process is molecular, or carried by the blood flow, or dependent on Brownian motion. It is potentially complex beyond our ability to unravel.

Another question is whether the individual would notice if there were significant changes in the software. The software and the hardware change constantly. Learning takes place as well. We have also recently discovered that brain cells are not immutable. They grow and die and are replaced. Some brain injuries are repaired. The atoms and molecules themselves are used, recycled, discarded, replaced.

The recent proposal that Martian life might be based on hydrogen peroxide shows the problems aliens might have understanding life on Earth. Since we didn't know what to look for, we may have inadvertently destroyed evidence that was right before our eyes. One hopes that these hypothetical aliens aren't prone to similar mistakes. (This mistake also shows the difficulty of identifying every component of the thinking process. It's analogous to an alien life form.)

It's amusing to think that Carl Sagan's blimp-like entities of Jupiter might decide that aircraft are the dominant form. Arthur Clarke's helio-whales might relate only to the electrical grid.

 
At 10 January 2007 at 20:07, Blogger Stephen Luttrell said...

I think we are saying the same thing, i.e. it is very difficult to create an artificial neuron that is functionally equivalent to a biological neuron. As you say, it may be extraordinarily difficult, on a scale of difficulty that involves all levels of physics (and who knows what else), so we might never be able, even with advanced technology, to create "perfect" artificial neurons.

As for "consciousness", the problem I have with this is that I find it hard to imagine that an artificial machine could have the same subjective experiences as biological me. If it did then I would be logically forced into treating it as if it were human, and thus I would have to give it the same rights as a human. This seems to be distinctly odd to me, and maybe the problem goes away because it is impossible to create "perfect" artificial neurons.

On the other hand, if the artificial neurons were imperfect, then I wonder how much of our subjective experience might survive intact. How important is it that our neurons (and whatever else) work the particular way that they do?

Maybe it is my mere lack of familiarity with such machines (I assume I have never met one yet!) that limits my imagination. The result of the biological-to-artificial neuron transformation "experiment" that I described in my original posting is the nearest that I can come to imagining what it is like to be such a machine.

Even if we haven't got the technology to do this "experiment" right now, that doesn't stop us from asking ourselves what the consequences of the experiment might be. I don't think this is idle philosophical musing. It is a real problem.

 
At 12 January 2007 at 01:23, Blogger jj mollo said...

Blade Runner! We won't enjoy giving up our unique place to so-called automatons. What if they end up being more conscious than we are?

The neuron replacement example is actually a thought experiment designed to show you precisely what you've been talking about. When we get around to doing it, we will be using much cheaper methods, and it won't be for the purpose of preserving a particular individual.

 
At 12 January 2007 at 20:03, Blogger Stephen Luttrell said...

If it is true that the transformation from biological to functionally equivalent artificial neurons causes no subjective change to the individual concerned (and this isn't obvious a priori, for reasons I have already given), then I see no reason why we couldn't also enhance their cognitive abilities by "patching" their brain with some additional artificial neurons, and perhaps make them more conscious than they were before the transformation.

Of course, a corollary is that you could skip the use of a human brain to bootstrap the transformation process, and instead build the final "automaton" from scratch. Perhaps we might use a kind of molecular program (like DNA, for example) to do the basic construction of an artificial brain, and this might be very cheap to do if we knew how to design (or evolve) the molecular program.

If there are no no-go theorems in the way of this (e.g. consciousness and all that might turn out not to have a simple Crick-like neural activity interpretation), then this sort of technology will totally rewrite the "meaning of life". I think that this sort of technology will creep up on us in a beguiling way, and we will actually encourage each little step of this process. No wonder these ideas bother people who take them seriously.

Anyway, to respond to your Blade Runner jibe, my view is that (no-go theorems permitting) androids dream of the same sheep as we do.

 
