The artilect war
I posted before on Human life: The next generation by Ray Kurzweil, which suggests that the rate of technological advance is such that it won't be long before we significantly upgrade humans to a better model. I disagree with that prediction, or at least I think that the time scale for changes of that sort will be quite long (i.e. many human lifetimes, at least).
In this week's New Scientist there is a letter, Cosy Kurzweil, whose author Hugo de Garis paints a far less rosy picture of our technological future than Kurzweil does. de Garis has written a book, The Artilect War (precis), about ARTIficial intelLECTs with massive intellectual powers, which could reasonably be expected to be created (or to create themselves from earlier prototypes?) sometime during the 21st century. [Update on 30 December 2008: The links in this paragraph are now dead. A PDF of The Artilect War is here. Hugo de Garis has a web page here.]
I strongly urge you to read The Artilect War. I find myself in a similar quandary to de Garis, who is working on building artificial brains yet is worried about the possible consequences of his research.
One incredibly naive course of action would be to unilaterally abandon research in the area of artificial brains. That would be as stupid as unilaterally abandoning defence research, where your vulnerability to the hostile actions of others would soon be your undoing. Similarly, in artificial brain research, you at least have to understand the potentialities of the technology in order to protect yourself against hostile uses of that technology against you. The only moral course of action is to continue the research.
Here is a worst case, and a best case:
- Worst case: Building artilects may turn out to be just another step in evolution, in which case we (i.e. humans) would ultimately be "viewed" (or whatever artilects do when they are "thinking") as just a stepping stone along their prior evolutionary path. In this scenario, the human species (as we know it) does not ultimately survive the appearance of artilects. Naturally, being humans, we don't exactly relish this prospect.
- Best case: An artilect would be an artificial brain-like prosthesis that would greatly enhance the abilities of its human "wearer". To use a present-day analogy, imagine what it would be like to have direct brain access to the internet, rather than having to type with your fingers at a keyboard and receive results through your eyes. Assuming the interface was designed so that you could use it efficiently just by thinking, you would be phenomenally knowledgeable. Now imagine upgrading this direct brain-internet access to include the ability to do massively more intelligent thinking (let's call it a "brain graft"), on top of the massively greater amount of information that you already had from your brain-internet access. What if you were able to program this "brain graft" just by thinking about it, so that you could offload some of the more tedious things that you now do laboriously with your existing biological brain (a trivial example would be mental arithmetic)? The possibilities of what you could do with a "brain graft" are endless.
Naturally, my vision is for a future that is something like the second case above, provided that the technology is used sympathetically. None of us wants to be like the Borg.
However, like de Garis, I am not optimistic about how different groups of people, using different levels of artilect technology, would smoothly interact with each other. This will be a big problem, which de Garis discusses as the "Terrans" versus "Cosmists" issue in The Artilect War. For instance, if they wished, groups of people could opt not to use this technology, much as some people currently opt to live low-technology lives in tipis, but the tipi dwellers don't always have a smooth time with their technology-using neighbours. This type of problem could be exacerbated by many orders of magnitude by an artilect-based technology.
What do we do to get from where we are now to the good future described above? The only sensible course of action is to continue research in the area of artificial brains, and to ensure that whatever technology is created is integrated sympathetically into our human framework. We have to be always on the look-out for potential instabilities, where small groups of people could create dangerous versions of the technology, and to protect ourselves against this. A contemporary example of this problem (on a trivial scale compared to artilect technology) is the fight against writers of assorted malware (e.g. software viruses). The "arms escalation" between the "good" and the "bad" guys ends up making the good guys much stronger, provided that they recognise early on that they are in a fight for survival.