The artilect war
I posted before on Human life: The next generation by Ray Kurzweil, which suggests that the rate of technological advance is such that it won't be long before we significantly upgrade humans to a better model. I disagree with that prediction, or at least I think that the time scale for such changes will be quite long (many human lifetimes, at least).
In this week's New Scientist there is a letter, Cosy Kurzweil, whose author Hugo de Garis paints a far less rosy picture of our technological future than Kurzweil does. de Garis has written a book, The Artilect War (precis), about ARTIficial intelLECTs with massive intellectual powers, which could reasonably be expected to be created (or to create themselves from earlier prototypes?) sometime during the 21st century. [Update on 30 December 2008: The links in this paragraph are now dead. A PDF of The Artilect War is here. Hugo de Garis has a web page here.]
I strongly urge you to read The Artilect War. I find myself in a similar quandary to de Garis, who is working on building artificial brains yet is worried about the possible consequences of his research.
It would be incredibly naive to unilaterally abandon research in the area of artificial brains. That would be as stupid as unilaterally abandoning defence research, where your vulnerability to the hostile actions of others would soon be your undoing. Similarly, in artificial brain research, you at least have to understand the potentialities of the technology in order to protect yourself against hostile actions using such technology against you. The only moral course of action is to continue the research.
Here are a worst case and a best case:
- Worst case: Building artilects may turn out to be just another step in evolution, so that ultimately we (i.e. humans) would be "viewed" (or whatever artilects do when they are "thinking") as just a stepping stone along their prior evolutionary path. In this scenario, the human species (as we know it) does not ultimately survive the appearance of artilects. Naturally, being humans, we don't exactly relish this prospect.
- Best case: An artilect would be an artificial brain-like prosthesis that would greatly enhance the abilities of its human "wearer". To use a present-day analogy, imagine what it would be like to have direct brain access to the internet, rather than having to type with your fingers at a keyboard and receive results through your eyes. Assuming the interface was designed so that you could use it efficiently just by thinking, you would be phenomenally knowledgeable. Now imagine upgrading this direct brain-internet access to include the ability to do massively more intelligent thinking (let's call it a "brain graft"), on top of the massively greater amount of information you would already have from your brain-internet access. What if you were able to program this "brain graft" just by thinking about it, so that you could offload some of the more tedious things that you now do laboriously with your existing biological brain (a trivial example would be mental arithmetic)? The possibilities of what you could do with a "brain graft" are endless.
Naturally, my vision is for a future that is something like the second case above, provided that the technology is used sympathetically. None of us wants to be like the Borg.
However, like de Garis, I am not optimistic about how different groups of people, using different levels of artilect technology, would smoothly interact with each other. This will be a big problem, which de Garis discusses as the "Terrans" versus "Cosmists" issue in The Artilect War. For instance, if they wished, groups of people could opt not to use this technology, much as some people currently opt to live low-technology lives in tipis, but the tipi dwellers don't always have a smooth time with their technology-using neighbours. This type of problem could be exacerbated by many orders of magnitude by artilect-based technology.
What do we do to get from where we are now to the vision of the good future described above? The only sensible course of action is to continue research in the area of artificial brains, and to ensure that whatever technology is created is integrated sympathetically into our human framework. We always have to be on the look-out for potential instabilities, where small groups of people can create dangerous versions of the technology, and to protect ourselves against this. A contemporary example of this problem (on a trivial scale compared to artilect technology) is the fight against writers of assorted malware (e.g. software viruses). The "arms escalation" between the "good" and the "bad" guys ends up making the good guys much stronger, provided that they recognise early on that they are in a fight for survival.
5 Comments:
One will view Terminators 1-3 in a different light from now on. Perhaps the most affecting of these was T2, in which a techno boffin, at the top of his profession, chose self-immolation to frustrate the super brains, aided by a machine that was in touch, if not with its feminine, then at least with its human side. Save for seeing the Gubernator have his head shoved through a urinal (and the wall behind it), T3 hasn't got that much going for it though.
I seem to remember that the boffin in Terminator 2 was mortally wounded when he decided to sacrifice himself; oh well, it was a small but significant gesture. Anyway, it makes you wonder whether artilect technology should be moderated by having a human in the loop, done in such a way that the artilect is subordinate to the human. However, I don't see how to get around the problem of "rogue" organisations using artilect technology in ways that circumvent this safety feature.
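To make the "human in the loop" idea a bit more concrete, here is a toy sketch in Python (entirely my own illustration, not anything from de Garis; all the names are invented): the artilect can only propose actions, and a human must explicitly approve each one before anything executes.

```python
# A toy "human in the loop" gate, purely illustrative: the artilect may only
# propose actions; a human must approve each one before it is carried out.

def propose_actions():
    # Stand-in for whatever an artilect might want to do.
    return ["reindex the archive", "rewrite my own control code"]

def human_approves(action):
    # The human is the gatekeeper; nothing happens without an explicit "y".
    reply = input(f"Allow artilect to '{action}'? [y/N] ")
    return reply.strip().lower() == "y"

for action in propose_actions():
    if human_approves(action):
        print(f"Executing: {action}")
    else:
        print(f"Vetoed: {action}")
```

Of course, a sketch like this does nothing against the rogue organisation that simply deletes the approval step, which is exactly the circumvention problem mentioned above.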
Another fine post, Steve. Adds a whole new dimension to the old phrase, "may you live in interesting times." Here's hoping that humanity's cleverness and ingenuity haven't so rapidly outstripped our advancement beyond our primordial aggression that we doom ourselves. Though I think that aggression itself plays a key role in discovery, and it's silly to think that the whole complex that is human behavior can be neatly unraveled for inspection. Cheers.
What a crock. _The Artilect War_ is completely nonsensical. In the first place, we already have processing systems that are billions of times more powerful than a human. A single human didn't put a man on the moon -- it was a group effort.
Secondly, a single machine that is trillions of trillions of trillions of times more powerful than a human is going to take another 120 years to build -- if we can speed up the doubling time to once per year. We aren't going to be fighting about that sort of machine this century.
Maybe in another 30 years, 2040, the amount of artificial processing will be comparable to the amount of human processing. That's 20 years *after* we drop our circuits down to being a few atoms wide.
Thanks for drawing my attention back to this posting of mine. I have fixed some of the broken links.
Team effort, rather than individual effort, certainly gives you a gain factor, but it is not in the billions. I would have thought that the number of team members gives you an upper bound on the gain factor, and that a more realistic (and much lower) bound would take account of the need for intra-team communications and general "red tape".
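As a purely illustrative toy model (my own back-of-envelope, with an invented overhead parameter, not anything rigorous): suppose each of n team members loses a fixed fraction c of their working time for every teammate they have to coordinate with. Then the effective gain is bounded above by n, peaks at roughly n = 1/(2c), and eventually collapses under its own red tape:

```python
def team_gain(n, c=0.01):
    # Each member loses a fraction c of their time per teammate they must
    # coordinate with, so per-member productivity is max(0, 1 - c*(n - 1)).
    return n * max(0.0, 1.0 - c * (n - 1))

for n in [1, 10, 50, 100, 1000]:
    print(n, round(team_gain(n), 1))  # gain peaks near n = 50 for c = 0.01
```

Nothing here is calibrated to reality; the point is only that communication overhead keeps the realistic gain far below the team-size bound, let alone "billions".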
As for how long it is going to take to achieve the necessary increase in processing speed, you don’t need to have 10^36 times the bit-flipping speed of the human brain in order to potentially have massive intelligence. From the human’s point of view, an artificial intelligence that is “merely” 1000 times faster than himself will have a "massive" intelligence.
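To put rough numbers on this (a back-of-envelope sketch, using the previous commenter's assumption of one doubling per year): a performance ratio of 10^36 needs log2(10^36) ≈ 120 doublings, which is where the 120-year figure comes from, whereas a factor of 1000 needs only about 10.

```python
import math

def years_to_reach(ratio, doublings_per_year=1.0):
    # Doublings needed to reach the target ratio, divided by the yearly rate.
    return math.log2(ratio) / doublings_per_year

print(years_to_reach(1e36))  # ~119.6 years: the 10^36 machine
print(years_to_reach(1e3))   # ~10.0 years: a "mere" 1000x speed-up
```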
Of course, all this depends not only on us being able to construct (directly or indirectly) entities that have the required fast processing speed, but also on us knowing how to use the bit-flips in these entities to usefully implement artificial intelligence. I think processing speed is the “easy” problem (because it can be achieved in bite-sized increments of progress), whereas artificial intelligence is the “hard” problem (because real progress probably depends on some deep and yet-to-be-achieved insights).
The current research on reverse engineering the brain in order to implement simulations of it on massively parallel computers will no doubt be interesting, but will not necessarily lead to a fundamental understanding of brain-like processing. My gut feeling is that we need a more abstract approach than focussing on the details of the processing that goes on in the brain. This might allow us to get at “B-theory” (or “brain theory”), whatever that is, but progress of that sort generally needs deep thinking rather than massive computer simulations, though appropriate simulations can help.