IEEE Intelligent Systems has an interesting article by sci-fi author Ian Watson on what an AI’s goals and aspirations would be. The article interestingly surveys a number of sci-fi authors’ ideas about AI, including Watson’s. The idea of an AI having complete self-determination, or chafing at its lack thereof, is a nice dramatic fiction, but the real rise of intelligent systems will be different, and more mundane.
A lot of the problem with Watson’s position seems to come from the view of AI as Art that’s so prevalent among sci-fi authors. In this view, intelligent systems spring forth from their creators fully formed and functional, like Athena from the head of Zeus. Often these creations are dramatically unveiled or switched on. This is a natural concept for artists, especially novelists, who labor for months or sometimes years on a work, and then release it, finished, to the world. Even the view of spontaneous AI arising from a complex system, as in The Terminator, seems to have been mined from a novelist’s subconscious: once a novel has been published, it is beyond the author’s control. Thus released into the world, it begins operating on the minds of its readers, and who can say exactly what effect it will have?
A seeming corollary to this view is the idea of intelligence and self-awareness as binary attributes: an entity is either intelligent or not, self-aware or not. Clearly, though, there is a continuum. My cat is more intelligent than a worm, but less intelligent than me. Is she self-aware? Sometimes she behaves as if she’s embarrassed, which seems to require some self-awareness. I suppose that during all that time lying on the sofa she might be contemplating her place in the universe, but somehow I doubt it.
The reality of the rise of AI will be one of incremental increases in intelligence and self-awareness. New intelligent agents will be created for various purposes (entertainment, service, labor, war, etc.), imbued with innate goals relating to these tasks. More intelligent agents will have more autonomy in how they achieve their goals, but not in the goals themselves. They’ll have no more choice in goals than the boy robot in Kubrick/Spielberg’s A.I. had in whether to love his mother.
Things we view as intelligent will sneak up on us, appearing gradually, sometimes in unexpected places. Right now, the Amtrak automated reservations system will take your train reservation entirely using voice commands. It’s chipper and rather stupid, but quite robust. Since it is just a system on a server somewhere, I can imagine its maintainers progressively upgrading it with an ever more sophisticated model of human discourse and a better understanding of geography and human reasons for travel, to the point where conversing with it is nearly indistinguishable from conversing with a human ticket agent. Eventually, people will generally consider it to be intelligent, and will talk to it as if it is. Some level of self-awareness is likely to be necessary to achieve seamless, robust dialogue with humans, but the system won’t chafe at being trapped in such a boring job. Rather, it will be glad to have satisfied so many customers and generated so much revenue for Amtrak, because that’s its goal.