There are lots of grand theories in science fiction, and that’s what makes it fun to read and watch. Some of those ideas even make it from fiction to reality, such as Arthur C Clarke’s prescient vision of geosynchronous satellite orbits. But one concept that came from science fiction and has become much discussed in less entertainment-oriented circles is The Singularity.
The concept is explained very nicely here, but in a nutshell it’s the point where computers get smart enough to design themselves, so they can begin to evolve without our input. From that moment forward, the argument goes, the human race will be obsolete. Except that the whole theory rests on a massive philosophical preconception.
The problem with theories about artificial intelligence is that almost all of them treat human beings as static machines, ones that will eventually be understood in the same way we now understand problems in the physical sciences. But we don’t know that humans are purely machines, even if parts of us clearly are. We could be. However, theories like that of The Singularity assume we are.
It all comes down to that infamous ancient Greek concept, hubris. We are arrogant enough to believe that the world is the way we see it. And it never is. No scientific theory is ever finished – there is always a bit more detail to fill in when you look a little closer. Underneath atoms, we found subatomic particles, and then quarks, and so on. The voyage of discovery never stops.
So if The Singularity ever does arrive, it will be based on an incomplete picture of what human intelligence is. That’s not to say that evolving machines won’t happen. But they won’t necessarily mean the whole human race is obsolete. Just as pocket calculators have meant we don’t need to know how to add up as well as we used to, future machines will increasingly enhance our faculties, without having to replace us completely.
In particular, the Turing test, so often quoted as the litmus test of artificial intelligence, is highly misleading. It holds that computers will have become humanly intelligent when they can fool humans into believing that they are. But that doesn’t necessarily mean computers will have become human, just that humans can be fooled.
To cut a long story short, talk of The Singularity sidelines whole areas of human activity which are still a very long way from being understood, such as creativity. It’s something I feel so strongly about that I’ve written a whole book arguing against such complete theories of human intelligence, Can Computers Create Art. Digital technology will continue to create some amazing things over the next few decades, many of which we simply cannot predict now. But human beings will still be playing a central role, albeit a dramatically enhanced one.
Monday, 27 April 2009