9.24.2005

I just wanted to clarify something about my position. During the Goodman reading group Dave commented on how part of my project was to redefine the Turing Test in such a way as to recognize that machines already pass it. Dave suggested that this sort of philosophical analysis could be used to cash in on the Loebner Prize and other science awards and make a tidy profit in the name of philosophical progress. I say we try to defend a Leibnizian theodicy and get our hands on one of them Nobel Peace prizes.
Kidding aside, it would be absurd to say that machines currently pass the Turing test, understood in the conventional way. The Turing test as implemented in the Loebner contest is conceived as a test for linguistic competence, where being able to fool a human judge into believing that the computer is also human indicates a certain degree of intelligence. But machines clearly have no linguistic competence whatsoever. Machines still have a long way to go before they can be considered intelligent speakers.
It is worth noting, however, that the prize for this test doesn't go to the computer, but to the designers (as opposed to, for instance, dog shows, where the dog itself is considered the champion, and not its breeder). And this is the source of my objection to the traditional interpretation of the Turing test.
Turing originally conceived of the test as a game to be played by humans and computers, and the computer's intelligence is judged relative to how well it plays the game, both by the standards of the game and by the willingness of its human collaborator to attribute intelligence to it in playing the game. Turing thought that written language was sufficiently medium independent to be an objective determining factor in judging intelligence, but language use itself wasn't the focus of his imitation game. The point is more general: how well can computers act like humans? With some of our more complicated behavior, like language, the computers have a ways to go. With our more basic activities, computers are trotting along with us just fine, if not better.
Two points to make on this:
1) All that shit about the supposed 'singularity' is just stupid, because if machines do something radically different from us (even if it is in some sense 'better') we just won't consider it an intelligence anymore. In other words, an 'incomprehensible intelligence' is not intelligent at all.
2) Computers try to keep up with us, just as much as we try to keep up with them. In other words, there is no static thing that it is to "act like a human". Thus, humans and machines are inevitably bound together in symbiotic evolution.