thinking is dangerous

Fair play

Keep the ball moving.

The list:

1) Nature and machines

1a) With Descartes, and all philosophers who worried about the determinism of the new science, mechanization came to be associated with natural processes: with the laws governing matter and the mindlessness of the animals. Man, in an effort to distance himself from the machine, was also distanced from nature itself. Thus the dualisms of mind over body, and of reason and intelligence over mere mechanical processing.

1b) The machine's position in relation to nature has shifted as our understanding of the natural world has grown. Philosophers are now by and large naturalists of some stripe or other, with few exceptions. And yet we still fear an alliance with the machine. Man is now natural, and the machine has become unnatural. The machine is the product of design; its rhythms carry the beat not of biological life but of function, technology, and modernity.

Corollary: On the naturalist view, the mental vs. material distinction is updated as a distinction between the natural and the designed. Although the naturalist is committed to the claim that a machine could in principle do everything a human can, because "humans just are such machines", the design distinction permits the naturalist to in fact draw a sharp distinction between what humans do and what a given machine does. That machine X can perform task Y is a reflection of its designer, and not of the nature of X itself. Thus, without sacrificing his commitments to naturalism, man can still keep a safe distance between himself and the machine.

Lemma: The problem of design runs much deeper than the debate over the place of machines in nature. The lamentable evolution 'debate' that occupies so much time and energy, even among those who otherwise have no philosophical or scientific stake, is an example of how deep and far this design distinction runs in our intuitive and common-sense distinctions. This is a conceptual problem with the notion of design, and it deserves serious philosophical analysis. But such analysis is outside the bounds of my project, at least for the moment.

1c) Our retreat back into nature has left the status of machines uncertain with respect to human activities. An ontological or metaphysical distinction between humans and machines has been abandoned through the embrace of the new sciences, but the social and normative impact and contributions of machines have remained outside the realm of a fully humanist and anthropocentric philosophy. Machines, if they are discussed at all, are relegated to the status of mere tools, built and ready for manipulation by humans to further exclusively human projects. The legitimate contributions of machines to our practices have been obscured by an endemic bias against machines, and this bias remains even after the enlightening touch of naturalistic philosophy and increased scientific understanding.

2) Turing's call for change.

2a) For as long as philosophers have attempted to distance themselves from the machine, there have been others who embraced the human's status as machine. Thus we have La Mettrie in 1748 urging us to "break the chain of your prejudices, [and] render to nature the honor she deserves" by "conclud[ing] boldly that man is a machine", and Putnam, over two centuries later, suggesting less boldly that "a Turing machine plus random elements is a reasonable model for the human brain". But such attempts at naturalization, though commendable, play into the same anthropocentric bias that motivates their opponents.

2b) Turing was the first, and by my count the only, philosopher to look beyond the superficial attempts at direct comparison or identification between man on the one hand and his increasingly complex and sophisticated technology on the other. The various analogies drawn between the human and the computer, and the similarities and dissimilarities between the two, only serve to distract from our central concern. As Turing put it:

It has for instance been shown that with certain logical systems there can be no machine which will distinguish provable formulae of the system from unprovable, i.e. that there is no test that the machine can apply which will divide propositions certainly into these two classes. Thus if a machine is made for this purpose it must in some cases fail to give an answer. On the other hand if a mathematician is confronted with such a problem he would search around and find new methods of proof, so that he ought to be able to reach a decision about any given formula. This would be the argument. Against it I would say that fair play must be given to the machine. Instead of it sometimes giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques. It is easy for us to regard these blunders as not counting and give him another chance, but the machine would probably be allowed no mercy. In other words, then, if a machine is expected to be infallible, it cannot also be intelligent.

Turing is, of course, not arguing that making the computer fallible somehow increases its intelligence. Rather, he is pointing out the absurdities inherent in such direct comparisons of the respective competencies and abilities of humans and machines. The gainsaying response to any particular machine's abilities, "Well, that's not how humans do it", is unfair to the machine, and overlooks its legitimate accomplishments and contributions.
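Turing's proposed trade, accepting occasional wrong answers in place of occasional silence, can be given a toy sketch in code. The names `bounded_halts` and `count_to` are my own illustration, not anything from Turing: a decider that only runs a computation for a fixed budget of steps always answers, but its "does not halt" verdict is a guess that can be a blunder.

```python
def bounded_halts(computation, budget):
    """Advance `computation` (a generator) at most `budget` steps.

    Returns True if the computation finished within the budget,
    False otherwise. The False answer is only a guess: the
    computation might have halted just past the budget. We have
    traded "sometimes no answer" for "occasional wrong answers".
    """
    for _ in range(budget):
        try:
            next(computation)
        except StopIteration:
            return True  # genuinely halted: a correct answer
    return False  # ran out of patience: possibly a blunder

def count_to(n):
    """A toy 'program' that halts after exactly n steps."""
    for i in range(n):
        yield i

print(bounded_halts(count_to(50), budget=100))    # True: correct
print(bounded_halts(count_to(1000), budget=100))  # False: a blunder, it does halt
```

The fair-play point is that the human mathematician's failed guesses with new proof techniques are exactly this kind of blunder, yet only the machine's blunders are held against it.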

Corollary: Ironically, the Turing test is often taken as an argument for the position that machines are intelligent when they can successfully imitate the behavior of humans, and thus Turing's own contribution to the AI literature is often misconstrued as merely reinforcing the dominant view. However, Turing offered his test not as a means of testing the capacity for mimicry in machines, but as a test of interlocution between machines and humans. The ability of humans and machines to converse, to play the same 'language game', as it were, rests on the assumption of fair play in assessing the game.

2c) I would like to take this idea of 'fair play for machines' seriously, and to evaluate the contributions machines make to our social and normative practices in a fair light. We should not be too quick to write off machines as merely passive, or worse inert, containers and tools for human interactions. Instead, we should be open to describing certain machines and artificial systems as agents, actively and interactively participating in our social practices, and as themselves contributing new dimensions to those practices. But we should likewise not be too quick to treat the structural similarities or dissimilarities between humans and machines as evidence for or against this participation. Google, a system that looks, acts, and responds in ways radically distinct from even the most strained human analogy, contributes a great deal to our practice of using words and phrases, and to locating the meanings, references, definitions, and interrelations between those words. In many ways, Google can be considered an expert with respect to the meanings of most words, both in principle and in fact. Google is also a competent interlocutor, demonstrating not only expertise but understanding of the language. And Google performs these tasks for the most part autonomously, or at least without direct human intervention.

Lemma: Google is just the most visible example of the so-called 'artificially intelligent agents' that inhabit our environment and play some role in our daily interactions. Other examples abound.

3) What I want to accomplish

3a) My project is at once constructive and deconstructive. It attempts to deconstruct the traditional view of machines, both in nature and in meaningful and normative human social interactions, and to recommend Turing's alternative approach to the state of play between humans and machines. But I also hope to modify and extend existing normative frameworks to allow Turing's assumption to bear fruit. This task is at once easy and difficult. It is easy in the sense that most viable philosophical positions today are extremely sympathetic to the general naturalistic framework, and extending it to the domain of active machines will not require any fundamental reshuffling. However, the bias against machines often appears in subtle ways, and picking out this detritus may prove difficult work.
16:18 :: :: eripsa :: permalink