10.12.2004

The initial motivation for this line of inquiry came from a comment made on one of my in-class questions from the rule-following course. I was talking about a computer hard drive light flickering on and off as it reads data, as an example of rule-governed behavior designed to communicate the internal state of the machine- a form of rudimentary language use. However, what I took as an innocuous example was in fact deeply contentious: “Computers are not normally understood as language users.”
Why not? I had (naively) taken my undergraduate training in computer science as learning ways of using computer languages to get machines to behave in certain ways. ‘Programming’ seemed to require an understanding of computers as users of a very precise, formal language. The theme of the rule following course, however, indicated that this (syntactic) understanding of language was not enough to get computers to speak natural (human) languages. What was missing was a way to build semantics into the computer’s language. However, this wasn’t as simple as merely positing the meanings of the terms. What was required, in fact, was to have these computers enter into human shared linguistic practices.
There is a long history in CS and Psychology of attempting to get computers to speak human languages. These attempts stem from an understanding of the brain as itself a computer, which contains certain symbolic representations that it manipulates computationally (syntactically) in certain ways. This manipulation is supposed to account for the entirety of thought- both how we think about things, and how we generate the language we use to describe these thoughts. The idea is that, if we can find the set of mental representations and the rules that govern their manipulation, then we will have a complete theory of the mind. The hope, which for the most part has since been abandoned, was that once these mental structures and rules were found, it would be a simple exercise to import them into a universal computer, and we would instantly have artificial intelligence. The reason these attempts have largely been abandoned stems directly from the inadequacies of a representational view of the mind. However, this general theory has largely remained intact, and the faults in AI have mostly been attributed to the particular representations used. The representational view itself is rarely attacked. When it is, the attacks are usually simply dismissed or ignored.
I see, at this early stage, three parts of a research project in this area. The first stage will involve an analysis of the representational view of the mind, and whether this can give an adequate understanding of both the process of thought and the meaning of language. My intent is to read these theories as plausibly as possible; however, my intuitions are far from optimistic. The second part will be an analysis of the different attempts to get beyond the representational paradigm. These views, referred to generally as ‘embodied cognition’, attempt to locate thought not within a representational structure, but within a larger context- that is, within a world. It seems to me that these views make the possibility of entering computers into our linguistic practices plausible in a way the representational theories do not. The third part is more speculative, and one I have admittedly not thought much about. If the most plausible theory of the mind requires locating thought not in isolation but within a world, then understanding computers as part of our linguistic community requires a new understanding of the nature of a computer’s computation as itself within a world. This section would analyze our relationship with computers, which in some ways is already more robust and intimate than our relationship with any other living creature. This new understanding would attempt to break some of the theoretical barriers to understanding computers and humans as part of a single community, and to allow linguistic interaction between the two to become a real possibility.