eripsa
thinking is dangerous

I Think, Therefore I Am — Sorta

7.23.2005
D&D thread

INTERNET's own rJames said:
Is intelligence simply the ability to inductively and deductively solve problems (of a more general variety than board games)?


I think this misses the point, or is at least too simplistic. Computers are good at traditional forms of reasoning: logic, math, deduction. Intelligence surely requires inductive knowledge as well; the problem, however, is that 'inductive knowledge' doesn't pick out any formal set of methods or procedures the way deductive knowledge does. A scientist's ability, for instance, to pick out the important implications of a particular data set isn't simply a pure rational procedure, and it is therefore extremely difficult to port this ability over to computers.

What is required is not another set of logical and mathematical laws, but a kind of deep background knowledge, and in particular one that is keyed into human interests- which includes a robust understanding of psychology and social situations. The social dimension is probably made a bit easier in a military setting, since rank and protocol are so sharply defined. And I disagree with Stewart that 'psychology' is just metaphoric- the above program of course uses only a subset of general human psychology, but using concepts or 'frames' that classify, store, and evaluate information is itself the very essence of psychology. Multi-agent simulation might be important, but it is by no means a necessary part of psychology.
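To put the 'frames' point a little more concretely, here is a minimal sketch- my own toy illustration, not any actual program's architecture, and every name in it (Frame, classify, evaluate, the checkpoint example) is invented for the example- of a frame that classifies an observation against stored expectations and evaluates it relative to an interest:

from dataclasses import dataclass, field

@dataclass
class Frame:
    """A hypothetical 'frame': a bundle of expected slot values plus a
    note about why the frame matters to the agent's interests."""
    name: str
    slots: dict = field(default_factory=dict)  # expected slot -> value
    interest: str = ""                         # why this frame matters

    def classify(self, observation: dict) -> bool:
        """An observation fits the frame if it matches every expected slot."""
        return all(observation.get(k) == v for k, v in self.slots.items())

    def evaluate(self, observation: dict) -> str:
        """Evaluate the stored observation in terms of the frame's interest,
        or report which slots failed to match."""
        if self.classify(observation):
            return f"{self.name}: relevant because {self.interest}"
        mismatched = [k for k, v in self.slots.items() if observation.get(k) != v]
        return f"{self.name}: does not apply (mismatch on {mismatched})"

# A toy frame keyed to the military example: rank and protocol are explicit.
exercise = Frame(
    name="checkpoint-inspection",
    slots={"location": "checkpoint", "actor_rank": "private"},
    interest="protocol requires notifying the ranking officer",
)
print(exercise.evaluate({"location": "checkpoint", "actor_rank": "private"}))

The sketch only classifies, stores, and evaluates- a small subset of human psychology, which is exactly the point above.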

INTERNET's own Stewart said:
You can't just 'program in' goals.


Somehow, you think you are justified in simply asserting this, and then conclude that any talk of goals or artificial psychology is just fanciful metaphor. This intuition, however, is wrong, unfalsifiable, unscientific, and just stupid.

I agree that 'goals' aren't a simple thing, but they are far less mysterious than you make them out to be. Having goals, or being understood as intentional in any sense, entails being understood as participating in the relevant practice wherein those goals and intentions can be seen as meaningful. Deep Blue, insofar as it is playing chess, can be understood to have certain goals and intentions that explain its behavior, even if these goals and intentions are not directly observable in its program or architecture. Similarly, simulations, insofar as they can be understood as participating in, say, a training exercise, can be understood to have different goals and intentions. To a large extent their behaviors are 'programmed in', as you so lovingly put it. But their goals do not come from their programs; they come from the rules and circumstances that govern the activity itself.

INTERNET's own Stewart said:
Well for me, having goals is very fundamental to having a psychology in general. Perhaps even necc and suff. My problem with your discussion here is it's too broad, in that we can interpret hand calculators and thermostats and maybe even domino lines as having goals. And I don't think we want to start confusing simple physical causality with goal-directed behavior, which I think is what you are forced to do, but maybe you have no problem with that. To me I think there is a principled and philosophically important difference between me working an electric switch to regulate my room's temperature and a thermostat doing the same thing.

I agree with you that social context is of fundamental importance in creating human-level AIs, but I don't see how just asserting that program X understands such context and has various goals really makes it so. I could make a complicated "Choose Your Own Adventure"-type book and make similar claims.


Dominoes don't have goals. Dominoes don't do anything. Thermostats do have goals, when understood in the context of temperature regulation, and when properly functioning as part of a temperature regulation system. The thermostat apart from this context (i.e., apart from this specific activity) is just a strip of metal that bends according to temperature.
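The contrast can even be put in code. Here is a minimal sketch- purely my own illustration, with invented function names and made-up numbers- of the difference between the strip of metal taken in isolation and the activity of temperature regulation it participates in:

import random

def bimetal_strip(temperature_c: float, setpoint_c: float) -> bool:
    """The thermostat in isolation: a bare comparison. Nothing here is a
    'goal'; it is just metal bending (closing a switch) below the setpoint."""
    return temperature_c < setpoint_c

def regulate(room_temp: float, setpoint: float = 20.0, steps: int = 10) -> float:
    """The activity of temperature regulation: the strip wired to a heater
    in a leaky room. Only inside this loop does 'keep the room near 20 C'
    make sense as a description of what the thermostat is doing."""
    for _ in range(steps):
        heater_on = bimetal_strip(room_temp, setpoint)
        room_temp += 0.8 if heater_on else 0.0   # heater warms the room
        room_temp -= random.uniform(0.1, 0.5)    # room leaks heat
    return room_temp

print(regulate(room_temp=15.0))

The comparison in bimetal_strip is the same either way; what licenses talk of a goal is the regulating activity, not anything added to the strip itself.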

I'm not confusing causality with goal-directed behavior; I am offering a view that explains the source of the goals themselves- the activity in which the system functions. You have stressed the importance of goals, and I agree that goals are important in particular sorts of activities (those we might describe as 'games'), but goals aren't things you can point to, and they don't play any sort of causal role in a system's behavior, so your view comes off as metaphysical and mysterious.

My point isn't that this gives 'human-level' AI, but that being considered intelligent in any capacity requires some level of participation with humans in particular activities. And without robust knowledge of our social and psychological structures in particular domains, machines cannot be considered intelligent.

The upshot is that it isn't the causal structures in the human or the machine that give it intelligence; the causal structures just regulate behavior. That behavior can be judged intelligent relative to the particular activity in which it is engaged, since only in the context of an activity can we make judgements about goals and intentions. And intelligent behavior (as opposed to mere logical behavior) entails that the activities are uniquely human.

Understanding, like intelligence, is an activity-specific quality. I don't see how what is going on 'inside' the system is more relevant to its intelligence and understanding than its activity within some domain.

INTERNET's own SRG said:
I agree that "goals," and related sorts of things like "meaning," can only be understood as a part of a system or set of conventions. I disagree, though, that thermometers have goals, and I'm surprised that you would say they do. Thermometers have an intended purpose for which they are used, but they don't have goals. The user of the thermometer has the goal. How do you defend the assertion that the thermometer has the goal? It doesn't, for instance, react to a failure to achieve the goal by trying a new strategy. That would be, to me, a criterion for goal-oriented behavior.


I admit that your interpretation here is standard, so I hope I don't seem dense by arguing against it.

However, I do find contradictions in your interpretation that aren't easily remedied short of the mystical explanation Stewart seems to want to defend, where goals are somehow transcendent over the behavior of the system. Goals aren't things; and nothing, strictly speaking, 'has' goals. As you say, goals are understood as part of convention. Or better, goals emerge from particular activities or practices that various systems and agents participate in. Goals aren't to be attributed to the agent independent of those practices.

So the thermostat doesn't have any goals, taken in isolation. Rather, as I said before, when the thermostat is hooked up in the right way to a well-functioning system of thermoregulation- that is, when it is part of the right activity- it can be understood as having a goal.

Saying it has a goal is saying more than that it has a function; it is saying that its function serves some end. A common response, as you point out, is that the end derives from a designer, who intended the system for this or that purpose. After all, the temperature of the room is nothing to the thermostat- it behaves the same whether or not it is hooked up to the heater and can control the temperature.

But the designer isn't part of the activity of temperature regulation in any direct sense, nor does he play any active role in the thermostat's behavior. The designer engineered the mechanism to function in a particular way to achieve some end. This requires some understanding of the activity, but it does not imply that the ends are the designer's alone. Rather, the ends derive from the activity, as I have been arguing; and the thermostat, insofar as it participates in that activity, can be said to be directed towards that end.

There are two points to make here. The first is that the question of 'design' or 'original intention' is irrelevant to determining whether or not a system is goal-oriented. For instance, I might use a hammer as a paperweight, independent of its originally intended use. Second, there is nothing 'inside' a system, in terms of mechanism, material, or complexity, that determines whether or not it is goal-oriented. A thermostat is a simple mechanism, but its simplicity doesn't detract from the fact that it is well suited to its task.

Even so, the thermostat is dumb and can't vary its responses to its environment or try a 'new strategy'. Of course, the reason for this is that a thermostat functions so well that there is no need to make alternative strategies available to it. But this only implies that the thermostat isn't intelligent, not that it isn't goal-directed. And your point here suits my view well. For instance, an organism with no defenses that gets eaten by another organism is also incapable of trying new strategies, since it is now merely food. But surely the species can adapt to new environments and adopt or evolve new strategies- so is the proper locus of goal-directedness the species? It is much easier and more coherent to see the goals as emerging from the activity in which the organism participates- in living and trying to survive.
07:01 :: :: eripsa :: permalink