8.26.2005
Posted by Cloud 9 in this shitty thread.
I basically used this as an opportunity to try to defend the view Melnick presented in class in a public arena. I took some liberties in extending it in places, and obviously in some places I had to fudge.
Thought is the pooling of action-guiding resources to attend to specific detections and information from the environment in order to plan courses of behavior. These action-guiding resources are perfectly general with respect to detections.
This is to be distinguished from purely 'instinctual' stimulus-response, which is detection-specific.
Abstract thought just depends on which action-guiding resources the creature has available. If the creature can have thoughts about thoughts, or thoughts about classes of objects, it increases the power and generality of those resources.
The most powerful of these action-guiding resources is language, which shifts thought from being a distributed, parallel analog process to a serialized, digital, quasi-logical process.
In case you were wondering, consciousness is the concentration of these action-guiding resources on a specific detection.
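Here's the contrast as a toy sketch in Python (everything here, the behaviors and the scoring, is made up for illustration and isn't part of the theory itself): the reflex maps one detection type to one fixed response, while the 'thinking' agent feeds any detections through the same pooled resources to sort through candidate behaviors with a goal in mind.

```python
# Toy contrast (made up for illustration, not part of the theory):
# a detection-specific reflex vs. pooled, general action-guiding resources.

def frog_reflex(detection):
    """Detection-specific: one detection type, one hard-wired response."""
    return "shoot tongue" if detection == "small dark object" else "do nothing"

def lioness_thought(detections, goal):
    """General: the same pooled resources take in any detections and sort
    through candidate behaviors with a particular goal in mind."""
    candidates = ["stalk", "chase", "freeze", "ignore"]

    def fits_goal(action):
        # Crude stand-in for the processing that mediates stimulus and response.
        score = 0
        if goal == "catch prey":
            if action == "stalk" and "prey far away" in detections:
                score += 2
            if action == "chase" and "prey close" in detections:
                score += 3
            if action == "freeze" and "prey looking this way" in detections:
                score += 4
        return score

    return max(candidates, key=fits_goal)
```

The point of the sketch: the reflex is a lookup, while the second function is a (trivially simple) mediation between detection and action, which is all 'thought' is supposed to be on this view.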
Cloud 9 said:
If I'm reading this correctly, what you're saying is that all living beings are genetically predisposed to do everything in their life, and that thought is just the road map kicking in?
I don't want to say anything about genetic predispositions, and it seems to me that some form of learning is needed to achieve the sort of novelty and generality required for the kind of action-guiding resources I am talking about.
I don't mean anything extremely sophisticated. Think about the lioness stalking her prey. This isn't pure stimulus-response (like the frog shooting out its tongue at any small dark object). The lioness is constantly updating information about her surroundings and her target, and adjusting her behavior to fit this information in order to best achieve her goals. Given the information she detects, there are several behaviors the lioness can perform in response; thought is the procedure for sorting through these possibilities with a particular goal or plan in mind.
A parallel example in humans: having to get across town, planning a route to get there, and being able to compensate for any detours or distractions along the way. That is the essence of thought.
Now, a lot of these action-guiding resources are probably genetic, and they definitely have to do with our physiology: they are functions the brain can perform. But as we go up the cognitive scale, these resources become more general and learned, and can be quite distant from the genetic starting point, as language is.
[after a derail about behaviorism]
I was looking forward to this opportunity to post my newest formulation of a theory of mind, and no one has challenged it at all.
I have to be totally honest and say I didn't understand it.
I want to post my latest formulation of a theory of conscious thought: consciousness is the human brain's sophisticated backup plan. It's an extremely general (but not completely general, as I think Eripsa's just claimed) problem-solving module. If we can't do it "without thinking," we do it consciously, which is usually not nearly as efficient. Eventually, after doing something consciously for long enough, it can become something we do without thinking. A perfect being would not be conscious.
I made it purposely confusing so someone would call me on it. It is actually my prof's view, and I just want to take it for a test run; the best way to test something is to practice defending it. It's not at all new in any of its particulars, but it does constitute a full-blown theory of mind.
Thought is a kind of procedure for planning actions and achieving goals. The brain has certain resources at its disposal for achieving these ends, which include body-centric things like learned skills (how to walk) and innate behaviors (how to move your legs), but can also include more powerful and all-purpose faculties, like attention to novelty, innovation, creativity, globality (big-picture or long-term thinking), and so on. This can also include language, on which reason and logical thinking piggyback. I called these collectively the 'action-guiding resources': tools, basically, that each contribute to the continuous adjustment of behavior to environment. So external stimulus gets fed into these resources, gets processed (or better, digested) by the brain, and this results in some action or behavior.
A lot of the resources I list above are rather indicative of 'higher' types of cognition, but I don't want this to be overly complex. Thought (or cognition) is just the adjustment of behavior to match environment, mediated by the processing of these various resources. So back to my lioness example: when the lioness is stalking her prey, she is constantly and carefully adjusting her behavior to optimally stalk her prey. It is the attention of action-guiding resources to specific detections that constitutes thought. Or you could say that stimulus results in action via the mediation of these cognitive resources.
That, I think, is pretty straightforward and uncontroversial, except in saying that that is all thought is. The main point, though, is to distinguish it from the pure stimulus-response of lower animals, which is unmediated. Responses in lower animals are detection-specific, and no general-purpose resources are put to use in processing those detections. They simply act upon detection.
The payoff of the view, though, is that it provides a theory of consciousness. Consciousness is just the concentration of these action-guiding resources on specific detections. The brain's resources can churn away on whatever stimuli they want, but when a certain stimulus becomes the focus of lots of these resources, it becomes conscious. I am still working on this part, and I have more to say, but I'll leave it there for now.
This sounds way too abstract, but there are some interesting consequences of the theory that have empirically testable conclusions. For instance, the above would explain why talking is almost always a conscious act. I don't mean just rambling on, or reading from a teleprompter; actually using language requires a lot of resources to be focused on this behavior, including motor resources, language resources, creative and novel resources, and so on. That is a testable hypothesis, but it also seems right.
edit: if by perfect you mean able to attend to all input at once, then on this view the perfect being would be totally conscious.
Well, the idea is that consciousness comes from attending to excess (unneeded) input. A "perfect" being is one that has an efficient module for everything. I got this from thinking about those savants that can do complex cube roots in their heads, and they can't tell us how they do it, if this helps you understand what I'm thinking.
Hmm. I am guessing the idea here is that consciousness is something of a power drain or crutch that is best avoided? I don't know; I haven't worked out quite what consciousness does to the benefit of an organism yet, so I really can't address this. It seems to me, however, that I am usually conscious of the thing I am working on. The crickets chirping outside as I write this don't make an appearance in consciousness unless I attend to or concentrate on them. But of course the sounds are still entering my ear canal, and are being processed by my brain, without any involvement of consciousness. Consciousness is a kind of focusing on a particular set of detections.
So the savants have extremely good cognitive resources without the ability to focus them or bring them into consciousness. But I don't understand how that implies that their modules are somehow more perfect than ours, or that the object of conscious concentration is somehow unnecessary or excessive. It certainly isn't necessary for cognition, but consciousness is derivative of cognition, not the other way around. In other words, it's not that the savant has somehow bypassed inessential wiring via some more 'perfect' shortcut, but that their resources simply can't organize and focus in the right way to form consciousness.
The Artificial Kid said:
Just last week I was thinking about the connection between consciousness and serial and symbolic computational processes in human thought. When you do things that are automatic or ingrained, and perhaps that are based on long-term changes to your neuronal network, it seems like they're more likely to happen unconsciously and in parallel, whereas when you sort through a recently learned list, for example, you tend to see the items and the search is serial. Serial search like that fits well with symbolic computational models of information processing, so perhaps when you're "thinking" through a task you're using a symbolic computational system built out of neurons, a system that has attention that can be directed to encoded items, whereas when you're doing something ingrained you're just producing a response based on a well-worn neural track, either for your conscious mind or for outside observers.
Would it be able to distinguish between rambling or teleprompter reading and "true" talking? It seems like rambling uses at least all three of the resources you mentioned to a certain degree, though the degree could be important.
On the other hand, it seems like the resources listed are a priori assumptions of some functional modules that exist, so I don't know how one would verify that they are being used if we don't know that they are in fact real entities.
edit: Specifically the "creative and novel" resources seem hard to measure. I'll grant that we can measure motor resources by the lips moving, but what are the criteria for "creative and novel?"
I am talking about processing resources for the creative and the novel, which isn't so intractable to detect.
But let's start from the beginning. You are right, it is a matter of degree. But that seems to fall directly out of the theory of cognition I gave before. Cognition is a mediation between stimulus and response, so the degree of involvement of cognition should be seen in the time between stimulus and response. I would guess that the degree can be tested through the time-delay experiments that are already extremely common in psychology. Of course, we have to be careful, because certain mediating processing could take place extremely rapidly (again, the calculations performed by the lioness take place in fractions of a second), but if we constrain ourselves to a particular domain, for instance language, we should get interesting results. I haven't thought about this much, but maybe some test where you flash people a picture and have them ramble off whatever words the image brings to mind without paying attention to grammar or sentences or anything, timing the delay between responses, and then flash the same picture to other subjects but have them engage in a discussion about the picture in full discursive language.
Similarly, novelty and creativity can be tested in similar ways. Novelty is just an ability to react to unexpected phenomena, and creativity is to some extent the ability to compensate for novel phenomena with the resources available at the time. If we are talking about language use, this manifests as an ability to relate to what someone else is saying even if it differs wildly from what you might expect them to say, and to attempt to respond to what they are saying in their words, or in words you might not normally use. I am making this up off the top of my head, but something like the $10,000 Pyramid game show in experiment form might tease these resources out.
In any case, though, I am talking about resources, which means tools, which means abilities and skills. So they aren't exactly entities, although they are wholly captured in particular brain configurations. But they are only manifested in action and in certain capacities for action.
So anything incapable of planning future behavior is not thinking, even though multiple action-guiding resources may pool together to guide its complex immediate behaviors?
Take the flatworm. If you shine a lamp near the little creature, its eyespots will detect the light, and depending on its current need for heat or cold, it will migrate toward or away from the lamp-lit waters. Yet the worm itself has no memory, nor a capacity for planning ahead. If the lamp is removed, it will cease moving, at least for the purposes of finding or avoiding lamp-lit water. Therefore, the flatworm's immediate and complex behavior only exists during its stimulation.
Would you consider the flatworm's behavior a purely instinctual stimulus-response, even though it is an example of two action-guiding resources--eyespots and temperature-detecting organelles--pooling to direct unplanned behavior?
This is to be distinguished from purely 'instinctual' stimulus-response, which is detection-specific.
I agree that single-stimulus, single-response actions are detection-specific, and do not require thought. However, I'm a little leery at the implication that complex behaviors caused by multiple stimuli are thoughtless, even if the individual stimuli--once divorced from any complex behaviors--are purely instinctual.
Abstract thought just depends on which action-guiding resources the creature has available.
I agree with this, though one wonders if these action-guiding resources you name are specific material things within the brain ("Here is the loop that makes a mammal think of its own thoughts!") or if they are merely useful abstractions of aspects of general behaviors in the brain. Fortunately, in either case the concept is still solid.
The most powerful of these action-guiding resources is language, which shifts thought from being a distributed, parallel analog process to a serialized, digital, quasi-logical process.
I don't know if the shift is as abrupt as you suggest. Isn't all abstraction simply the creation of a symbol loaded with semantic content? If so, then language could simply be a framework to communicate these abstractions coherently to another individual. Of course, I'm stepping into unknown territory here, so for now accept these initial thoughts as nothing more than mere speculation.
In case you were wondering, consciousness is the concentration of these action-guiding resources on a specific detection.
Now this is the only part I disagree with. I suspect, confessedly without much science to back this up, that consciousness is a specific function useful for organizing and planning in terms of high-level thoughts, and does not necessarily result from the concentration of action-guiding resources.
I only find weakness in my position when I try to imagine a 'hyper-instinctual' creature with the faculties of abstraction, self-referential thought, and language, but no consciousness to organize and create high-level thoughts.
By the way, high-level thoughts, at least in this post, are thoughts created and manipulated by consciousness.
That's an interesting case; thanks for bringing it to my attention. There are a couple of ways to respond, though I don't know the specifics about the worm. You might say that it does have a memory, though it isn't neurally implemented. Its memory is stored as its current temperature, and this is a sort of on/off switch that determines how it will react to the light.
The point of cognition isn't the planning part, and goals don't need to be explicit or represented. To say that cognition is for planning behavior is just to say that it is purposive. The flatworm's behavior is definitely purposive, even if it is too stupid to understand how its behavior relates to its goals. Similarly, the frog's behavior in shooting out its tongue is purposive. But there are no action-guiding resources mediating stimulus and response in the flatworm. Detection feeds directly into action for the flatworm, even if this is complicated by multiple detection systems. I mean, you might even hypothesize that this is how cognition came about in the first place: the detection information got so complicated that we needed more sophisticated ways of sorting through it in order to determine proper action. But the flatworm's detection systems aren't that complicated, and so there is no reason to dam up the flow from stimulus to response.
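To put the flatworm in the same toy terms as before (a made-up sketch, not real biology): the only 'memory' is the current body temperature, which acts as a switch routing detection straight into action, with nothing mediating in between.

```python
# Toy flatworm (a made-up sketch, not real biology): no mediating
# resources; the only 'memory' is the current body temperature, which
# acts as an on/off switch routing detection straight into action.

def flatworm_response(light_detected, body_temp, preferred_temp=20.0):
    if not light_detected:
        # remove the lamp and the behavior simply stops
        return "stop"
    if body_temp < preferred_temp:
        return "move toward light"   # needs heat
    return "move away from light"    # needs cold
```

Even with two detection systems (eyespots plus temperature) feeding in, there is no pool of general resources digesting the input, which is why on this view the behavior stays on the stimulus-response side of the line.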
However, I'm a little leery at the implication that complex behaviors caused by multiple stimuli are thoughtless, even if the individual stimuli--once divorced from any complex behaviors--are purely instinctual.
I understand cognition as a pool of (neural) resources that digest or process input stimulus prior to (or in determining) action. So whether or not thought is occurring isn't a matter of looking at the complexity of the behavior, or the complexity of the stimulus-detection systems, but of looking at the complexity of the processing that mediates the two.
Does this make sense? Like I said, this is my first time defending this view, and I am sort of winging it.
Fortunately, in either case the concept is still solid.
I think that's right, although I also think this suggests a whole lot of research avenues into consciousness that may prove to be fruitful. This is one of the reasons I am excited about the view.
I don't know if the shift is as abrupt as you suggest.
Oh, I agree, the shift was almost certainly gradual over many generations. Language didn't appear; it evolved, starting with a basic capacity for hoots and hollers and slowly building, via physiological changes in the brain and throat, and via the pressures of socialization, into full-blown language. I mean, logic didn't really kick in until we became farmers and started civilization and needed basic forms of arithmetic and geometry, though there were probably more primitive forms of language prior to that advancement.
Now this is the only part I disagree with.
The view of consciousness I am defending here is supposed to be an attempt at something like a 'workspace' view of consciousness, so I am not entirely disagreeing with you. However, I'm not so sure that consciousness is a function itself: consciousness doesn't seem to do anything at all, as the savant examples SRG brought up indicate. Consciousness doesn't plan or process anything, nor is it only related to higher-level thoughts. I am conscious of the color blue, for instance, and that seems to be a direct effect of my most basic sort of visual detection.
I mean, it is sort of a crude example, but consciousness is something like the desktop GUI of my computer. All the processing is taking place below the surface, but certain aspects of the computer's processing or internal structure can be brought up to the level of the desktop for all to see. The desktop is certainly helpful for organizing things, but the organization itself doesn't take place on the desktop; that all goes on underneath. Similarly, consciousness isn't doing anything itself, but it's used to pool the underlying cognitive resources and focus them on some particular information coming in.
This also explains why consciousness can be divided: I can be conscious of visual and auditory stimulation simultaneously, for instance, or any number of other input modalities, because the underlying resources are focused on both inputs. Sort of like I can have two folders open on my desktop simultaneously.
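The desktop analogy can be put as one more toy sketch (again entirely my own made-up illustration): every detection gets processed underneath, 'conscious' is just a label for whichever detections the pooled resources are currently focused on, and more than one can be in focus at once.

```python
# Toy 'workspace' sketch (a made-up illustration of the desktop analogy):
# all detections get processed underneath; 'conscious' just marks the
# detections the pooled resources are focused on, and several can be in
# focus at once (divided consciousness).

class Workspace:
    def __init__(self, resources):
        self.resources = resources   # e.g. ["motor", "language", "novelty"]
        self.focus = set()           # detections currently 'on the desktop'

    def attend(self, detection):
        self.focus.add(detection)

    def ignore(self, detection):
        self.focus.discard(detection)

    def process(self, detections):
        """Everything is processed; only focused detections count as conscious."""
        return {d: {"processed": True, "conscious": d in self.focus}
                for d in detections}

ws = Workspace(["motor", "language", "novelty"])
ws.attend("visual: this post")
ws.attend("auditory: music")
report = ws.process(["visual: this post", "auditory: music", "auditory: crickets"])
# the crickets are processed but never make it into consciousness
```

Notice that the Workspace class itself does no processing of its own; it only tracks focus, which matches the claim that consciousness doesn't do anything over and above the underlying cognition.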
So the important part, and really part two of my central thesis here (the first was about the nature of cognition), is that consciousness is not involved in the creation of thoughts, or the manipulation of thoughts, or anything else like that. That's the job of cognition. Consciousness is the result of the focusing of resources on detections that get sent to the cognitive processing. Consciousness is not simply higher-level thoughts, or thoughts about thoughts, or the brain scanning itself for activity, because none of these things explains why consciousness has an associated phenomenology and a perspective from which the objects of consciousness are perceived.
This is important, and has something to do with my reaction against functionalism earlier. Cognition is a kind of procedure or processing, so theoretically any one of the functions that cognition performs could be instantiated in any other system: a computer, or the nation of China, or whatever. But this alone isn't enough for consciousness. In other words, cognition is necessary, but not sufficient, for consciousness. Incidentally, a computer can perform all the functions you mention of a 'hyper-instinctual' creature: it can be as abstract (data structures of any size and scope) or self-referential (127.0.0.1) as we want it to be. But the computer doesn't occupy a point of view, there is no perspective from which the computer operates, and there is nothing it is like to be a computer. So the computer can think and perform all the procedures necessary for cognition, but it still lacks a necessary feature of consciousness. Without any perspective or point of view, there is nothing relative to which those cognitive resources can be focused or concentrated, and thus no consciousness.
Now the obvious question to ask is: what does it mean to say the computer doesn't have a perspective? Well, that's the other hard problem of consciousness. And I don't know the answer to that. But gimme a few weeks, and I should have something.