eripsa
thinking is dangerous

What is thought II

8.30.2005

Part 2 of this conversation. I get argued into a corner, and ultimately lose to SRG, even though his approach and view are wrong. But I have since discovered more of Melnick's view, and I believe I am now able to respond to his criticism. So stay tuned, faithful readers.


Mishkinman said
Since you've been doing your own pedantic philosophy lessons, I feel OK telling you that you should know not to reword your declarations, lest you inanely throw in an "is the result of" that you might not mean. Do you mean that it is a result of the focusing, or that it is the focusing itself?

It seems that the theory you're defending is very similar to Noë and O'Regan's enactive approach to thought and perception (http://www.imprint.co.uk/pdf/NOE.PDF and http://ist-socrates.berkeley.edu/~noe/oregan.noe.pdf). They just replace your "action-guiding resources" with the slightly more intimidating "sensorimotor contingencies." I don't mean to butcher it (it's late) but, briefly, it holds that experience/consciousness arises in an agent through and because of its interactions with its environment. Perception here is not something that occurs inside of the animal, but is what the animal enacts as it explores the environment in which it is situated. This is as vague as any other theory of mind, but it at least seems like a nice stepping stone for your theory on action-guidance.

What I like about these newer theories coming out is that behind the scenes they are founded on what always seemed to me to be a very commonsensical statement: that consciousness will not arise without sensation. I know not everyone believes this, but damn if I don't wish for times when ethics and decency wouldn't stand in the way of sticking some newborn in a sensory deprivation chamber and fMRI-ing him for life.


I'm sorry for being pedantic, I hope it doesn't come across as condescending. The object of my pedantry isn't really you guys but myself; like I said, I'm taking this for a test drive by subjecting it to the collective wisdom of my fellow philosophy goons.

But thanks for pointing out the inconsistency, though I think it is only an apparent one. The problem here is that I haven't broached the subject of the structure of consciousness, which is much thornier territory. But this gives me an opportunity to do that.

Let me take a running start at this by going back and recapping the view. Any theory of consciousness will need at least an account of the following three features:

1) The function of consciousness
This is what consciousness is from a 3rd person or scientific perspective. The function of consciousness tells us where to look for consciousness. My account of this is that consciousness is the concentration of action-guiding resources, so to look for consciousness we need to look at where the brain pools its various resources to attend to specific detections. These resources, considered alone, constitute cognition; so it would also be correct to say that consciousness is the concentration of cognitive resources, but where the emphasis is on action (where this view meets Noë's). Again, cognition is necessary but not sufficient for consciousness.

2) The quality of consciousness
Any plausible (that is, non-functionalist) account of consciousness will need to provide some explanation for why consciousness looks or feels the way it does from the inside (otherwise, you just aren't talking about consciousness). It is consciousness from the first-person perspective. This is the qualia problem, or Chalmers's hard problem (or Fodor's impossible problem). It comes to us from the Aussies, and the big problem is how to squeeze it into the functional account above, under the assumption that if we can't then consciousness must be non-physical. The problem here is that almost no one in the analytic tradition takes seriously the following issue:

3) The Structure of consciousness
This is the stuff that Sartre and Merleau-Ponty spend inordinate amounts of time on, under the guise of phenomenological ontology. The continental tradition here is hard to swallow for us analytics, though, since they don't seem to take the problem of the physicality of consciousness seriously, as if we have transcended the problem or can simply ignore it without giving an account of how anything like an 'inside' or perspective is possible in a physical universe. The structure of consciousness isn't quite a description from the 3rd person, and not quite from the 1st.

But there are some things we can say about the structure of consciousness. Consciousness always has an act -> object structure. The act of consciousness is the act of concentrating resources, and this act is directed at (or attends to) its object. So it isn't that my experience itself is blue, but my experience is of a blue object, or concentrated on the blue background, or whatever. The act of consciousness itself is never complete -- it must be of something, it must have an object. So, for instance, you can't be conscious of nothing.

In other words, the act of focusing is only one half of a duality that is central to consciousness.

Again, I haven't broached the subject of how this structure admits a perspective, and how such a perspective is possible in the physical world. But I don't have answers to those problems yet.

SRG said
I'm going to do a reply about the definition of thought later, but I just want to say something about the so-called "hard problem" of consciousness. What is it that makes me have a subjective experience? You've admitted that, using computation, we can in principle explain everything about what makes me have a subjective experience from a third-person perspective. That is, we can explain what makes me talk and act and, in fact, think like a being with a subjective experience (as far as we can tell from an fMRI scan), and we can build a robot that will talk and act and think just like me, including claiming to have a subjective experience.

Yes, but that's just from an outside perspective, right? What about the phenomenology or the qualia or the intentionality or some other word that's a different word depending on who you ask. In other words, what about the part that we all know is substantial about the subjective experience? The part that isn't just an abstract process, but that is there?

In other words, what makes my universe exist? I think this is a very similar question to the one, "What makes the universe exist?" In other words, why is there anything? Why isn't there just nothing? I asked this question when I was a child. "Well maybe God created the universe." "Yes, but why is there a God? Why isn't there just nothing?" Nobody could answer me, and eventually I stopped thinking about it. But enough about that. Here's a thought experiment. I've posted something like this several times, but I've never heard a satisfying response.

Imagine the algorithm for your brain instantiated by the nation of China. The nation of China is going to do exactly what your brain does, on a functional level. The being they create will be able to answer questions, and it will always answer them exactly as you do. In fact, it won't know it's not you.

So we ask it: "Do you have a subjective experience/phenomenology/qualia/soul/whatever?" And, being functionally equivalent to you, it answers, "Yes, of course I do." Three possibilities here:

(A) It really does have the thing it claims to have.

(B) It's purposely lying about having the thing it claims to have. But this can't be right, because it does whatever you would do in the same situation, and in this situation you wouldn't purposely lie. You wouldn't even be in a position of having to purposely lie. You would just tell the truth.

(C) It's not purposely lying, but it's mistaken. But this implies that it's possible to be mistaken about this sort of thing. In other words, you might not have a subjective experience either. You might just think so because the architecture of your brain gives you that illusion. This is like John Searle claiming, "Computers don't have beliefs. They just believe they do."

A possible reply might be, okay, maybe the thing instantiated by the nation of China really does have subjective experiences. But even so, those can't just be the result of abstract computation. Maybe abstract computation just gives rise to them. Subjectivity is epiphenomenal, in other words.

But that's nonsense, because it's not the epiphenomenon that's answering the question. The answer to the question is entirely determined by the abstract computation itself. There's no way the computer (brain) can even know it's producing epiphenomenal subjectivity unless that subjectivity is acting on the brain in some way -- which would be strict metaphysical dualism. Otherwise, any reasoning the computer uses to claim it knows it's producing epiphenomenal subjectivity must be fallacious.

It simply must be true that computation is sufficient to produce subjectivity, to whatever extent subjectivity really exists. It cannot be merely necessary.

So either the whole question you're trying to ask must just be nonsense, or mere abstract computation must be something more real than you think it is.


There is a lot to talk about with this example, and I don't have much time. However, I will say that I think you are begging the question right here for epiphenomenalism, and it directly stems from an assumption of functionalism. Why wouldn't the China system answer no? After all, what makes me want to answer yes is the fact that I am conscious, not the underlying functions that constitute my cognitive process. And the plain fact of the matter is that the sum of the people in China simply doesn't have any unified perspective from which it makes sense to answer the question in the affirmative. They don't even have a distributed perspective, except in a kind of socio-historical sense, which is mostly beside the point of the analogy.

I am not arguing for epiphenomenalism, and I insist that consciousness is directly causally interacting with the world. And part of that interaction would entail responding to questions of 'Are you conscious?' with a yes. It is not cognition that is the justification for my assent, but consciousness itself.

In other words, zombies aren't really a possibility; and conversely, if computers start telling us about their conscious experiences, perhaps we should believe them.

SRG said
I'm only begging the question in the sense that I assume your brain can be described as instantiating a certain algorithm. Even Searle admits this is so. And if it is so, anything that instantiates the same algorithm will do the same thing, because that's what an algorithm is. An algorithm for answering "yes" when asked "Are you conscious?" cannot answer "no" when asked "Are you conscious?" This is the basis for the thought experiment.
what makes me want to answer yes is the fact that I am conscious, not the underlying functions that constitute my cognitive process.

My point is precisely this: that it is both, and one can't possibly be separated from the other. If you admit that your brain instantiates a certain algorithm, you admit you answer yes because of the algorithm. If you admit you are conscious, you admit you answer yes because you are conscious. These are two true descriptions of the same event. You can't remove the algorithm without removing the consciousness, and you can't remove the consciousness without removing the algorithm.

Imagine I flick a light switch and the light goes on. It's true that the light went on because I flicked the light switch and it's true that the light went on because the electrical circuitry works a certain way. The two descriptions don't contradict each other -- they are equivalent. The first is a high-level description of the same event for which the second is a low-level description.

I'm glad you realize zombies aren't possible, but then I'm confused as to how you can claim a functional description isn't a complete description of consciousness.

I agree that the brain can be described as instantiating a certain algorithm, in the sense that cognition, in my definition, is algorithmic -- or better, procedural. It is computations that result in action, and not just calculations floating in the void.

However, you are absolutely wrong that Searle would say that consciousness, in the full sense that includes phenomenology and understanding, is just an algorithm. Searle's whole point with the Chinese room argument, and the point of the instantiation of the neural structure of the brain by the population of China, is to show that algorithms are insufficient for consciousness. The whole intuition behind the examples is that it is incoherent to say the entire population of China is conscious, or that the guy in the Chinese room understands Chinese, so the algorithms the system performs aren't enough for consciousness. These are arguments against functionalism, or, depending on your view, against the very idea of consciousness, not arguments for the compatibility of functionalism and consciousness.

Listen, cognition is just a bunch of procedures instantiated in the brain. But the result of those procedures functionally depends on the input to the system. So you might imagine an oversimplified example, where you ask "Are you conscious?" to the China system, and the system checks its "consciousness flag", finds that it isn't conscious, and answers no; whereas the exact same procedure carried out in a human would answer yes, since the flag has been switched. Similarly, a computer instantiating the functioning of the brain could easily answer no to the question. My point is that a yes or no answer to the question doesn't just depend on the functional makeup of the brain, but on whether or not it is true that the system is conscious. You are claiming this latter fact is irrelevant to the way the system answers the question, and that is just absurd.
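
To make the oversimplified example concrete, here is a minimal sketch of the point as I intend it (the "conscious_flag" is purely hypothetical, of course; nobody thinks there is literally such a flag in the brain):

```python
# Minimal sketch of the hypothetical "consciousness flag" example.
# The procedure is identical in both cases; what differs is a fact about
# the system that the procedure reads as input.

def answer_are_you_conscious(system_state: dict) -> str:
    """The same procedure, run by any instantiation of the algorithm."""
    return "yes" if system_state.get("conscious_flag") else "no"

human_state = {"conscious_flag": True}   # hypothetical: the flag is set
china_state = {"conscious_flag": False}  # hypothetical: the flag is not set

print(answer_are_you_conscious(human_state))  # -> yes
print(answer_are_you_conscious(china_state))  # -> no
```

The point of the sketch is only that the output depends on what is true of the system being asked, not on the procedure considered in the abstract.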

My point is precisely this: that it is both, and one can't possibly be separated from the other.

This is precisely where we disagree, and it is because you can't see any way for consciousness to operate besides epiphenomenally. I agree that the functioning of the brain is necessary for consciousness, but as I said it isn't sufficient. I answer yes to the question not because the function of my brain says so, but because I am conscious. The problem here is that you think the dependency relation is reciprocal, and I am denying this claim. Cognition is necessary for consciousness, but not sufficient. Other systems can instantiate the same functions as the brain and not be conscious. Consciousness is cognition plus something else -- that something else being the 1st person perspective that gives consciousness a feel or quality.

You seem to think that cognition and consciousness are mutually dependent, but I don't know why you assume this. Consciousness depends on cognition, but not the other way around. I should note that, for instance, Ned Block's view holds that neither is dependent: there can be cognition without consciousness (the blockhead, or China), and there can be consciousness without cognition. I am denying both your view and Block's, and setting the dependency relation so that it is distinctly lopsided in the opposite direction.

However, this shouldn't be confused with epiphenomenalism, because I insist that consciousness is physical (specifically, it is embodied), and that consciousness has definite effects on the world. In particular, it affects how I respond to the question "Are you conscious?" If I weren't conscious, I wouldn't respond with a yes.

There are two kinds of zombie cases here. One is the 'zombie' made by instantiating the exact functioning of the brain on a computer. It is a zombie because it isn't conscious, even though the functional decomposition is the same. But it wouldn't answer 'yes' to the question "are you conscious?"

The other kind of 'zombie' is the exact replica of a human, in swampman fashion, except that somehow it isn't conscious. This is the case I don't think we can make sense of.

SRG said
I'd like to point out that you've sharply differed from John Searle, in that he claims he would deny a program is conscious, even if it claims it is and describes its conscious experience. He says he would assume a thing has intentionality if it seems like it has it, but only in the absence of other information. Once he found out it was "just" instantiating some program, he would deny it had intentionality.

And yet, he admits we can be described that way.

John Searle posted:

"OK, but could a digital computer think?"
If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is yes, of course, yes, since we are the instantiations of any number of computer programs, and we can think.


Many people have offered him very good reasons why the man instantiating the program would not be aware of speaking Chinese -- the understanding of Chinese occurs in a subsystem that the man cannot access consciously. I made this argument at one point, and I'm pretty sure you agreed with it. Searle, on the other hand, fails to understand it and rejects it.

He claims that the reason we know other people can think is that their brains are made of the same stuff as ours. And he claims that it's possible some other material can give rise to thought, and that this is an empirical question. But it's clearly not an empirical question. He will not accept behavior as a criterion, and he offers no other criteria.

Now, let me restate the case. Tell me if you agree with these three statements y/n.

1. Everything about your behavior is physical, including your claiming to be conscious. (metaphysical dualists disagree here)
2. Every physical process can be described by some algorithm (or procedure). (Roger Penrose disagrees here)
3. Algorithms are substrate independent, and will behave the same regardless of what system instantiates them. (nobody can disagree here, because it's true by definition)

It follows from 1, 2 and 3 that I can implement a certain algorithm on any kind of computer, and that computer will do everything you do, including claim (truthfully?) to be conscious.

You say there might be some kind of a consciousness flag, but come on. Is there really a consciousness flag in your brain, and if we turned it off you would no longer be conscious? Even if there is one, all we have to do is leave it on, and the computer will still answer yes. I really don't know what you're trying to get at with this argument.


Functionalism cannot provide a full account of consciousness -- it is, as was said earlier in the thread, a behaviorist account of cognition, which pays no heed to what occurs inside the system, or from the perspective of the system. So all we can do is describe consciousness from the 3rd person perspective. And from this perspective, consciousness is algorithmic in nature. As I said in my first post, consciousness is the organization of cognitive resources to concentrate on some particular detections. As an algorithm, a computer can do all of this.

Where the functionalist goes wrong is to say that this provides a complete account of consciousness. Functionalism lacks any account of the 1st person perspective of consciousness. The computer can instantiate all the functions of the brain, but the computer lacks a point of view from which the concentration of those resources can be described as having a quality of feel. And this is why the computer is not conscious.

The upshot is that having a point of view is not an algorithm or procedure. It is, one might say, a position or stance one takes towards the cognitive procedures operating in the brain. Again, think of your desktop GUI. Is it an algorithm? Well, lots of algorithms and processes go into generating and organizing the display. The act of displaying it is itself also just a series of procedures the computer performs. But the GUI itself -- the thing you interact with on your monitor -- is not an algorithm. It is an interface, and it is only there when it is displayed on a monitor and looks like something. If the display were removed from the computer, the computer could still run all the algorithms necessary to instantiate a GUI, but there just wouldn't be one, because there isn't anything about those processes that is being displayed (i.e., that looks like anything).
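
As a rough sketch of the analogy (using a made-up render/present split, not any real graphics API), the frame-generating procedures can keep running whether or not there is anything attached on which the result is displayed:

```python
# Rough sketch of the GUI analogy. The compute step stands in for all the
# algorithms that generate and organize the display; the present step only
# makes something actually appear when a monitor is attached.

def compute_frame(t: int) -> list:
    # Generate a small grid of pixel values (the "display algorithms").
    return [[(x + y + t) % 256 for x in range(8)] for y in range(4)]

def present(frame: list, monitor_attached: bool) -> None:
    # Only with a monitor attached does the frame look like anything.
    if monitor_attached:
        for row in frame:
            print(" ".join(f"{v:3d}" for v in row))

for t in range(3):
    frame = compute_frame(t)                # the processing runs regardless
    present(frame, monitor_attached=False)  # but nothing is being displayed
```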

I hope this analogy is clear, because it shows why your questions only make sense from a functionalist perspective. So the 3rd person view of consciousness as function is analogous to the algorithms and processes instantiated by the computer in implementing a GUI. But the 1st person perspective of consciousness is analogous to the actual displaying of those processes. And this isn't discernible from simply looking at the computer's processing. You need that processing to be displayed before it looks like anything.

This lends itself to a perfect analogy to your "Are you conscious?" question. When you are adjusting the display information on the computer, the computer might ask "Can you see the display?" On your functionalist account, any time the right display algorithms are running, the answer to this question is invariably 'yes', even when the user can't see the display. This is something of an inverse zombie case, I suppose.

But that's not right. Either you can see the display or you can't, and that is independent of the computer's processing ability to generate the display. None of this implies that seeing the display is somehow epiphenomenal on the underlying processes, or some meta-process, or that displays are non-physical; it is just another kind of thing entirely. But also notice that 1) the computer can continue performing those procedures whether or not anything is being displayed, and 2) nothing can be displayed unless the computer is running those algorithms. So neither 1 nor 2 implies that whenever the computer is running those algorithms, something is being displayed.

Also, it is incorrect to say that the display is just an epiphenomenon of the processing with no causal consequences for the processes, because it is directly responsible for the kind of input the computer will be getting in return. The display organizes things for the user, which in turn alters the kind of input the computer gets. So there are direct causal consequences in generating a display.


On your view, the computer is displaying something whenever it is running its display algorithms, whether or not it is hooked up to a monitor. On Block's view, displays can be occurring on the monitor whether or not there is any underlying computer generating those displays. Neither of these views sounds plausible. To display an interface, you need to have the underlying processing, but having that processing doesn't entail anything is being displayed. And having a display isn't just to have a certain kind of processing (though special processing abilities are required in order to have a display at all), but is a different kind of thing altogether.

Zoolooman said
I understand, Eripsa. To reference an old example, the nation of China may process information like the human brain, but it has none of the important elements that make up a unified point of view.

So I ask, what physical elements make up the human point of view? Would those elements include our unified sensory structure, our brain functions for motion, our brain functions for verbalization, and any other action-guiding resource that we call into focus to aid us in our interactions with the external world?

To build an artificial point of view for a computer, would we have to give our computer the capability for unified interactivity with the outside world?

I hadn't thought of that before, but I think you are exactly right. Notice, though, that this implies there isn't any particular physical property you can 'give' a computer to make it conscious, even though consciousness is physical. What you need to do is embed it in a body that can interact with an environment. But that's not a physical thing at all -- it is a relational thing, where relations aren't a physical property of any particular thing but of interactions among groups of things. It is, in my terminology, to position it within the act -> object structure, so that its algorithmic procedure of concentrating resources can be directed at its perceptual input from an environment.

So that's right, meaning that I think that is the right extension of the view. But put like that, I'm not so sure it really works. For instance, you can get a computer with relatively simple procedures to interact with a block world, so not only are there underlying 'cognitive' resources, but they are focused on aspects of that simulated environment. But I'm not sure that just this simple embeddedness is sufficient for full-blown consciousness. In other words, it might not be just the unified interactivity with an environment that makes something conscious. It might very well be that it is only unified interactivity, given the particular cognitive resources of brains, that admits of consciousness.

That would imply that my criticism of SRG's functionalism is right, but the distinctions I am drawing just collapse because any instantiation of the functional resources of the brain would necessarily be embedded in some world (simulated or otherwise). No one has any interest in designing a computer with an exact functional replica of a mind, only to keep it trapped in a black box with no 'sensory' input. In other words, it is a theoretical possibility, but as a matter of fact would just never actually happen.

Curiously, if we wanted to test the theory and build an environmentally detached full brain replica, there would be no way to go inside and ask if it was conscious, since that would itself be a source of input, which would, on the theory, spontaneously generate consciousness by giving the cognitive resources an object to focus on. This leads to the rather bizarre conclusion that consciousness is always present in systems functionally equivalent to the brain whenever there is an interlocutor. That is, the computer wouldn't be conscious unless you ask it 'are you conscious', in which case it would answer yes, and then go right back to not being conscious (of course, it would also be conscious whenever you ask it any other question too). That seems to be a logical consequence of this line of thought, but I really don't know what to make of it.

SRG said
So you're just arguing for an externalist definition of intentionality? If so, I'll agree to that. And in fact I've already agreed to that. I'll add that this is basically the robot reply to Searle's Chinese room thought experiment.

Let me restate the claim then:

The human brain instantiates some algorithm. Consciousness is completely described by that algorithm, combined with the fact of that algorithm being instantiated in the world.

Or: consciousness is two parts. Part one: cognition (computation, or the algorithm). Part two: intentionality (the way the conscious system is causally connected to the world, such that it can be claimed to be about something)

By these criteria, we can say that we know for a fact that all humans are conscious. We can also say we know for a fact that any robot emulating a human brain at full speed and interacting with the world as humans do is conscious, in the same sense as a human.

Is this satisfying to you?

edit: Looks like it pretty much is!


It's not an externalist definition of intentionality, because I am also insisting that the embeddedness in the world correlate with some perspective or point of view. In other words, I am stressing the importance of a perspective from which that embeddedness would have some quality. The externalist definition of intentionality doesn't account for the quality of that directedness from the first-person perspective. In other words, externalism isn't enough; you need an internalist account as well.

All your externalist attempts here have glossed over the internalist aspect of consciousness, because I think you secretly don't believe anything like that exists. I sympathize, mainly because I don't yet know how to account for this aspect of the theory either. However, I can insist it is necessary, since the very idea of 'focus' or 'concentration' requires some perspective relative to which something can count as focused. Of course, you can give an externalist account of focus by locating the source of the perspective, but this neglects the quality of the focusing as experienced from within the perspective. No externalist position will be able to account for this, but it is essential for any complete theory of consciousness.

SRG said:
Why is there a point of view? Why is there a universe? Why is there anything? What is the sound of one hand clapping? Unanswerable Zen questions!

Wittgenstein solved the mind-body problem. "What we cannot speak about we must pass over in silence."

That's just capitulation. This is an obvious phenomenon of the universe, and as such it demands explanation.

SRG said:
No, let me elaborate. Every description of everything is "externalist." It isn't anything special about consciousness that makes it impossible to give a full "internalist" account of it. Try to give a truly internalist description of the most mundane thing. Describe for me a duck. No, don't tell me what the duck looks like. You're describing the duck as an object. I want a description of the duck. Don't tell me how it interacts with the world. Don't call it a bird or an animal -- that just puts it in a conceptual category, which only tells me how it relates to other concepts. I don't want just a relational description of the duck. I don't want just an externalist, third-person description of the duck. I want to know about... the duck!

This can't be done. Describe consciousness without describing anything it does. Describe the universe without describing any of its parts. It can't be done. The "point of view" exists, as surely as the universe exists, but you can't say anything about it.


No, private language arguments won't work here, because I am not arguing that there is anything unique to any particular subjective experience of consciousness. In fact, I am assuming that the quality of experience is shared across the board, since we have the same basic detection mechanisms, same functional architecture, and so on. The quality of blue experiences is a phenomenon we all experience, and is shared and public.

Furthermore, I am describing parts of consciousness -- its dependency on cognition, its basic dualistic act -> object structure, and so on. None of this is private or beyond words.

I'm surprised of all people that YOU would throw Wittgenstein at me.

Edit: and I'm not asking for a description of consciousness independent of anything else, like a description of the duck independent of any of its properties. I am saying that, if the duck is conscious, then there is something that it is like to be that duck, and I am asking for an explanation for how there could be such a thing.

edit 2: "What is gravity?"
"Its when stuff falls down"
"No, I dont want a description of the effects of gravity, I want to know what gravity itself is"
"PRIVTAE LANGUAGE WE CANT TALK ABOUT THAT"

edit 3: To be less flippant, the private language argument would work if I were to ask you to describe what the experience feels like, or what your subjective point of view looks like. I agree that this can't be talked about.

But Wittgenstein doesn't have any specific problem with the ontology of private languages; his point is just that if such a thing exists we can't say anything about its content. But that doesn't prevent us from saying interesting things about its structure, its relations to cognitive resources, or the necessary and sufficient conditions for its existence.

SRG said:
Is there something it's like to be the duck?

The language here is slippery. What is it like to be the duck? Normally, whenever one talks about what it's like to be something, one is talking about what it would be like if he were that something. "What is it like to be president of the United States?" In other words, what would it be like if I were president? That's a question I can ask. Can I also ask, what is it like for George W. Bush to be president? What kind of a question is that? It's like this. Because he is president.

What is it like for whom to be the duck? Do you mean, what would it be like for you to be the duck? Stick some feathers on yourself, jump in a pond and find out.

What is it like for the duck to be the duck? What is that supposed to mean? What is it like for a rock to be a rock? What is it like for the number 5 to be the number 5?

What would it be like for you to be the duck? How about asking, what would it be like for the duck to be you? What would it be like for a rock to be a cloud? What would it be like for the number 5 to be the number 4?

Are you really sure the question you're asking makes sense? You certainly aren't using the phrase "something it is like" in a way that resembles the way it's used in plain language. So how are you using it?

No, I don't think it is "like anything" for a duck to be a duck, or for you to be you, or for me to be me, except that it's like this. So you're trying to figure out, how can it be like this? This is exactly like the question "Why is there anything?" In fact, I think it is that question.

Sorry to get all mystical and Wittgensteinian on you, but there's no other way to talk about this thing. And now I'm sleeping.

I'm going to write some more, because I don't know if I was clear. Can I ask, what is it like to be George W. Bush? Yes, sure. I can ask, what if I were president? What if I were from Texas? What if I were a social conservative? What if I had a wife named Laura? What if I were currently located inside the White House (assuming that's where he is)? This is empathy, and all humans can do it. I'm "putting myself in his place" as it were. But what is it like to be George W. Bush himself? What could I mean by that? What if I actually were George W. Bush?

We can see this question is nonsense when we ask it about any person other than "I." What if George W Bush were me? What if George W Bush were Madonna? The last sounds like a Saturday Night Live sketch. The best we can do is imagine him in a wig and singing one of her songs. In other words, we must imagine GWB still obviously being himself, but acting very much like Madonna. What if I wrote an SNL sketch about GWB actually being Madonna, completely? It wouldn't be very funny, because it would just be about Madonna. It wouldn't be about GWB in any sense.

What is it like for a rock to be a rock? It's like this, because a rock is a rock. What is it like for a rock to be a cloud? It isn't like anything, because a rock isn't a cloud. What if the number 5 were an elevator? That would sure be weird, huh? I guess the elevator would have to become the new number 5 then, because otherwise there wouldn't be any number 5 anymore, and that would cause a lot of problems.

It only makes sense to ask "What would it be like if I were a duck?" because I think I have a soul, and I would still in some sense be myself if I were a duck. This is false.

Now, am I denying things have points of view? Certainly not! It's dualists that can never be sure things other than themselves have points of view. I know beyond any doubt. For instance, you talk about your point of view -- ipso facto you have one. A duck clearly has one too, of a rudimentary sort. It treats itself differently than it treats anything else in the world, including another duck. This is all we must observe to conclude it has a point of view.

But why does the duck have a point of view? Why do you have a point of view? This too can be answered. We'll hook your brain up to some machine that can read every neuron-fire. Then we'll ask you to think about your soul and even say the words "I have a soul." Then we'll print out the data we've collected. There. We have a full explanation of your soul.

Of course, the explanation doesn't help us much. For instance, if I have data on every elementary particle in a chair, that doesn't help me figure out what the chair is shaped like or what its texture is, or anything else. If I were smart enough, I would be able to infer that information from the data. But I'm not, so really the statement "It's a chair" tells me a lot more than a "full" explanation of the chair. Similarly, the statement "Eripsa is conscious" tells me a lot more than a "full" explanation of Eripsa's consciousness. That doesn't imply that your consciousness can't be reduced to brain activity. And just like with the chair, I assume that if I had godlike intelligence I would understand how it is that we get consciousness from a brain. Since I don't, though, I'll just have to assume it works, since it clearly does.

And maybe studying the brain will help us understand, intuitively, how we get consciousness, to some extent. But only if we keep focused on the question that's a real question, and not on the question "Why is it like this?"

The word "consciousness" is not very well defined. The only thing I can be sure is conscious is a human being (or a robot that perfectly emulates a human being). Autistics and other humans with abnormal brains are in the same place as animals, in that I can't say for sure if they're conscious. But this isn't because of anything in particular about autistics or animals, but only because I don't know just what consciousness is. I'm only sure that normal humans are conscious because it's true by definition. Normal humans are our prototype for what the word "conscious" means.

I will say that autistics, lions, and even ants have "points of view," in some way or another. This is because there are things they know and things they don't know. The only thing that doesn't have a point of view is something like a rock, which doesn't know anything, or possibly some omniscient being that knows everything.

By the way, here's a good statement of my theory, which I was just trying to explain in the last few posts.

People are always trying to doubt the existence of other minds, or they're trying to talk about some unique thing that "I know I have it, but I can only infer other people have it." They sometimes call this thing "consciousness." Sometimes it's a fancier name, like "qualia" or "intentionality." Sometimes it's something more to the point like, "something it is like to be me" or "a point of view." However, no matter how one phrases the question, one seems able either to give an external, public description of the thing that's supposedly private (in the case of consciousness or intentionality) or reduce the question to nonsense (in the case of "something it is like").

So what is the real question people are trying to ask when they ask all these other questions? My theory is the question is simply, "Why am I myself?" "Why am I me, and why am I not someone else, and why am I the only one that's me?"

This is either an unanswerable zen question or just plain silly (if there's any difference between those two things). Because who else could you be, and who else could be you? This formidable so-called mind-body problem reduces to our inability to comprehend a simple identity. X = X.

And Eripsa seems not able to deny that I'm right, because he's stopped responding.


I haven't responded, partly because I have been busy, but partly because I am practicing what you preached.

But seriously, this is just capitulation. You are just plain wrong to say that I am merely talking about identity, and that all consciousness is understood via comparisons to my identity.

The problem here is that you don't think consciousness exists. It's as simple as that. You think once we have given a perfect functional description of the brain we have explained everything there is to know about the brain. And, if consciousness exists in the brain, we have explained that too. Admittedly, this puts the onus on me to say what it is you haven't explained, and I haven't done an adequate job yet. But we are whittling it down.

What I am talking about has nothing to do with identity, and surely nothing to do with identity with myself. In fact, I am entirely willing to concede that the 'self' is just another object in the world that consciousness interacts with, and that we can have murky epistemological relations to.

Furthermore, what I am talking about has nothing to do with epistemology, which is where I think you are getting tripped up. I am not asking 'how can I know I am conscious', or 'how can I know you are conscious', or 'how can I know what your consciousness feels like', or anything about our knowledge or our ability to describe the phenomenal quality of consciousness. This is the substance of Jackson's Mary cases, so people do ask such questions, but I think such questions are a bit silly, personally, and in any case can be addressed by private language arguments.

In fact, I am going to venture to say that what I am talking about has very little to do with the mind/body problem as such. The mind/body problem arose mostly due to advances in mathematics and physics by Galileo and Descartes and Newton, and is roughly the question: how is it that I am capable of complex reason and intelligence (particularly mathematical intelligence) when I am simply matter governed by the same physical laws that govern the movement of every other object in the universe? I mean, even Descartes, the great dualist, believed that the body and the brain were entirely governed by physical laws. He just couldn't imagine how complex behavior like language and mathematics could arise from such simple stuff. But the advent of a systematic logic by Frege and Russell in the late 19th and early 20th centuries, and the subsequent development of computers in the '40s and '50s, solved this problem. We could now construct systems that behaved intelligently, so there was no longer a question of how physical matter could behave logically.

So what's left for consciousness? Well, we are missing a couple of things. First, we are missing purposive behavior. Computers can perform any function, but they don't do it for any purpose, or to fulfill any goal, or to achieve any end. They do it because they are designed to do it. Living organisms, on the other hand, have gone through a process of evolution, and so are by nature goal-oriented. Machines can fulfill goals, or satisfy goals, but they can't pursue goals. Or at least, if they can, it isn't simply in virtue of their functional architecture, but because of their intended purpose (which isn't properly theirs), or because of their design history (which may or may not be 'theirs', for instance in cases of evolutionary robotics).

The important point is that you can't give a functional or behavioral explanation of purposive behavior. Purposive behavior isn't just what the organism does but why or how it does it. That's not something you can quantify in the lab, but that doesn't mean it is just not worth talking about. But to talk about it, you have to tell a much, much larger story about the organism's phylo- and onto-genetic history, and its changing relations to a changing environment. This is what evolutionary biology gives us, and that is quite scientific and rigorous, not some mystical thing that we should refrain from talking about.

But purposive behavior itself isn't sufficient for consciousness. Purposive behavior, in my story, is just the basis of cognition. It is also the basis for stimulus-response, but cognition adds to the simple mechanism a kind of mediation, where further processing can occur and better deliberation can be made with respect to the appropriate courses of action.

Ok, so we have cognition. But I admit that you can describe cognition in terms of functional decomposition, even if that description doesn't give you the vocabulary of purposive behavior. So what's left? Well, we are missing something like intentionality, or directedness at an environment. Cognition in functional terms could just as easily be running its algorithms in the void, where its processing has no content and isn't about anything. But that's not what animals do. We are embedded in an environment, and directed towards our environment. And this gives our cognitive resources content in addition to mere functional form. We don't just think; we think about things.

The computer, in contrast, never thinks about anything, simply because it isn't embedded in an environment that can give its calculations content (this isn't quite right, since the computer does seem to interact with a simulated environment which it partly constitutes, but that's a much harder problem, and one we should ignore for the moment). In other words, the reason we can get any meaning out of the computer is because it can trade in symbols that have meaning for us, but that can be manipulated without regard to their meaning. The computer can do math without understanding what the symbols '1' and '2' are, it can display poetry without understanding what 'love' means, and so on.

This is the underlying reason why Searle presents his Chinese room, after all. He is trying to deal with the prospect of a machine doing manipulations to language by treating it purely as symbols without regard to their content. He doesn't deny that such a thing can be done, but he says that if it is done, the machine doesn't understand what it is doing, and hence lacks 'consciousness'. However, Searle isn't talking about consciousness in my sense; he is just talking about cognition. Cognition requires content, and to have content you have to be related to the world. And such relations cannot be captured by mere functional analysis, since the functions would be the same whether or not the machine is embedded in the world.

But that doesn't give us consciousness quite yet. All we have so far is the structure of cognition: that it is purposive, and that it has content. Neither of these properties is explainable on a functionalist or behaviorist view, so I've already left you far, far behind. But we're not done. So what's consciousness?

As I said, consciousness is the focusing of cognitive resources on particular detections. So let's unpack this nice and slow. An organism is going along, having certain goals and detecting certain things in its environment. And these detections get fed into the organism's cognitive resources, in order for the organism to adjust its behavior to fit its environment and better achieve its goals. In certain circumstances, particular detections can become the focus of a whole set of cognitive resources, so that they work in unison to attend to those detections. Depending on the set of cognitive resources, and the circumstances of the detection, this act of concentration can result in consciousness.

Consciousness is not always a beneficial thing, as you already pointed out with the savant case. A better case might be the expert tennis player, who can just react to the ball "without thinking about it". His cognitive resources are lightning quick, of course, but his detections don't need to be attended to by consciousness. His body already knows what to do.

But let's step back. What have I added here? Focusing can't be understood as just another function or cognitive resource, but it is a way of harmonizing those resources and concentrating them on a detection. This gives consciousness its act->object structure. The act of consciousness is just the synchronization of resources, but to say those resources are 'focused' is to imply a perspective relative to which something can be understood as 'focused'. In other words, consciousness can only be understood from a position situated in such a way that it can pick out detections on which to focus. So certainly the computer has computing resources, and can focus those resources on a particular task, but the computer is not situated in the proper way with respect to the content of its computations, and doesn't have the proper cognitive resources, such that its act of focusing gets taken up in consciousness.

I've already left your position behind, but this is where you jump ship and swim back. You don't think there is anything like consciousness, so when I start talking about perspectives, you throw out some mysticism or reduction to logical relations like identity, and it all looks confusing relative to your nice, neat functionalism. So you conclude the whole project is messed up and give up and remain silent. But that's just capitulation in the face of the daunting task of discovering the nature of consciousness. Which is especially unscientific, given both the overwhelming evidence we have for its existence, and our familiarity with its quality. This data cannot be brushed off so easily.

SRG said
I'm honestly completely lost as to what you're saying. For one thing, you seem to be making the point that you can't say something is conscious unless it has an environment to be conscious of. I've already agreed with that. I'm with you that far. Now this:

Eripsa posted:
This gives consciousness its act->object structure. The act of consciousness is just the synchronization of resources, but to say those resources are 'focused' is to imply a perspective relative to which something can be understood as 'focused'. In other words, consciousness can only be understood from a position situated in such a way that it can pick out detections on which to focus.


I'm sorry, I just can't parse this at all. I'm honestly trying, but I can't. Let me ask a few questions.

When you talk about "the computer," what computer are you talking about? A specific computer currently in existence? Or do you mean a computer that's perfectly emulating your brain?

I am utterly baffled as to what your position on strong AI is. On one hand, you claim it might be possible to make a computer conscious. You are quite insistent that this is an empirical question -- that there are behavioral consequences to consciousness. For instance, only a program that is conscious will be able to claim it's conscious and describe its conscious experiences. You differ sharply here from John Searle, who claims a computer absolutely cannot be conscious, and that there is no way to tell by outward behavior whether something is conscious or not. For instance, he imagines that you can slowly replace my brain with microchips, and the scientists doing it will think everything is fine, because I will keep acting the exact same way, and I will keep claiming to be conscious, and describing my conscious experiences, but on the inside I will be slowly dying, and I'll be unable to say so. Do I understand correctly that you deny this is possible?

If I do understand you correctly, you think there's nothing in principle to stop us from making a computer conscious. And yet -- and yet -- you deny functionalism, and as such deny that consciousness has anything to do with an algorithm. I cannot express how confusing this is to me. What am I supposed to put into the computer that's going to make it conscious, if not an algorithm? Is it just vision and hearing sensors, and maybe some legs, so it can interact with the real world? Fine. I'll grant that. Now it's a robot instead of just a computer. So now is it conscious? This is such a small divergence from the orthodoxy of functionalism I can't imagine why you'd make such a big deal of it. Especially since I've already granted the point. And, as I said, the point was granted by others long ago -- this is just the robot reply to Searle's Chinese room idea.

Eripsa posted:
Machines can fulfill goals, or satisfy goals, but they can't pursue goals. Or at least, if they can, it isn't simply in virtue of their functional architecture, but because of their intended purpose (which isn't properly theirs), or because of their design history (which may or may not be 'theirs', for instance in cases of evolutionary robotics).


Here you seem to be talking about something like Davidson's swamp man thought experiment, which thought experiment, I'll grant, is very much like everything you've said in your post here, in that it confuses me and I have no idea what he's getting at, even though I think I agree with his ideas for the most part. The idea seems to be that my causal history has something to do with the meaning of what I do right now. I can't think of any reason to say that's the case. For instance, is it true that you would say that a robot that came from some kind of an evolutionary process would be conscious, while another robot that was made "in one go" by a conscious human engineer would not be? Even if the two robots were identical? I have no idea why anyone would say this. And since you think consciousness is observable, am I also to understand that you think the first robot would claim to be conscious, and the second one would not? Why?

Finally, I have no idea why you keep claiming I don't believe in consciousness. I do believe in it. All I've said is that I believe it's (1) physical and (2) public. In fact, because it's both physical and public, I have a way of being absolutely sure that other people are conscious, which epiphenomenalists and people like John Searle lack. I'd say my consciousness is on firmer ontological grounds than most.

edit:

A couple more comments. Here's something you posted before.

Eripsa posted:
This leads to the rather bizarre conclusion that consciousness is always present in systems functionally equivalent to the brain whenever there is an interlocutor.


Here you seem to be agreeing with the robot idea. That is, consciousness = computation + environment. Even though I'm willing to provisionally say this might be an okay definition, the problem of sensory deprivation comes to mind. Am I to understand that if I'm in a sensory deprivation tank I'm not conscious during that interval? Or is my consciousness tiding itself over on residual intentionality from back when I was interacting with the world?

How about this example then. Say we have a computer program emulating your brain, and we have a robot for it to inhabit. However, we activate the program before we activate any of the sensory or motor functions of the robot. So now we have the robot's "brain" just churning away, doing mindless computation. There is no input or output, so this is manifestly just computation having no relation to anything in the world.

However, from another perspective, this computation is emulating a human brain, and we know that when a human brain is cut off from any sensation it will not only remain conscious, but start to manufacture stimulation for itself.

So here's the kicker. We wait, say, 24 hours, and then we activate the robot. Now the robot can see and hear and move, and it can tell us it's conscious. What's more, it can tell us it was conscious 24 hours ago, and it can describe all the vivid hallucinations it had. What are we to say to that? That when we activated the sensorimotor features of the robot we made it retroactively conscious for that 24 hour period?

Eripsa posted:
Focusing can't be understood as just another function or cognitive resource, but it is a way of harmonizing those resources and concentrating them on a detection.


I don't know why you say focusing can't be understood as another function or cognitive resource. Surely the ability to focus in such a way can be understood as another function or cognitive resource. Do you only mean that the focusing has to actually be happening for it to mean anything? If that's what you mean, then that's just what I said before.

"Consciousness is completely described by that algorithm, combined with the fact of that algorithm being instantitated in the world."

And if that is what you mean, I don't think it's an objection to functionalism at all. I mean, try this on for size: "Adding can't be understood as another function or cognitive resource." "Why not?" "Because you actually have to do the adding for it to be adding." Well duh.

If that's not what you mean, then I have no idea what you do mean. Surely the ability to focus in that way is strictly a function. I mean, it's something the thing can do. And surely it must be an algorithm that gives it its ability to do that thing. An algorithm plus, of course, the algorithm's actual instantiation in the world.
12:18 :: :: eripsa :: permalink


I get around

8.27.2005
Round round get around, I get around.

I tracked down a producer from RantRadio (by the grace of Server), and was given permission to broadcast Tales from the Afternow on the new Radio Free Urbana station, 104.5 WRFU.

Here was my proposal (which is supposed to be approx 50 words):

My proposal is not to produce new programming, but to broadcast an existing radio program freely available on the internet called "Tales from the Afternow". You can listen to the entire run of the show (just over 3 seasons) at this address: http://www.theafternow.com/listen.php

The show is a broadcast by Independent Librarian Dynamic Sean Kennedy VI from "sometime afternow". Produced in the style of 30's and 40's era serials, Afternow updates the formula with a healthy dose of post-apocalyptic imagery and cyberpunk sensibility, in a world where corporations rule and unlicensed knowledge is pornography.

I have received express written consent from the show's producer to rebroadcast Seasons 1 and 2 of the show on WRFU.


Hopefully that is enough of a pitch to get a good timeslot. Sometime between 12-3am would be ideal.

Here is an excerpt from my IRC logs tracking down the producer.

* Topic is 'Topic for #rantradio: .: www.rantradio.com :. | Zombie Walk Van 2005 THIS SAT! http://www.shanesworld.ca/vancouver_zombie_walk_2005 | happy bday mephyt and MRV | [Ronin_AlienKitten] You'd think sticking a q-tip into an animal's oriface would be easy'
* Set by CimmAway on Thu Aug 25 01:07:49
-Cow- Welcome to RantRadio. To talk in the channel you need to register your nickname. Please read the bottom of this page on how to do this - http://www.rantradio.com/faq-listeners.php
* Cow sets mode: +vvvv eripsa AtlanticSlap picklejuice nobody
[eripsa] thank you
[eripsa] I have a question, I dont know if anyone can help me
[eripsa] I am wondering what the legality issues of rebroadcasting this material are
[eripsa] and who I talk to about getting permission to do this
[eripsa] the radio station is here: http://www.radiofreeurbana.org/
[CimmAway] You talk to me.
[eripsa] hi there
[CimmAway] And you have permission to air it.
[CimmAway] I'm the producer for Afternow.
[eripsa] really? I very much enjoy the show
[eripsa] I just found it a few weeks ago
[CimmAway] Good stuff! :-) Glad you enjoy it.
[eripsa] I am only half way through the second season, but it is great material
[CimmAway] Let me know the reaction to the series after you air it: info@rantradio.com
[eripsa] when the radio station was going up, I was thinking how great it would be to broadcast some sort of serial program in the style of The Shadow or something
[eripsa] but Afternow is perfectly suited for the format
[CimmAway] :-) Yup, it's a throwback to 30's - 40's radio dramas
[eripsa] would it be possible to get some sort of official permission to present to the radio producers?
[eripsa] eripsa@gmail.com
[CimmAway] No prob
[eripsa] thanks a bunch! I will definitely stay in contact and let you know how it is recieved
[Jibkat] Just dont throw your name in front of it
[Jibkat] ;)
[eripsa] oh, absolutely not
[eripsa] heh
[eripsa] I imagine it would also include creating things like short trailers and advertisments for the broadcast
[CimmAway] If you make any trailers, make sure you copy them to me, I'd love to hear them.
[eripsa] absolutely. to info@ ?
[CimmAway] Yup.
[eripsa] sweet deal. have a good one.
Session Close: Sat Aug 27 19:16:23 2005


And here is the result of that conversation:

Hello, this is James O'Brien, the producer for 'Tales From The Afternow'. The radio drama 'Tales from the Afternow' is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 2.5: http://creativecommons.org/licenses/by-nc-sa/2.5/

I give WRFU 104.5 full permission to broadcast 'Tales From the Afternow' Seasons 1 and 2. Please give credit as 'Narrated by Sean Kennedy and Produced by James O'Brien' and please give a link to website www.theafternow.com

- James


Let's hope these damn Urbana hippies let me have my radio show.
19:23 :: :: eripsa :: permalink


For the record, this week officially began my 'productive years'

You know, in case my future biographers are keeping tabs.

From The Economist

Changing gear
Aug 25th 2005
From The Economist print edition

The self-driving car comes closer—but difficulties remain

IT IS an old chestnut—a car that drives itself—but General Motors, the world's largest car manufacturer, has become the latest company to claim to be building one. The car uses updated technology combined with several existing innovations and, according to the manufacturer, could be in production by 2008. But, while the technology takes some of the boring bits out of driving, it falls far short of an automatic taxi service and, anyway, various legal, technical and social barriers to its introduction remain.

The latest prototype currently being tested is based on an Opel Vectra, a mid-sized family car. It is undergoing evaluation near the headquarters of Adam Opel, General Motors' European subsidiary in Rüsselsheim, Germany.

The car has automatic cruise control of the sort fitted to many expensive cars such as Jaguars and BMWs. These use either radar or infrared beams fitted to the front of the car to measure the distance to the car in front. That distance is kept constant by automatic acceleration and braking.

But conventional automatic cruise control fails at speeds of less than 30kph (20mph). To circumvent this problem, the new car uses lidar—short for “light detection and ranging”—a measuring technology similar to radar but which uses laser beams rather than radio waves to measure distance and determine the speed of other vehicles. As light waves have shorter wavelengths than radio waves, the technology works at shorter distances and lower speeds. Indeed, the prototype has a distance-keeping system that will brake to a standstill, and move off again when the car in front moves.

This advanced version of automatic cruise control works alongside a system that corrects the car when it drifts out of its lane. Almost two million accidents a year worldwide are thought to be caused by drivers inadvertently changing lanes, frequently caused by drowsiness.

At present, only a few cars have lane-departure warning systems. In America, the technology is available on the FX45 model from Infiniti, Nissan's luxury car division. In Europe, the Citroën C4 and C5 models have it while, in Japan, some Toyota models are fitted with it. These systems use camera images or near-range radar to determine the direction and position of the vehicle in relation to lane markings. When the system recognises that a lane departure is imminent, it bleeps or flashes to alert the driver. Some systems even try to rouse the driver by making the steering wheel or the seat vibrate.

Again, the new car takes this a step further. A camera mounted on the windscreen behind the rear-view mirror gives a clear view of the road ahead, picking up the white lines even in poor visibility or where the paint has faded. The camera works in conjunction with laser beams mounted in the headlamp unit. There is a second advantage to using lasers in preference to radar: while they have a similar range, laser sensors have a significantly wider field of vision. Existing systems can see only straight ahead and the nearside lane marking. The prototype can see more than twice as widely as this. Together, the camera and laser sensors monitor the white lines and, if the car strays out of its lane, an electronic control unit attached to an electric power-steering unit corrects it.

The system is unlikely to have a smooth ride into production, however, despite achieving what General Motors says is a very high level of reliability during the development stage. Several obstacles stand in the way.

For example, self-steering cars are currently illegal in most European countries. Carmakers want the law changed to allow them, but they are also keen not to be held legally responsible for any accidents which result. Drafting legislation which would make it attractive for carmakers to introduce the technology, but still allow some recourse for those hurt if something goes wrong, could prove tricky.

In addition, most people relish driving. One reason why people feel safer in their cars than on public transport is because they are in control of the vehicle.

Moreover, whether in the stop-go traffic of the daily rush hour or on the motorway, the system relies on having a car in front that is travelling to the same destination. On the open road, the driver must drive.

Still, the technology appears affordable. General Motors says it is looking at a price of less than €1,500 ($1,830) to have it fitted to a new range of cars due in 2008.

European governments have set a target of halving road deaths by 2010. General Motors hopes that improved technology can help meet that goal.
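
To make the distance-keeping part concrete, here is a minimal sketch of the kind of stop-and-go logic the article describes. This is emphatically not GM's system; every number, gain, and sensor input here is invented for illustration, and the code just shows the shape of the idea: read a lidar range, compare it to a desired gap, and command acceleration or braking, down to a full stop.

# A minimal sketch of lidar-based stop-and-go distance keeping.
# Not GM's system: all gains, limits, and sensor inputs are made up.

def desired_gap(own_speed, min_gap=2.0, time_headway=1.5):
    """Target gap to the lead car: a standstill margin plus a speed-dependent headway (metres)."""
    return min_gap + time_headway * own_speed

def distance_keeping_step(own_speed, lidar_range, lead_speed):
    """Return a commanded acceleration in m/s^2 (negative means braking)."""
    # Hold position once stopped close behind a stopped lead car.
    if own_speed <= 0.1 and lead_speed <= 0.1 and lidar_range < 4.0:
        return 0.0
    gap_error = lidar_range - desired_gap(own_speed)      # positive: too far back
    speed_error = lead_speed - own_speed                  # positive: lead pulling away
    accel = 0.3 * gap_error + 0.8 * speed_error           # simple proportional blend
    return max(min(accel, 2.0), -6.0)                     # clamp to plausible limits

# Example: crawling traffic, lead car 6 m ahead and slightly faster.
print(distance_keeping_step(own_speed=3.0, lidar_range=6.0, lead_speed=4.0))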


Discussion question: Imagine in the future that all cars are robotically driven. Would it be incorrect to say that there are still 'rules of the road'? What about driving etiquette and conventions? If so, to whom do those conventions apply?
15:14 :: :: eripsa :: permalink


The Daily Show 8/25/2005

8.26.2005
The Daily Show is something of a defining feature of the college-age generation today, and as such it receives a lot of undue credit and unwarranted criticism. Last night's episode, however, is a good example of why it deserves its place in contemporary political discourse. The interview with Hitchens was perhaps the best public debate on the Iraq war since we declared victory; for a five-minute segment, that is saying an awful lot.

You can watch the opening segment and interview at Crooks and Liars.

Here was my analysis from the SA TDS thread:

Hitchens supported his arguments very well, and Stewart had nothing to say in response to Hitchens's justifications for war.

What Stewart did was pick up on Hitchens's off-hand poke at the attitudes of the anti-war crowd, and launch into his standard line against the current political environment and the lack of sane political discourse and transparency. Stewart was absolutely right on this point, of course, and his rant at the end was classic, but this is the exact same line he has fed us every day for the past year. We know his stance on the Iraq war and his opinion of government, and he did nothing to address Hitchens's arguments. He knows his 'reasonable centrist' view holds water, and he knows he'll get rousing applause, so he went for it to close out the interview.

It was cheap, and he knew that too, so he stopped himself to give Hitchens a courtesy plug. But seriously, his ending rant basically functioned as a repetition of his standard centrist talking points.

To Stewart's credit, I don't think the irony of this is lost on him, which is why I think the dynamic of the interview was so great: they both knew they disagreed, but respected each other enough to come out and say it, and not mince words or tread lightly. And it was this frankness, I think, that made it such a powerful interview.

Agree with Hitchens or not, the argument in the interview wasn't over the war at all, but between Stewart's naive idealism, which thinks an open, responsible, and competent government is necessary (and possible), and Hitchens's jaded realism, which holds that certain actions must be taken, even if they have to be taken behind the guise of a distant and unresponsive government.


Also, it should be noted how much 'leg time' Hitchens gets. People lamented the absence of the couch, both because of the added formality and because you can't see the guest in full figure, twitching appendages and all. But changes in camera placement have compensated and added a human touch back.
16:52 :: :: eripsa :: permalink


What is thought?

Posted by Cloud 9 in this shitty thread.

I basically used this as an opportunity to try and defend the view Melnick presented in class in a public arena. I took some liberties in extending it in places, and obviously in some places I had to fudge.



Thought is the pooling of action-guiding resources to attend to specific detections and information from the environment in order to plan courses of behavior. These action guiding resources are perfectly general with respect to detections.

This is to be distinguished from purely 'instinctual' stimulus-response, which is detection-specific.

Abstract thought just depends on which action-guiding resources the creature has available. If the creature can have thoughts about thoughts, or thoughts about classes of objects, it increases the power and generality of those resources.

The most powerful of these action-guiding resources is language, which shifts thought from being a distributed, parallel analog process to a serialized, digital, quasi-logical process.

In case you were wondering, consciousness is the concentration of these action-guiding resources on a specific detection.

Cloud 9 said:
If I'm reading this correctly what you're saying is that all living beings are genetically pre-disposed to do everything in their life, and that thought is just the road map kicking in?


I don't want to say anything about genetic predispositions, and it seems to me that some form of learning is required to achieve the sort of novelty and generality required for the kind of action-guiding resources I am talking about.

I don't mean anything extremely sophisticated. Think about the lioness stalking her prey. This isn't pure stimulus-response (like the frog shooting out its tongue at any small dark object). The lioness is constantly updating information about her surroundings and her target, and adjusting her behavior to fit this information in order to best achieve her goals. Given the information she detects, there are several behaviors the lioness can perform in response; thought is the procedure for sorting through these possibilities with a particular goal or plan in mind.

A parallel example in humans is, for instance, having to get across town, and planning a route to get there, and being able to compensate for any detours or distractions along the way. That is the essence of thought.

Now, a lot of these action-guiding resources are probably genetic, and they definitely have to do with our physiology: they are functions the brain can perform. But as we go up the cognitive scale, these resources become more general and learned, and can be quite distant from the genetic starting point, like language.

[after a derail about behaviorism]

I was looking forward to this opportunity to post my newest formulation of a theory of mind, and no one has challenged it at all.

SRG said:
I have to be totally honest and say I didn't understand it.

I want to post my latest formulation of a theory of conscious thought: consciousness is the human brain's sophisticated backup plan. It's an extremely general (but not completely general, as I think Eripsa's just claimed) problem-solving module. If we can't do it "without thinking," we do it consciously, which is usually not nearly as efficient. Eventually, after doing something consciously for long enough, it can become something we do without thinking. A perfect being would not be conscious.


I made it purposely confusing so someone would call me on it. It is actually my prof's view, and I just want to take it for a test run, and the best way to test something is to practice defending it. It's not at all new in any of its particulars, but it does constitute a full-blown theory of mind.

Thought is a kind of procedure for planning actions and achieving goals. The brain has certain resources at its disposal for achieving these ends, which include body-centric things like learned skills (how to walk) and innate behaviors (how to move your legs), but can also include more powerful and all-purpose faculties, like attention to novelty, innovation, creativity, globality (big-picture or long-term thinking), and so on. This can also include language, on which reason and logical thinking piggyback. I called these collectively the 'action-guiding resources': tools, basically, that each contribute to the continuous adjustment of behavior to environment. So external stimulus gets fed into these resources, gets processed (or better, digested) by the brain, and this results in some action or behavior.

A lot of the resources I list above are rather indicative of 'higher' types of cognition, but I don't want this to be overly complex. Thought (or cognition) is just the adjustment of behavior to match environment, mediated by the processing of these various resources. So back to my lioness example: when the lioness is stalking her prey, she is constantly and carefully adjusting her behavior to optimally stalk her prey. It is the attention of action-guiding resources to specific detections that constitutes thought. Or you could say that stimulus results in action via the mediation of these cognitive resources.

That, I think, is pretty straightforward and uncontroversial, except in saying that that is all thought is. The main point, though, is to distinguish it from the pure stimulus-response of lower animals, which is unmediated. Responses in lower animals are detection-specific, and no general-purpose resources are put to use in processing those detections. They simply act upon detection.

The payoff of the view, though, is that it provides a theory of consciousness. Consciousness is just the concentration of these action-guiding resources on specific detections. The brain's resources can go churning away on whatever stimulus they want, but when a certain stimulus becomes the focus of lots of these resources, it becomes conscious. I am still working on this part, and I have more to say, but I'll leave it there for now.

This sounds way too abstract, but there are some interesting consequences of the theory that have empirically testable conclusions. For instance, the above would explain why talking is almost always a conscious act. I don't mean just rambling on, or reading from a teleprompter; actually using language requires a lot of resources to be focused on this behavior, including motor resources, language resources, creative and novel resources, and so on. That is a testable hypothesis, but it also seems right.

edit: if by perfect you mean able to attend to all input at once, on this view the perfect being would be totally conscious.
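
Since this is all pretty hand-wavy, here is a toy sketch, in code, of the picture above. It is not a model of the brain; the resource names and the threshold are completely made up. The only point is to show structurally what "concentration of action-guiding resources on a detection" might mean.

# A toy sketch, purely illustrative, of consciousness as concentration:
# stimuli get processed by a pool of action-guiding resources, and a
# detection counts as "conscious" (on this toy model) when enough of
# those resources are concentrated on it at once. Names and the
# threshold are invented.

RESOURCES = {"motor", "language", "novelty", "creativity", "planning"}

def conscious_detections(allocations, threshold=3):
    """allocations maps a detection (e.g. 'crickets chirping') to the set
    of resources currently concentrated on it. A detection is 'conscious'
    here when it recruits at least `threshold` distinct resources."""
    return {d for d, used in allocations.items()
            if len(used & RESOURCES) >= threshold}

# Example: talking recruits many resources; background noise recruits few.
allocations = {
    "conversation": {"motor", "language", "novelty", "creativity"},
    "crickets chirping": {"novelty"},
}
print(conscious_detections(allocations))  # {'conversation'}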

SRG said:
Well, the idea is that consciousness comes from attending to excess (unneeded) input. A "perfect" being is one that has an efficient module for everything. I got this from thinking about those savants that can do complex cube roots in their heads, and they can't tell us how they do it, if this helps you understand what I'm thinking.


Hmm. I am guessing the idea here is that consciousness is something of a power drain or crutch that is best avoided? I don't know, and I haven't worked out quite what consciousness does to the benefit of an organism yet, so I really can't address this. It seems to me, however, that I am usually conscious of the thing I am working on. The crickets chirping outside as I write this don't make an appearance in consciousness unless I attend or concentrate on them. But of course the sounds are still entering my ear canal, and are being processed by my brain, without any involvement of consciousness. Consciousness is a kind of focusing on a particular set of detections.

So the savants have extremely good cognitive resources without the ability to focus or bring them into consciousness. But I don't understand how that implies that their modules are somehow more perfect than ours, or that the object of conscious concentration is somehow unnecessary or excessive. It certainly isn't necessary for cognition, but consciousness is derivative on cognition, not the other way around. In other words, it's not like the savant has somehow bypassed inessential wiring via some more 'perfect' shortcut; it's that their resources simply can't organize and focus in the right way to form consciousness.

The Artificial Kid said:
Just last week I was thinking about the connection between consciousness and serial and symbolic computational processes in human thought. When you do things that are automatic or ingrained, and perhaps that are based on long-term changes to your neuronal network, it seems like they're more likely to happen unconsciously and in parallel, whereas when you sort through a recently learned list, for example, you tend to see the items and the search is serial. Serial search like that fits well with symbolic computational models of information processing, so perhaps when you're "thinking" through a task you're using a symbolic computational system built out of neurons, a system that has attention that can be directed to encoded items, whereas when you're doing something ingrained you're just producing a response based on a well-worn neural track, either for your conscious mind or for outside observers.


Fly said:
Would it be able to distinguish between rambling or teleprompter reading and "true" talking? It seems like rambling uses at least all three of the resources you mentioned to a certain degree, though the degree could be important.

On the other hand, it seems like the resources listed are a priori assumptions of some functional modules that exist, so I don't know how one would verify that they are being used if we don't know that they are in fact real entities.

edit: Specifically the "creative and novel" resources seem hard to measure. I'll grant that we can measure motor resources by the lips moving, but what are the criteria for "creative and novel?"


I am talking about processing resources for the creative and the novel, which aren't as intractable to detect as they might seem.

But let's start from the beginning. You are right, it is a matter of degree. But that seems to fall directly out of the theory of cognition I gave before. Cognition is a mediation between stimulus and response, so the degree of involvement of cognition should show up in the time between stimulus and response. I would guess that the degree can be tested through the time-delay experiments that are already extremely common in psychology. Of course, we have to be careful, because certain mediating processing can take place extremely rapidly (again, the calculations performed by the lioness take place in fractions of a second), but if we constrain ourselves to a particular domain, for instance language, we should get interesting results. I haven't thought about this much, but maybe some test where you flash people a picture and have them ramble off whatever words the image brings to mind, without paying attention to grammar or sentences, timing the delay between responses, and then flash the same picture to other subjects but have them engage in a discussion about the picture in full discursive language.
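
Here is a rough sketch of the comparison I have in mind, with made-up numbers; the real experiment would obviously need proper controls, and nothing here is actual data.

# A rough sketch of the time-delay comparison: compare the average gap
# between successive responses in a free-association condition against
# a full-discourse condition for the same picture. Timestamps are invented.

from statistics import mean

def mean_latency(response_times):
    """Average gap (seconds) between successive responses."""
    gaps = [b - a for a, b in zip(response_times, response_times[1:])]
    return mean(gaps)

# Hypothetical timestamps (seconds from stimulus onset) for two subjects.
free_association = [0.8, 1.3, 1.9, 2.4, 3.1]
full_discourse = [1.6, 3.9, 6.2, 8.8]

print(mean_latency(free_association))  # shorter gaps: less mediation?
print(mean_latency(full_discourse))    # longer gaps: more resources engaged?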

Novelty and creativity can be tested in similar ways. Novelty is just an ability to react to unexpected phenomena, and creativity is to some extent the ability to compensate for novel phenomena with the resources available at the time. If we are talking about language use, this manifests as an ability to relate to what someone else is saying even if it differs wildly from what you might expect them to say, and to attempt to respond to it in their words, or in words you might not normally use. I am making this up off the top of my head, but something like The $10,000 Pyramid game show in experiment form might tease these resources out.

In any case, though, I am talking about resources, which means tools, which means abilities and skills. So they aren't exactly entities, although they are wholly captured in particular brain configurations. But they are only manifested in action and certain capacities for action.

Zoolooman said:
So anything incapable of planning future behavior is not thinking, even though multiple action-guiding resources may pool together to guide its complex immediate behaviors?

Take the flatworm. If you shine a lamp near the little creature, its eyespots will detect the light, and depending on its current need for heat or cold, it will migrate towards or away from the lamp-lit waters. Yet the worm itself has no memory, nor a capacity for planning ahead. If the lamp is removed, it will cease moving, at least for the purposes of finding or avoiding lamp-lit water. Therefore, the flatworm's immediate and complex behavior only exists during its stimulation.

Would you consider the flatworm's behavior a purely instinctual stimulus-response, even though it is an example of two action-guiding resources--eyespots and temperature detecting organelles--pooling to direct unplanned behavior?

Eripsa posted:
This is to be distinguished from purely 'instinctual' stimulus-response, which is detection-specific.

I agree that single stimulus, single-response actions are detection-specific, and do not require thought. However, I'm a little leery at the implication that complex behaviors caused by multiple stimuli are thoughtless, even if the individual stimuli--once divorced from any complex behaviors--are purely instinctual.

Eripsa posted:
Abstract thought just depends on which action-guiding resources the creature has available.

I agree with this, though one wonders if these action-guiding resources you name are specific material things within the brain ("Here is the loop that makes a mammal think of its own thoughts!") or if they are merely useful abstractions of aspects of general behaviors in the brain. Fortunately, in either case the concept is still solid.

Eripsa posted:
The most powerful of these action-guiding resources is language, which shifts thought from being a distributed, parallel analog process to a serialized, digital, quasi-logical process.

I don't know if the shift is as abrupt as you suggest. Isn't all abstraction simply the creation of a symbol loaded with semantic content? If so, then language could simply be a framework to communicate these abstractions coherently to another individual. Of course, I'm stepping into unknown territory here, so for now accept these initial thoughts as nothing more than mere speculation.

Eripsa posted:
In case you were wondering, consciousness is the concentration of these action-guiding resources on a specific detection.

Now this is the only part I disagree with. I suspect, confessedly without much science to back this up, that consciousness is a specific function useful for organizing and planning in terms of high-level thoughts, and does not necessarily result from the concentration of action-guiding resources.

I only find weakness in my position when I try to imagine a 'hyper-instinctual' creature with the faculties of abstraction, self-referential thought, and language, but no consciousness to organize and create high-level thoughts.

By the way, high-level thoughts, at least in this post, are thoughts created and manipulated by consciousness.


That's an interesting case, thanks for bringing it to my attention. There are a couple of ways to respond, though I don't know the specifics about the worm. You might say that it does have a memory, though it isn't neurally implemented. Its memory is stored as its current temperature, and this is a sort of on/off switch that determines how it will react to the light.

The point of cognition isn't the planning part, and goals don't need to be explicit or represented. To say that cognition is for planning behavior is just to say that it is purposive. The flatworm's behavior is definitely purposive, even if it is too stupid to understand how its behavior relates to its goals. Similarly, the frog's behavior in shooting out its tongue is purposive. But there are no action-guiding resources mediating stimulus and response in the flatworm. Detection feeds directly into action for the flatworm, even if this is complicated by multiple detection systems. I mean, you might even hypothesize that this is how cognition came about in the first place: the detection information got so complicated that we needed more sophisticated ways of sorting through the information in order to determine proper action. But the flatworm's detection systems aren't that complicated, and so there is no reason to dam up the flow from stimulus to response.

Zoolooman said:
However, I'm a little leery at the implication that complex behaviors caused by multiple stimuli are thoughtless, even if the individual stimuli--once divorced from any complex behaviors--are purely instinctual.

I understand cognition as a pool of (neural) resources that digest or process input stimulus prior to (or in determining) action. So whether or not thought is occurring isn't a matter of looking at the complexity of the behavior, or the complexity of the stimulus-detection systems, but of looking at the complexity of the processing that mediates the two.

Does this make sense? Like I said, this is my first time defending this view, and I am sort of winging it.

Zoolooman said:
Fortunately, in either case the concept is still solid.

I think that's right, although I also think this suggests a whole lot of research avenues into consciousness that may prove to be fruitful. This is one of the reasons I am excited about the view.

Zoolooman said:
I don't know if the shift is as abrupt as you suggest.

Oh, I agree, the shift was almost certainly gradual over many generations. Language didn't just appear; it evolved, starting with a basic capacity for hoots and hollers and slowly building, via physiological changes in the brain and throat and via the pressures of socialization, into full-blown language. I mean, logic didn't really kick in until we became farmers and started civilization and needed basic forms of arithmetic and geometry, though there were probably more primitive forms of language prior to that advancement.

Zoolooman said:
Now this is the only part I disagree with.

The view of consciousness I am defending here is supposed to be an attempt at something like a 'workspace' view of consciousness, so I am not entirely disagreeing with you. However, I'm not so sure that consciousness is a function itself; consciousness doesn't seem to do anything at all, as the savant examples SRG brought up indicate. Consciousness doesn't plan or process anything, nor is it only related to higher-level thoughts. I am conscious of the color blue, for instance, and that seems to be a direct effect of my most basic sort of visual detection.

I mean, it is sort of a crude example, but consciousness is something like the desktop GUI of my computer. All the processing is taking place below the surface, but certain aspects of the computer's processing or internal structure can be brought up to the level of the desktop for all to see. The desktop is certainly helpful for organizing things, but the organization itself doesn't take place on the desktop; that all goes on underneath. Similarly, consciousness isn't doing anything itself, but it's used to pool the underlying cognitive resources and focus them on some particular information coming in.

This also explains why consciousness can be divided- I can be conscious of visual and auditory stimulation simultaneously, for instance, or any number of other input modalities, because the underlying resources are focused on both inputs. Sort of like I can have two folders open on my desktop simultaneously.

So the important part, and really part two of my central thesis here (the first was about the nature of cognition), is that consciousness is not involved in the creation of thoughts, or the manipulation of thoughts, or anything else like that. That's the job of cognition. Consciousness is the result of the focusing of resources on detections that get sent to cognitive processing. Consciousness is not simply higher-level thoughts, or thoughts about thoughts, or the brain scanning itself for activity, because none of these things explain why consciousness has an associated phenomenology and a perspective from which the objects of consciousness are perceived.

This is important, and has something to do with my reaction against functionalism earlier. Cognition is a kind of procedure or processing, so theoretically any one of the functions that cognition performs could be instantiated in any other system: a computer, or the nation of China, or whatever. But this alone isn't enough for consciousness. In other words, cognition is necessary, but not sufficient, for consciousness. Incidentally, a computer can perform all the functions you mention of a 'hyper-instinctual' creature: it can be as abstract (data structures of any size and scope) or self-referential (127.0.0.1) as we want it to be. But the computer doesn't occupy a point of view, there is no perspective from which the computer operates, and there is nothing it is like to be a computer. So the computer can think and perform all the procedures necessary for cognition, but it still lacks a necessary feature of consciousness. Without any perspective or point of view, there is nothing relative to which those cognitive resources can be focused or concentrated, and thus no consciousness.

Now the obvious question to ask is, what does it mean to say the computer doesn't have a perspective? Well, that's the other hard problem of consciousness. And I don't know the answer to that. But gimme a few weeks, and I should have something.
02:08 :: :: eripsa :: permalink


Google transcends language

8.25.2005
From Google Blog

In reference to the NIST 2005 Machine Translation Evaluation Official Results

The NIST 2005 Machine Translation Evaluation (MT-05) was part of an ongoing series of evaluations of human language translation technology. NIST conducts these evaluations in order to support machine translation (MT) research and help advance the state-of-the-art in machine translation technology. These evaluations provide an important contribution to the direction of research efforts and the calibration of technical capabilities.


In Google's freshman attempt at this contest, it outperformed all other translation systems, both private and academic, with a BLEU score in excess of .513 (out of 1) in Arabic and in excess of .351 (out of 1) in Chinese.
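
For reference, BLEU is (roughly) a modified n-gram precision against reference translations, scaled by a brevity penalty for overly short output. Here is a stripped-down sketch of the idea; it is not NIST's scoring code, and real BLEU is computed over whole corpora with multiple references.

# A stripped-down sketch of a BLEU-style score: modified n-gram precision
# against a single reference, times a brevity penalty. Illustration only.

from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i+n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(count, r[g]) for g, count in c.items())  # clipped matches
        total = max(sum(c.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else exp(1 - len(ref) / max(len(cand), 1))
    return bp * exp(sum(log(p) for p in precisions) / max_n)  # geometric mean

print(bleu("the cat sat on the mat", "the cat is on the mat"))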
13:24 :: :: eripsa :: permalink


That's right, I am calling this 'work'

8.24.2005
Get over it.

Forbes Magazine

2005 E-Gang
The Machinist
Quentin Hardy, 09.05.05

Type a phrase into Google and, in an instant, it pores over an astounding 8 billion Web pages. Peter Norvig is haunted by the prospect of what it misses. As Google's director of search quality and research (a doozy of a job description), Norvig spurs on 140 scientists and engineers racing to add more depth, speed and relevance to the world's best search engine.

E-mails, out-of-print books, blogs, research papers in Arabic--any of them might contain something useful to someone. Yet a search engine accesses only 25% of all online data; the rest is out of reach. So Norvig's group designs tools to scan the contents of public libraries, crafts translators that convert foreign-language documents and creates ways to store and index e-mails cheaply.

His Google geeks also work on improving mapping technology and the ability to recognize image content. Norvig pays a small team of contractors to just do random searches all day to test the Google engine, in the hope of teaching it to learn on its own.

What makes improving search quality so complex, Norvig says, is "the uncertainty about a right answer. There is a lot of human intuition in the loop." His hope is to inject a lot more machine intelligence into that loop.

Norvig arrived at Google in 2001, bringing serious artificial intelligence chops to a company still run in seat-of-the-pants fashion. He had spent three years at NASA's Ames Research Center, where he did the early work on the artificial intelligence that steered the Mars Rover. His 1996 book on AI is considered the standard in the field.

Google could access 2 billion pages when Norvig arrived, small enough to let a handful of engineers fine-tune it. He set about broadening the ideas that make Google work. Its benchmark for what makes a page relevant as a search result is defined by the number and quality of other sites that link to it.

Now Google's statisticians develop algorithms that look at how closely one query links to another and how groups of queries interact. Studying word "clusters" helps determine whether a search term like "Blondie" means the comic strip or the punk-pop band from the 1980s. Norvig's crew also aims to accelerate results by learning which irrelevant words (like "like") to discard when indexing a Web page.

Norvig's group is pursuing video search and personalized search, as well as a program to index data from library books and photos. It designed optical scanning software that can tell when a book page is creased and correct it on the fly. "All of humanity is working for us," he says. "We just have to decipher it."
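
As an aside, the "discard irrelevant words when indexing" idea is simple enough to sketch. This toy inverted index with a made-up stopword list is purely illustrative; Google's actual pipeline is obviously nothing this crude.

# A toy sketch of stopword-aware indexing: drop uninformative words like
# "like" before building an inverted index from terms to the pages that
# contain them. Stopword list and pages are invented.

STOPWORDS = {"like", "the", "a", "an", "of", "and", "to", "in"}

def build_index(pages):
    """pages: dict mapping page id -> text. Returns term -> set of page ids."""
    index = {}
    for page_id, text in pages.items():
        for word in text.lower().split():
            if word in STOPWORDS:
                continue  # discard irrelevant words when indexing
            index.setdefault(word, set()).add(page_id)
    return index

pages = {"p1": "Blondie the comic strip", "p2": "Blondie the punk-pop band"}
print(build_index(pages)["blondie"])  # {'p1', 'p2'}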


The internet does not represent the aggregate knowledge of mankind. The internet represents the total knowledge that we have allowed our machines to have access to. It is important that we don't forget this.
23:14 :: :: eripsa :: permalink


Jack decides play is for chumps; embraces dullness

8.23.2005
Alright, back to work. Nothing to see here, people, move along.

Stanford, Volkswagen team up to enter driverless-vehicle race


With Stanford University, and organizational and monetary help from Silicon Valley's venture-capital firm Mohr, Davidow Ventures, the VW lab is pursuing an entry in the DARPA Grand Challenge.

The federal Defense Advanced Research Projects Agency sponsors the event with a $2 million prize for a driverless vehicle that can travel up to 175 miles in the desert in 10 hours.

In June, Stanford's Racing Team, Volkswagen's partner, was named as one of 40 that will compete in Southern California in September. The 20 qualifying teams will compete in the Grand Challenge in October.

"We would really like to win this race," said Carlo Rummel, executive director of VW's Electronics Research Lab.

Their vehicle, a European-specification VW Touareg T5 sport-utility called Stanley, is equipped with devices that operate its steering, acceleration and braking, as well as all manner of radar and navigation equipment to stay on path and avoid obstacles.

Sebastian Thrun, a Stanford professor and artificial-intelligence expert heading the effort, sees autonomous driving as "the beginning of a scientific revolution."

The goal is to reduce the more than 40,000 American deaths from car crashes each year. Human error is responsible for a large percentage of those deaths. Creating vehicles that can stay in lanes, avoid obstacles and slow down when they need to could save thousands of lives.

"That would be a fantastic success for us," Thrun said.


Discussion question: What is the scope of 'we' and 'us' in the above article?
13:42 :: :: eripsa :: permalink