eripsa
thinking is dangerous


8.30.2005
What is thought II

Part 2 of this conversation. I get argued into a corner, and ultimately lose to SRG, even though his approach and view are wrong. But I have since discovered more of Melnick's view, and I believe I am now able to respond to his criticism. So stay tuned, faithful readers.


Mishkinman said
Since you've been doing your own pedantic Philosophy lessons, I feel ok in telling you that you should know not to reword your declarations, lest you inanely throw in an "is the result of," which you might not mean. Do you mean that it is a result of the focusing, or is it the focusing itself?

It seems that the theory you're defending is very similar to Noë and O'Regan's enactive approach to thought and perception http://www.imprint.co.uk/pdf/NOE.PDF and http://ist-socrates.berkeley.edu/~noe/oregan.noe.pdf They just replace your "action guiding resources" with the slightly more intimidating "sensorimotor contingencies." I don't mean to butcher it (it's late) but, briefly, it necessitates that experience/consciousness occurs by an agent through and because of its interactions with its environment. Perception here is not something that occurs inside of the animal, but is what the animal enacts as it explores the environment in which it is situated. This is as vague as any other theory of mind, but it at least seems like a nice stepping stone for your theory on action-guidance.

What I like about these newer theories coming out is that behind the scenes they are founded on what always seemed to me to be a very commonsensical statement: that consciousness will not arise without sensation. I know not everyone believes this, but damn if I don't wish for times when ethics and decency wouldn't stand in the way of sticking some newborn in a sensory deprivation chamber and fMRI-ing him for life.


I'm sorry for being pedantic, I hope it doesn't come across as condescending. The object of my pedantry isn't really you guys but myself; like I said, I'm taking this for a test drive by subjecting it to the collective wisdom of my fellow philosophy goons.

But thanks for pointing out the inconsistency, though I think it is only an apparent one. The problem here is that I haven't broached the subject of the structure of consciousness, which is much thornier territory. But this gives me an opportunity to do that.

Let me take a running start at this by going back and recapping the view. Any theory of consciousness will need at least an account of the following three features:

1) The function of consciousness
This is what consciousness is from a 3rd person or scientific perspective. The function of consciousness tells us where to look for consciousness. My account of this is that consciousness is the concentration of action-guiding resources, so to look for consciousness we need to look at where the brain pools its various resources to attend to specific detections. These resources, considered alone, constitute cognition; so it would also be correct to say that consciousness is the concentration of cognitive resources, but where the emphasis is on action (where this view meets Noë's). Again, cognition is necessary but not sufficient for consciousness.
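Just to fix ideas on this functional reading, here is a deliberately toy sketch of what 'pooling action-guiding resources on a detection' could look like from the 3rd person side. The names and structure are my own illustration, not a claim about how brains actually implement any of this:

from dataclasses import dataclass

@dataclass
class Detection:
    label: str       # e.g. "blue patch in the upper left"
    salience: float  # how strongly it calls for attention

def concentrate(detections, resources):
    """Pool every action-guiding resource on the most salient detection."""
    if not detections:
        return None  # nothing to attend to: no object, so no act of focusing
    target = max(detections, key=lambda d: d.salience)
    # each resource maps the attended detection to a contribution to action
    return {name: resource(target) for name, resource in resources.items()}

resources = {
    "orienting": lambda d: "turn toward " + d.label,
    "categorizing": lambda d: "classify " + d.label,
    "planning": lambda d: "prepare a response to " + d.label,
}

print(concentrate([Detection("blue patch", 0.9), Detection("low hum", 0.2)], resources))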

2) The quality of consciousness
Any plausible (that is, non-functionalist) account of consciousness will need to provide some explanation for why consciousness looks or feels the way it does from the inside (otherwise, you just aren't talking about consciousness). It is consciousness from the first person perspective. This is the qualia problem, or Chalmers's hard problem (or Fodor's impossible problem). It comes to us from the Aussies, and the big problem is how to squeeze it into the functional account above, under the assumption that if we can't then consciousness must be non-physical. The problem here is that almost no one in the analytic tradition takes seriously the following issue:

3) The Structure of consciousness
This is the stuff that Sartre and Merleau-Ponty spend inordinate amounts of time on, under the guise of phenomenological ontology. The continental tradition here is hard to swallow for us analytics, though, since they don't seem to take the problem of the physicality of consciousness seriously, as if they have transcended the problem or can simply ignore it without giving an account of how anything like an 'inside' or perspective is possible in a physical universe. The structure of consciousness isn't quite a description from the 3rd person, and not quite from the 1st.

But there are some things we can say about the structure of consciousness. Consciousness always has an act -> object structure. The act of consciousness is the act of concentrating resources, and this act is directed at (or attends to) its object. So it isn't that my experience itself is blue, but my experience is of a blue object, or concentrated on the blue background, or whatever. The act of consciousness itself is never complete- it must be of something, it must have an object. So, for instance, you can't be conscious of nothing.

In other words, the act of focusing is only one half of a duality that is central to consciousness.

Again, I haven't broached the subject of how this structure admits a perspective, and how such perspective is possible in the physical world. But I don't have answers to those problems yet.

SRG said
I'm going to do a reply about the definition of thought later, but I just want to say something about the so-called "hard problem" of consciousness. What is it that makes me have a subjective experience? You've admitted that, using computation, we can in principle explain everything about what makes me have a subjective experience from a third-person perspective. That is, we can explain what makes me talk and act and, in fact, think like a being with a subjective experience (as far as we can tell from an fMRI scan), and we can build a robot that will talk and act and think just like me, including claiming to have a subjective experience.

Yes, but that's just from an outside perspective, right? What about the phenomenology or the qualia or the intentionality or some other word that's a different word depending on who you ask. In other words, what about the part that we all know is substantial about the subjective experience? The part that isn't just an abstract process, but that is there?

In other words, what makes my universe exist? I think this is a very similar question to the one, "What makes the universe exist?" In other words, why is there anything? Why isn't there just nothing? I asked this question when I was a child. "Well maybe God created the universe." "Yes, but why is there a God? Why isn't there just nothing?" Nobody could answer me, and eventually I stopped thinking about it. But enough about that. Here's a thought experiment. I've posted something like this several times, but I've never heard a satisfying response.

Imagine the algorithm for your brain instantiated by the nation of China. The nation of China is going to do exactly what your brain does, on a functional level. The being they create will be able to answer questions, and it will always answer them exactly as you do. In fact, it won't know it's not you.

So we ask it: "Do you have a subjective experience/phenomenology/qualia/soul/whatever?" And, being functionally equivalent to you, it answers, "Yes, of course I do." Three possibilities here:

(A) It really does have the thing it claims to have.

(B) It's purposely lying about having the thing it claims to have. But this can't be right, because it does whatever you would do in the same situation, and in this situation you wouldn't purposely lie. You wouldn't even be in a position of having to purposely lie. You would just tell the truth.

(C) It's not purposely lying, but it's mistaken. But this implies that it's possible to be mistaken about this sort of thing. In other words, you might not have a subjective experience either. You might just think so because the architecture of your brain gives you that illusion. This is like John Searle claiming, "Computers don't have beliefs. They just believe they do."

A possible reply might be, okay, maybe the thing instantiated by the nation of china really does have subjective experiences. But even so, those can't just be the result of abstract computation. Maybe abstract computation just gives rise to them. Subjectivity is epiphenomenal in other words.

But that's nonsense, because it's not the epiphenomenon that's answering the question. The answer to the question is entirely determined by the abstract computation itself. There's no way the computer (brain) can even know it's producing epiphenomenal subjectivity unless that subjectivity is acting on the brain in some way -- which would be strict metaphysical dualism. Otherwise, any reasoning the computer uses to claim it knows it's producing epiphenomenal subjectivity must be fallacious.

It simply must be true that computation is sufficient to produce subjectivity, to whatever extent subjectivity really exists. It cannot be merely necessary.

So either the whole question you're trying to ask must just be nonsense, or mere abstract computation must be something more real than you think it is.


There is a lot to talk about with this example, and I don't have much time. However, I will say that I think you are begging the question right here for epiphenomenalism, and it directly stems from an assumption of functionalism. Why wouldn't the system of China answer no? After all, what makes me want to answer yes is the fact that I am conscious, not the underlying functions that constitute my cognitive process. And the plain fact of the matter is that the sum of people in China simply don't have any unified perspective from which it makes sense to answer the question in the affirmative. They don't even have a distributed perspective, except in a kind of socio-historical sense, which is mostly beside the point of the analogy.

I am not arguing for epiphenomenalism, and I insist that consciousness is directly causally interacting with the world. And part of that interaction would entail responding to questions of 'Are you conscious?' with a yes. It is not cognition that is the justification for my assent, but consciousness itself.

In other words, zombies aren't really a possibility; and conversely, if computers start telling us about their conscious experiences, perhaps we should believe them.

SRG said
I'm only begging the question in the sense that I assume your brain can be described as instantiating a certain algorithm. Even Searle admits this is so. And if it is so, anything that instantiates the same algorithm will do the same thing, because that's what an algorithm is. An algorithm for answering "yes" when asked "Are you conscious?" cannot answer "no" when asked "Are you conscious?" This is the basis for the thought experiment.
what makes me want to answer yes is the fact that I am conscious, not the underlying functions that constitute my cognitive process.

My point is precisely this: that it is both, and one can't possibly be separated from the other. If you admit that your brain instantiates a certain algorithm, you admit you answer yes because of the algorithm. If you admit you are conscious, you admit you answer yes because you are conscious. These are two true descriptions of the same event. You can't remove the algorithm without removing the consciousness, and you can't remove the consciousness without removing the algorithm.

Imagine I flick a light switch and the light goes on. It's true that the light went on because I flicked the light switch and it's true that the light went on because the electrical circuitry works a certain way. The two descriptions don't contradict each other -- they are equivalent. The first is a high-level description of the same event for which the second is a low-level description.

I'm glad you realize zombies aren't possible, but then I'm confused as to how you can claim a functional description isn't a complete description of consciousness.

I agree that the brain can be described as instantiating a certain algorithm, in the sense that cognition, in my definition, is algorithmic- or better, procedural. It is computations that result in action, and not just calculations floating in the void.

However, you are absolutely wrong that Searle would say that consciousness, in the full sense that includes phenomenology and understanding, is just an algorithm. Searle's whole point with the Chinese room argument, and the point of the instantiation of the neural structure of the brain by the population of China, is to show that algorithms are insufficient for consciousness. The whole intuition behind the examples is that it is incoherent to say the entire population of China is conscious, or that the guy in the Chinese room understands Chinese, so the algorithms the system performs aren't enough for consciousness. These are arguments against functionalism (or, depending on your view, against the very idea of consciousness), not arguments for the compatibility of functionalism and consciousness.

Listen, cognition is just a bunch of procedures instantiated in the brain. But the result of those procedures functionally depends on the input to the system. So you might imagine an oversimplified example, where you ask "Are you conscious?" to the China system, and the system checks its "consciousness flag", finds that it isn't conscious, and answers no; whereas the exact same procedure carried out in a human would answer yes, since the flag has been switched. Similarly, a computer instantiating the functioning of the brain could easily answer no to the question. My point is that a yes or no answer to the question doesn't just depend on the functional makeup of the brain, but on whether or not it is true that the system is conscious. You are claiming this latter fact is irrelevant to the way the system answers the question, and that is just absurd.
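To make that deliberately oversimplified flag example concrete, here is a minimal sketch (the names are hypothetical, and of course nobody thinks there is a literal flag in the brain):

class System:
    def __init__(self, is_conscious):
        # on my view this fact is settled by more than the procedure itself
        # (embodiment, perspective); here it is simply stipulated as an input
        self.is_conscious = is_conscious

    def answer(self, question):
        # the exact same procedure in both cases
        if question == "Are you conscious?":
            return "yes" if self.is_conscious else "no"
        return "I don't know"

human = System(is_conscious=True)
china_system = System(is_conscious=False)

print(human.answer("Are you conscious?"))         # yes
print(china_system.answer("Are you conscious?"))  # no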

My point is precisely this: that it is both, and one can't possibly be separated from the other.

This is precisely where we disagree, and it is because you can't see any way for consciousness to operate besides epiphenomenally. I agree that the functioning of the brain is necessary for consciousness, but as I said it isn't sufficient. I answer yes to the question not because the function of my brain says so, but because I am conscious. The problem here is that you think the dependency relation is reciprocal, and I am denying this claim. Cognition is necessary for consciousness, but not sufficient. Other systems can instantiate the same functions as the brain and not be conscious. Consciousness is cognition plus something else- that something else being the 1st person perspective that gives consciousness a feel or quality.

You seem to think that cognition and consciousness are mutually dependent, but I don't know why you assume this. Consciousness depends on cognition, but not the other way around. I should note that, for instance, Ned Block's view holds that neither is dependent- there can be cognition without consciousness (the blockhead, or China), and there can be consciousness without cognition. I am denying both your and Block's view, and setting the dependency relation so that it is distinctly lopsided in the opposite direction.

However, this shouldn't be confused with epiphenomenalism, because I insist that consciousness is physical (specifically, it is embodied), and that consciousness has definite effects on the world. In particular, it affects how I respond to the question "Are you conscious?" If I wasn't conscious, I wouldn't respond with a yes.

There are two kinds of zombie cases here. One is the 'zombie' made by instantiating the exact functioning of the brain on a computer. It is a zombie because it isn't conscious, even though the functional decomposition is the same. But it wouldn't answer 'yes' to the question "are you conscious?"

The other kind of 'zombie' is the exact replica of a human, in swampman fashion, except that somehow it isn't conscious. This is the case I don't think we can make sense of.

SRG said
I'd like to point out that you've sharply differed from John Searle, in that he claims he would deny a program is conscious, even if it claims it is and describes its conscious experience. He says he would assume a thing has intentionality if it seems like it has it, but only in the absence of other information. Once he found out it was "just" instantiating some program, he would deny it had intentionality.

And yet, he admits we can be described that way.

John Searle posted:

"OK, but could a digital computer think?"
If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is yes, of course, yes, since we are the instantiations of any number of computer programs, and we can think.


Many people have offered him very good reasons why the man instantiating the program would not be aware of speaking Chinese -- the understanding of Chinese occurs in a subsystem that the man cannot access consciously. I made this argument at one point, and I'm pretty sure you agreed with it. Searle, on the other hand, fails to understand it and rejects it.

He claims that the reason we know other people can think is that their brains are made of the same stuff as ours. And he claims that it's possible some other material can give rise to thought, and that this is an empirical question. But it's clearly not an empirical question. He will not accept behavior as a criterion, and he offers no other criteria.

Now, let me restate the case. Tell me if you agree with these three statements y/n.

1. Everything about your behavior is physical, including your claiming to be conscious. (metaphysical dualists disagree here)
2. Every physical process can be described by some algorithm (or procedure). (Roger Penrose disagrees here)
3. Algorithms are substrate independent, and will behave the same regardless of what system instantiates them. (nobody can disagree here, because it's true by definition)

It follows from 1, 2 and 3 that I can implement a certain algorithm on any kind of computer, and that computer will do everything you do, including claim (truthfully?) to be conscious.

You say there might be some kind of a consciousness flag, but come on. Is there really a consciousness flag in your brain, and if we turned it off you would no longer be conscious? Even if there is one, all we have to do is leave it on, and the computer will still answer yes. I really don't know what you're trying to get at with this argument.


Functionalism cannot provide a full account of consciousness- it is, as was said earlier in the thread, a behaviorist account of cognition, which pays no heed to what occurs inside the system, or from the perspective of the system. So all we can do is describe consciousness from the 3rd person perspective. And from this perspective, consciousness is algorithmic in nature. As I said in my first post, consciousness is the organization of cognitive resources to concentrate on some particular detections. As an algorithm, a computer can do all of this.
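And let me grant the substrate-independence point in full before saying where it stops. Here is a toy sketch of my own (not SRG's formulation): the same procedure, whether executed directly or laboriously stepped through by something else entirely, cannot give a different answer to the same question.

ALGORITHM = [("match", "Are you conscious?"), ("emit", "yes")]

def run_directly(program, question):
    # a straightforward, CPU-style execution of the procedure
    out = ""
    for op, arg in program:
        if op == "match" and arg != question:
            return "I don't understand"
        if op == "emit":
            out += arg
    return out

def one_worker_step(op, arg, question, state):
    # one member of a vast population does exactly one step and passes it on
    if not state["ok"]:
        return state
    if op == "match" and arg != question:
        return {"out": state["out"], "ok": False}
    if op == "emit":
        return {"out": state["out"] + arg, "ok": True}
    return state

def run_by_committee(program, question):
    # the very same procedure, carried out one instruction at a time
    state = {"out": "", "ok": True}
    for op, arg in program:
        state = one_worker_step(op, arg, question, state)
    return state["out"] if state["ok"] else "I don't understand"

q = "Are you conscious?"
assert run_directly(ALGORITHM, q) == run_by_committee(ALGORITHM, q) == "yes"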

Where the functionalist goes wrong is to say that this provides a complete account of consciousness. Functionalism lacks any account of the 1st person perspective of consciousness. The computer can instantiate all the functions of the brain, but the computer lacks a point of view from which the concentration of those resources can be described as having a quality of feel. And this is why the computer is not conscious.

The upshot is that having a point of view is not an algorithm or procedure. It is, one might say, a position or stance one takes towards the cognitive procedures operating in the brain. Again, think of your desktop GUI. Is it an algorithm? Well, lots of algorithms and processes go into generating and organizing the display. The act of displaying it itself is also just a series of procedures the computer performs. But the GUI itself- the thing you interact with on your monitor- is not an algorithm. It is an interface, and it is only there when it is displayed on a monitor and looks like something. If the display were removed from the computer, the computer could still run all the algorithms necessary to instantiate a GUI, but there just wouldn't be one, because there isn't anything about those processes that is being displayed (i.e., that looks like anything).

I hope this analogy is clear, because it shows why your questions only make sense from a functionalist perspective. So the 3rd person view of consciousness as function is analogous to the algorithms and processes instantiated by the computer in implementing a GUI. But the 1st person perspective of consciousness is analogous to the actual displaying of those processes. And this isn't discernible from simply looking at the computer's processing. You need that processing to be displayed before it looks like anything.
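To put the analogy in toy form (my own sketch, not any real graphics API): the drawing procedures below run identically whether or not a monitor is attached, and whether anything is actually displayed is a further fact about the setup, not a further step in the algorithm.

def render_frame(widgets):
    # compute the frame; pure computation, independent of any display
    return ["draw " + w + " at " + str(i * 20) + "px" for i, w in enumerate(widgets)]

class Monitor:
    def show(self, frame):
        for command in frame:
            print(command)

def run_gui(widgets, monitor=None):
    frame = render_frame(widgets)  # these algorithms run either way
    if monitor is not None:
        monitor.show(frame)        # only here does anything look like anything
    return frame

run_gui(["button", "textbox"])             # headless: all the processing, no display
run_gui(["button", "textbox"], Monitor())  # attached: the same processing, displayed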

This lends itself to a perfect analogy to your "Are you conscious?" question. When you are adjusting the display information on the computer, the computer might ask "Can you see the display?" On your functionalist account, any time the right display algorithms are running, the answer to this question is invariably 'yes', even when the user can't see the display. This is something of an inverse zombie case, I suppose.

But that's not right. Either you can see the display or you can't, and that is independent of the computer's processing ability to generate the display. None of this implies that seeing the display is somehow epiphenomenal on the underlying processes, or some meta-process, or that displays are non-physical, but it is another kind of thing entirely. But also notice that 1) the computer can continue performing those procedures whether or not anything is being displayed, and 2) nothing can be displayed unless the computer is running those algorithms. So neither 1 nor 2 implies that whenever the computer is running those algorithms, something is being displayed.

Also, it is incorrect to say that the display is just an epiphenomenon of the processing with no causal consequences for the processes, because it is directly responsible for the kind of input the computer will be getting in return. The display organizes things for the user, which in turn alters the kind of input the computer gets. So there are direct causal consequences in generating a display.


On your view, the computer is displaying something whenever it is running its display algorithms, whether or not it is hooked up to a monitor. On Block's view, displays can be occurring on the monitor whether or not there is any underlying computer generating those displays. Neither of these views sounds plausible. To display an interface, you need to have the underlying processing, but having that processing doesn't entail anything is being displayed. And having a display isn't just to have a certain kind of processing (though special processing abilities are required in order to have a display at all), but is an entirely different kind of thing altogether.

Zoolooman said
I understand, Eripsa. To reference an old example, the nation of China may process information like the human brain, but it has none of the important elements that make up a unified point of view.

So I ask, what physical elements make up the human point of view? Would those elements include our unified sensory structure, our brain functions for motion, our brain functions for verbalization, and any other action-guiding resource that we call into focus to aid us in our interactions with the external world?

To build an artificial point of view for a computer, would we have to give our computer the capability for unified interactivity with the outside world?

I hadn't thought of that before, but I think you are exactly right. Notice, though, that this implies there isn't any particular physical property you can 'give' a computer to make it conscious, even though consciousness is physical. What you need to do is embed it in a body that can interact with an environment. But that's not a physical thing at all- it is a relational thing, where relations aren't a physical property of any particular thing but of interactions of groups of things. It is, in my terminology, to position it within the act -> object structure, so that its algorithmic procedure of concentrating resources can be directed at its perceptual input from an environment.

So that's right, meaning that I think that is the right extension of the view. But put like that, I'm not so sure it really works. For instance, you can get a computer with relatively simple procedures to interact with a block world, so not only are there underlying 'cognitive' resources, but they are focused on aspects of that simulated environment. But I'm not sure that just this simple embeddedness is sufficient for full-blown consciousness. In other words, it might not be just the unified interactivity with an environment that makes something conscious. It might very well be that it is only the unified interactivity given the particular cognitive resources of brains that admits of consciousness.

That would imply that my criticism of SRG's functionalism is right, but the distinctions I am drawing just collapse because any instantiation of the functional resources of the brain would necessarily be embedded in some world (simulated or otherwise). No one has any interest in designing a computer with an exact functional replica of a mind, only to keep it trapped in a black box with no 'sensory' input. In other words, it is a theoretical possibility, but as a matter of fact would just never actually happen.

Curiously, if we wanted to test the theory and build an environmentally detached full brain replica, there would be no way to go inside and ask if it was conscious, since that would itself be a source of input, which would, on the theory, spontaneously generate consciousness by giving the cognitive resources an object to focus on. This leads to the rather bizarre conclusion that consciousness is always present in systems functionally equivalent to the brain whenever there is an interlocutor. That is, the computer wouldn't be conscious unless you ask it 'are you conscious', in which case it would answer yes, and then go right back to not being conscious (of course, it would also be conscious whenever you ask it any other question too). That seems to be a logical consequence of this line of thought, but I really don't know what to make of it.

SRG said
So you're just arguing for an externalist definition of intentionality? If so, I'll agree to that. And in fact I've already agreed to that. I'll add that this is basically the robot reply to Searle's Chinese room thought experiment.

Let me restate the claim then:

The human brain instantiates some algorithm. Consciousness is completely described by that algorithm, combined with the fact of that algorithm being instantiated in the world.

Or: consciousness is two parts. Part one: cognition (computation, or the algorithm). Part two: intentionality (the way the conscious system is causally connected to the world, such that it can be claimed to be about something)

By these criteria, we can say that we know for a fact that all humans are conscious. We can also say we know for a fact that any robot emulating a human brain at full speed and interacting with the world as humans do is conscious, in the same sense as a human.

Is this satisfying to you?

edit: Looks like it pretty much is!


It's not an externalist definition of intentionality, because I am also insisting that the embeddedness in the world correlates with some perspective or point of view. In other words, I am stressing the importance of a perspective from which that embeddedness would have some quality. The externalist definition of intentionality doesn't account for the quality of that directedness from the first person perspective. In other words, externalism isn't enough, you need an internalist account as well.

All your externalist attempts here have glossed over the internalist aspect of consciousness, because I think you secretly don't believe anything like that exists. I sympathize, mainly because I don't yet know how to account for this aspect of the theory either. However, I can insist it is necessary, since the very idea of 'focus' or 'concentration' requires some perspective relative to which something can count as focused. Of course, you can give an externalist account of focus by locating the source of the perspective, but this neglects the quality of the focusing as experienced from within the perspective. No externalist position will be able to account for this, but it is essential for any complete theory of consciousness.

SRG said:
Why is there a point of view? Why is there a universe? Why is there anything? What is the sound of one hand clapping? Unanswerable Zen questions!

Wittgenstein solved the mind-body problem. "What we cannot speak about we must pass over in silence."

That's just capitulation. This is an obvious phenomenon of the universe, and as such it demands explanation.

SRG said:
No, let me elaborate. Every description of everything is "externalist." It isn't anything special about consciousness that makes it impossible to give a full "internalist" account of it. Try to give a truly internalist description of the most mundane thing. Describe for me a duck. No, don't tell me what the duck looks like. You're describing the duck as an object. I want a description of the duck. Don't tell me how it interacts with the world. Don't call it a bird or an animal -- that just puts it in a conceptual category, which only tells me how it relates to other concepts. I don't want just a relational description of the duck. I don't want just an externalist, third-person description of the duck. I want to know about... the duck!

This can't be done. Describe consciousness without describing anything it does. Describe the universe without describing any of its parts. It can't be done. The "point of view" exists, as surely as the universe exists, but you can't say anything about it.


No, private language arguments won't work here, because I am not arguing that there is anything unique to any particular subjective experience of consciousness. In fact, I am assuming that the quality of experience is shared across the board, since we have the same basic detection mechanisms, same functional architecture, and so on. The quality of blue experiences is a phenomenon we all experience, and is shared and public.

Furthermore, I am describing parts of consciousness- its dependency on cognition, its basic dualistic act -> object structure, and so on. None of this is private or beyond words.

I'm surprised of all people that YOU would throw Wittgenstein at me.

Edit: and I'm not asking for a description of consciousness independent of anything else, like a description of the duck independent of any of its properties. I am saying that, if the duck is conscious, then there is something that it is like to be that duck, and I am asking for an explanation for how there could be such a thing.

edit 2: "What is gravity?"
"Its when stuff falls down"
"No, I dont want a description of the effects of gravity, I want to know what gravity itself is"
"PRIVTAE LANGUAGE WE CANT TALK ABOUT THAT"

edit3: To be less flippant, the private language argument would work if I were to ask you to describe what the experience feels like, or what your subjective point of view looks like. I agree that this can't be talked about.

But Wittgenstein doesn't have any specific problems with the ontology of private languages, just that if it exists we can't say anything about its content. But that doesn't prevent us from saying interesting things about its structure, its relations to cognitive resources, or the necessary and sufficient conditions for its existence.

SRG said:
Is there something it's like to be the duck?

The language here is slippery. What is it like to be the duck? Normally, whenever one talks about what it's like to be something, one is talking about what it would be like if he were that something. "What is it like to be president of the United States?" In other words, what would it be like if I were president? That's a question I can ask. Can I also ask, what is it like for George W. Bush to be president? What kind of a question is that? It's like this. Because he is president.

What is it like for whom to be the duck? Do you mean, what would it be like for you to be the duck? Stick some feathers on yourself, jump in a pond and find out.

What is it like for the duck to be the duck? What is that supposed to mean? What is it like for a rock to be a rock? What is it like for the number 5 to be the number 5?

What would it be like for you to be the duck? How about asking, what would it be like for the duck to be you? What would it be like for a rock to be a cloud? What would it be like for the number 5 to be the number 4?

Are you really sure the question you're asking makes sense? You certainly aren't using the phrase "something it is like" in a way that resembles the way it's used in plain language. So how are you using it?

No, I don't think it is "like anything" for a duck to be a duck, or for you to be you, or for me to be me, except that it's like this. So you're trying to figure out, how can it be like this? This is exactly like the question "Why is there anything?" In fact, I think it is that question.

Sorry to get all mystical and Wittgensteinian on you, but there's no other way to talk about this thing. And now I'm sleeping.

I'm going to write some more, because I don't know if I was clear. Can I ask, what is it like to be George W. Bush? Yes, sure. I can ask, what if I were president? What if I were from Texas? What if I were a social conservative? What if I had a wife named Laura? What if I were currently located inside the White House (assuming that's where he is)? This is empathy, and all humans can do it. I'm "putting myself in his place" as it were. But what is it like to be George W Bush himself? What could I mean by that? What if I actually were George W Bush?

We can see this question is nonsense when we ask it about any person other than "I." What if George W Bush were me? What if George W Bush were Madonna? The last sounds like a Saturday Night Live sketch. The best we can do is imagine him in a wig and singing one of her songs. In other words, we must imagine GWB still obviously being himself, but acting very much like Madonna. What if I wrote an SNL sketch about GWB actually being Madonna, completely? It wouldn't be very funny, because it would just be about Madonna. It wouldn't be about GWB in any sense.

What is it like for a rock to be a rock? It's like this, because a rock is a rock. What is it like for a rock to be a cloud? It isn't like anything, because a rock isn't a cloud. What if the number 5 were an elevator? That would sure be weird, huh? I guess the elevator would have to become the new number 5 then, because otherwise there wouldn't be any number 5 anymore, and that would cause a lot of problems.

It only makes sense to ask "What would it be like if I were a duck?" because I think I have a soul, and that I would still in some sense be myself if I were a duck. This is false.

Now, am I denying things have points of view? Certainly not! It's dualists that can never be sure things other than themselves have points of view. I know beyond any doubt. For instance, you talk about your point of view -- ipso facto you have one. A duck clearly has one too, of a rudimentary sort. It treats itself differently than it treats anything else in the world, including another duck. This is all we must observe to conclude it has a point of view.

But why does the duck have a point of view? Why do you have a point of view? This too can be answered. We'll hook your brain up to some machine that can read every neuron-fire. Then we'll ask you to think about your soul and even say the words "I have a soul." Then we'll print out the data we've collected. There. We have a full explanation of your soul.

Of course, the explanation doesn't help us much. For instance, if I have data on every elementary particle in a chair, that doesn't help me figure out what the chair is shaped like or what its texture is, or anything else. If I were smart enough, I would be able to infer that information from the data. But I'm not, so really the statement "It's a chair" tells me a lot more than a "full" explanation of the chair. Similarly, the statement "Eripsa is conscious" tells me a lot more than a "full" explanation of Eripsa's consciousness. That doesn't imply that your consciousness can't be reduced to brain activity. And just like with the chair, I assume that if I had godlike intelligence I would understand how it is that we get consciousness from a brain. Since I don't, though, I'll just have to assume it works, since it clearly does.

And maybe studying the brain will help us understand, intuitively, how we get consciousness, to some extent. But only if we keep focused on the question that's a real question, and not on the question "Why is it like this?"

The word "consciousness" is not very well defined. The only thing I can be sure is conscious is a human being (or a robot that perfectly emulates a human being). Autistics and other humans with abnormal brains are in the same place as animals, in that I can't say for sure if they're conscious. But this isn't because of anything in particular about autistics or animals, but only because I don't know just what consciousness is. I'm only sure that normal humans are conscious because it's true by definition. Normal humans are our prototype for what the word "conscious" means.

I will say that autistics, lions, and even ants have "points of view," in some way or another. This is because there are things they know and things they don't know. The only thing that doesn't have a point of view is something like a rock, which doesn't know anything, or possibly some omniscient being that knows everything.

By the way, here's a good statement of my theory, which I was just trying to explain in the last few posts.

People are always trying to doubt the existence of other minds, or they're trying to talk about some unique thing that "I know I have it, but I can only infer other people have it." They sometimes call this thing "consciousness." Sometimes it's a fancier name, like "qualia" or "intentionality." Sometimes it's something more to the point like, "something it is like to be me" or "a point of view." However, no matter how one phrases the question, one seems able either to give an external, public description of the thing that's supposedly private (in the case of consciousness or intentionality) or reduce the question to nonsense (in the case of "something it is like").

So what is the real question people are trying to ask when they ask all these other questions? My theory is the question is simply, "Why am I myself?" "Why am I me, and why am I not someone else, and why am I the only one that's me?"

This is either an unanswerable zen question or just plain silly (if there's any difference between those two things). Because who else could you be, and who else could be you? This formidable so-called mind-body problem reduces to our inability to comprehend a simple identity. X = X.

And Eripsa seems not able to deny that I'm right, because he's stopped responding.


I haven't responded, partly because I have been busy, but partly because I am practicing what you preached.

But seriously, this is just capitulation. You are just plain wrong to say that I am merely talking about identity, and that all consciousness is understood via comparisons to my identity.

The problem here is that you don't think consciousness exists. It's as simple as that. You think once we have given a perfect functional description of the brain we have explained everything there is to know about the brain. And, if consciousness exists in the brain, we have explained that too. Admittedly, this puts the onus on me to say what it is you haven't explained, and I haven't done an adequate job yet. But we are whittling it down.

What I am talking about has nothing to do with identity, and surely nothing to do with identity with myself. In fact, I am entirely willing to concede that the 'self' is just another object in the world that consciousness interacts with, and that we can have murky epistemological relations to.

Furthermore, what I am talking about has nothing to do with epistemology, which is where I think you are getting tripped up. I am not asking 'how can I know I am conscious', or 'how can I know you are conscious', or 'how can I know what your consciousness feels like', or anything about our knowledge or our ability to describe the phenomenal quality of consciousness. This is the substance of Jackson's Mary cases, so people do ask such questions, but I think such questions are a bit silly, personally, and in any case can be addressed by private language arguments.

In fact, I am going to venture to say that what I am talking about has very little to do with the mind/body problem as such. The mind/body problem arose mostly due to advances in mathematics and physics by Galileo and Descartes and Newton, and is roughly the question: how is it that I am capable of complex reason and intelligence (particularly mathematical intelligence) when I am simply matter governed by the same physical laws that govern the movement of every other object in the universe? I mean, even Descartes, the great dualist, believed that the body and the brain were entirely governed by physical laws. He just couldn't imagine how complex behavior like language and mathematics could arise from such simple stuff. But the advent of a systematic logic by Frege and Russell in the late 19th and early 20th centuries, and the subsequent development of computers in the 40's and 50's, solved this problem. We could now construct systems that behaved intelligently, so there was no longer a question of how physical matter could behave logically.

So what's left for consciousness? Well, we are missing a couple of things. First, we are missing purposive behavior. Computers can perform any function, but they don't do it for any purpose, or to fill any goal, or to achieve any end. They do it because they are designed to do it. Living organisms, on the other hand, have gone through a process of evolution, and so are by nature goal-oriented. Machines can fulfill goals, or satisfy goals, but they can't pursue goals. Or at least, if they can, it isn't simply in virtue of their functional architecture, but because of their intended purpose (which isn't properly theirs), or because of their design history (which may or may not be 'theirs', for instance in cases of evolutionary robotics).

The important point is that you can't give a functional or behavioral explanation of purposive behavior. Purposive behavior isn't just what the organism does but why or how it does it. That's not something you can quantify in the lab, but that doesn't mean it is just not worth talking about. But to talk about it, you have to tell a much, much larger story about the organism's phylo- and onto-genetic history, and its changing relations to a changing environment. This is what evolutionary biology gives us, and that is quite scientific and rigorous, not some mystical thing that we should refrain from talking about.

But purposive behavior itself isn't sufficient for consciousness. Purposive behavior, in my story, is just the basis of cognition. It is also the basis for stimulus-response, but cognition adds to the simple mechanism a kind of mediation, where further processing can occur and better deliberation can be made with respect to the appropriate courses of action.

Ok, so we have cognition. But I admit that you can describe cognition in terms of functional decomposition, even if that description doesn't allow you the vocabulary of purposive behavior. So what's left? Well, we are missing something like intentionality, or directedness, at an environment. Cognition in functional terms could just as easily be running its algorithms in the void, where its processing has no content and isn't about anything. But that's not what animals do. We are embedded in an environment, and directed towards our environment. And this gives our cognitive resources content in addition to mere functional form. We don't just think; we think about things.

The computer, in contrast, never thinks about anything, simply because it isn't embedded in an environment that can give its calculations content (this isn't quite right, since the computer does seem to interact with a simulated environment which it partly constitutes. But that's a much harder problem, and one we should ignore for the moment). In other words, the reason we can get any meaning out of the computer is because it can trade in symbols that have meaning for us, but that can be manipulated without regard to their meaning. The computer can do math without understanding what the symbols '1' and '2' are, it can display poetry without understanding what 'love' means, and so on.

This is the underlying reason why Searle presents his Chinese room, after all. He is trying to deal with the prospect of a machine doing manipulations to language by treating it purely as symbols without regard to their content. He doesn't deny that such a thing can be done, but he says that if it is done, the machine doesn't understand what it is doing, and hence lacks 'consciousness'. However, Searle isn't talking about consciousness in my sense, he is just talking about cognition. Cognition requires content, and to have content you have to be related to the world. And such relations cannot be captured by mere functional analysis, since the functions would be the same whether or not the machine is embedded in the world.

But that doesn't give us consciousness quite yet. All we have so far is the structure of cognition: that it is purposive, and that it has content. Neither of these properties is explainable on a functionalist or behaviorist view, so I've already left you far, far behind. But we're not done. So what's consciousness?

As I said, consciousness is the focusing of cognitive resources on particular detections. So let's unpack this nice and slow. An organism is going along, having certain goals and detecting certain things in its environment. And these detections get fed into the organism's cognitive resources, in order for the organism to adjust its behavior to fit its environment and better achieve its goals. In certain circumstances, particular detections can become the focus of a whole set of cognitive resources, so that they work in unison to attend to those detections. Depending on the set of cognitive resources, and the circumstances of the detection, this act of concentration can result in consciousness.

Consciousness is not always a beneficial thing, as you already pointed out with the savant case. A better case might be the expert tennis player, who can just react to the ball "without thinking about it". His cognitive resources are lightning quick, of course, but his detections don't need to be attended to by consciousness. His body already knows what to do.

But let's step back. What have I added here? Focusing can't be understood as just another function or cognitive resource, but it is a way of harmonizing those resources and concentrating them on a detection. This gives consciousness its act->object structure. The act of consciousness is just the synchronization of resources, but to say those resources are 'focused' is to imply a perspective relative to which something can be understood as 'focused'. In other words, consciousness can only be understood from a position situated in such a way that it can pick out detections on which to focus. So certainly the computer has computing resources, and can focus those resources on a particular task, but the computer is not situated in the proper way with respect to the content of its computations, and doesn't have the proper cognitive resources, such that its act of focusing gets taken up in consciousness.

I've already left your position behind, but this is where you jump ship and swim back. You don't think there is anything like consciousness, so when I start talking about perspectives, you throw out some mysticism or reduction to logical relations like identity, and it all looks confusing relative to your nice, neat functionalism. So you conclude the whole project is messed up and give up and remain silent. But that's just capitulation in the face of the daunting task of discovering the nature of consciousness. Which is especially unscientific, given both the overwhelming evidence we have for its existence, and our familiarity with its quality. This data cannot be brushed off so easily.

SRG said
I'm honestly completely lost as to what you're saying. For the first thing, you seem to be making the point that you can't say something is conscious unless it has an environment to be conscious of. I've already agreed with that. I'm with you that far. Now this:

Eripsa posted:
This gives consciousness its act->object structure. The act of consciousness is just the synchronization of resources, but to say those resources are 'focused' is to imply a perspective relative to which something can be understood as 'focused'. In other words, consciousness can only be understood from a position situated in such a way that it can pick out detections on which to focus.


I'm sorry, I just can't parse this at all. I'm honestly trying, but I can't. Let me ask a few questions.

When you talk about "the computer," what computer are you talking about? A specific computer currently in existence? Or do you mean a computer that's perfectly emulating your brain?

I am utterly baffled as to what your position on strong AI is. On one hand, you claim it might be possible to make a computer conscious. You are quite insistent that this is an empirical question -- that there are behavioral consequences to consciousness. For instance, only a program that is conscious will be able to claim it's conscious and describe its conscious experiences. You differ sharply here from John Searle, who claims a computer absolutely cannot be conscious, and that there is no way to tell by outward behavior whether something is conscious or not. For instance, he imagines that you can slowly replace my brain with microchips, and the scientists doing it will think everything is fine, because I will keep acting the exact same way, and I will keep claiming to be conscious, and describing my conscious experiences, but on the inside I will be slowly dying, and I'll be unable to say so. Do I understand correctly that you deny this is possible?

If I do understand you correctly, you think there's nothing in principle to stop us from making a computer conscious. And yet -- and yet -- you deny functionalism, and as such deny that consciousness has anything to do with an algorithm. I cannot express how confusing this is to me. What am I supposed to put into the computer that's going to make it conscious, if not an algorithm? Is it just vision and hearing sensors, and maybe some legs, so it can interact with the real world? Fine. I'll grant that. Now it's a robot instead of just a computer. So now is it conscious? This is such a small divergence from the orthodoxy of functionalism I can't imagine why you'd make such a big deal of it. Especially since I've already granted the point. And, as I said, the point was granted by others long ago -- this is just the robot reply to Searle's Chinese room idea.

Eripsa posted:
Machines can fulfill goals, or satisfy goals, but they can't pursue goals. Or at least, if they can, it isn't simply in virtue of their functional architecture, but because of their intended purpose (which isn't properly theirs), or because of their design history (which may or may not be 'theirs', for instance in cases of evolutionary robotics).


Here you seem to be talking about something like Davidson's swamp man thought experiment, which thought experiment, I'll grant, is very much like everything you've said in your post here, in that it confuses me and I have no idea what he's getting at, even though I think I agree with his ideas for the most part. The idea seems to be that my causal history has something to do with the meaning of what I do right now. I can't think of any reason to say that's the case. For instance, is it true that you would say that a robot that came from some kind of an evolutionary process would be conscious, while another robot that was made "in one go" by a conscious human engineer would not be? Even if the two robots were identical? I have no idea why anyone would say this. And since you think consciousness is observable, am I also to understand that you think the first robot would claim to be conscious, and the second one would not? Why?

Finally, I have no idea why you keep claiming I don't believe in consciousness. I do believe in it. All I've said is that I believe it's (1) physical and (2) public. In fact, because it's both physical and public, I have a way of being absolutely sure that other people are conscious, which epiphenomenalists and people like John Searle lack. I'd say my consciousness is on firmer ontological grounds than most.

edit:

A couple more comments. Here's something you posted before.

Eripsa posted:
This leads to the rather bizarre conclusion that consciousness is always present in systems functionally equivalent to the brain whenever there is an interlocutor.


Here you seem to be agreeing with the robot idea. That is, consciousness = computation + environment. Even though I'm willing to provisionally say this might be an okay definition, the problem of sensory deprivation comes to mind. Am I to understand that if I'm in a sensory deprivation tank I'm not conscious during that interval? Or is my consciousness tiding itself over on residual intentionality from back when I was interacting with the world?

How about this example then. Say we have a computer program emulating your brain, and we have a robot for it to inhabit. However, we activate the program before we activate any of the sensory or motor functions of the robot. So now we have the robot's "brain" just churning doing mindless computation. There is no input or output, so this is manifestly just computation having no relation to anything in the world.

However, from another perspective, this computation is emulating a human brain, and we know that when a human brain is cut off from any sensation it will not only remain conscious, but start to manufacture stimulation for itself.

So here's the kicker. We wait, say, 24 hours, and then we activate the robot. Now the robot can see and hear and move, and it can tell us it's conscious. What's more, it can tell us it was conscious 24 hours ago, and it can describe all the vivid hallucinations it had. What are we to say to that? That when we activated the sensorimotor features of the robot we made it retroactively conscious for that 24 hour period?

Eripsa posted:
Focusing can't be understood as just another function or cognitive resource, but it is a way of harmonizing those resources and concentrating them on a detection.


I don't know why you say focusing can't be understood as another function or cognitive resource. Surely the ability to focus in such a way can be understood as another function or cognitive resource. Do you only mean that the focusing has to actually be happening for it to mean anything? If that's what you mean, then that's just what I said before.

"Consciousness is completely described by that algorithm, combined with the fact of that algorithm being instantitated in the world."

And if that is what you mean, I don't think it's an objection to functionalism at all. I mean, try this on for size: "Adding can't be understood as another function or cognitive resource." "Why not?" "Because you actually have to do the adding for it to be adding." Well duh.

If that's not what you mean, then I have no idea what you do mean. Surely the ability to focus in that way is strictly a function. I mean, it's something the thing can do. And surely it must be an algorithm that gives it its ability to do that thing. An algorithm plus, of course, the algorithm's actual instantiation in the world.