Goddamn this voice inside my head
6.07.2005
D&D: Emotion & Intellect

In any case, it doesn't matter for the purposes of this thread if computers are 'intelligent' in a sufficiently robust philosophical way. My point was simply that they are autonomous reasoners, capable of logic, deliberation, and mechanical action; and these serve as a counterpoint to 'emotion', which seems neither logical nor mechanical.
I think the real problem here is LK's assumption that the body is just a biological machine, because that inclines us to think that reason is the driving force, and emotions serve only to fuck up our rational interests. It's wrong to call cognition 'mechanical' in the same way it is wrong to call the rise and fall of the tide 'mechanical'. These are dynamic, probabilistic processes that depend more on the geometry of spacetime than on mechanism and symbol.
User came out of the closet to say
Unfortunately, right now this debate is about you not knowing what you're talking about. Machines cannot use reason, clearly or otherwise. Computing automata are just acceptors for recursive languages and recognizers for recursively enumerable languages. There are uncountably many problems that Turing-equivalent automata cannot solve, because no such algorithm can exist. Incidentally, computers cannot devise algorithms at all; they can only follow them.
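The 'acceptor' claim can be made concrete. The simplest acceptor is a finite automaton; here is a minimal sketch in Python (the even-number-of-1s language is my own toy example, not anything from the thread):

```python
# Toy DFA: accepts binary strings with an even number of 1s.
# Pure table lookup, one step per symbol -- the kind of purely
# mechanical 'acceptor' a computing automaton amounts to.
TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def accepts(word):
    state = "even"  # start state, also the accepting state
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state == "even"

print(accepts("1010"))  # True: two 1s
print(accepts("111"))   # False: three 1s
```

Every step is fixed by the table; nothing in the machine 'devises' anything, which is exactly the point being made.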
The Computer and the Brain was written by John von Neumann. One of his other contributions is the "von Neumann architecture," on which every modern computer is based, including the one you're using to read this. The book is part of Yale University's highly respected (if unfortunately named) Silliman Lectures.
Sure, and there are a lot of things the brain can't solve that von Neumann machines are particularly good at.
I'm not arguing that brains are computers; in fact, I've argued against that position many times on this forum. I am simply pointing out that, if we are going to draw the line between 'intelligent' and 'emotional', then computers provide an example of something that falls entirely on the former side. They are capable of, if not every characteristic of intelligence, then at least the exemplary ones: symbolic logic, mathematics, deliberation. The computer is capable of being the paradigm utilitarian.
You seem to think that computers lack something fundamental that excludes them categorically from the class of 'intelligent' things, but you have not justified this assumption. Taken to its rational conclusion, you seem to be suggesting that the computer isn't even capable of logic or mathematics.
Perhaps I am misunderstanding, and you mean by 'reason' something much grander than formal symbol manipulation.
Actually, that's not true. The brain can solve any problem a von Neumann machine can solve, and it can solve it better. It does this by using all the abilities at its disposal, including the fabrication of computing automata. If you think this is a stretch, keep in mind that the way automata compute many problems is by making a lesser form of automaton that is good at solving the question at hand. The formal name for this is emulation.
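The emulation point admits a tiny concrete sketch (my own toy construction, not from the thread): one fixed 'universal' routine that, fed the description of any finite automaton as data, becomes that automaton.

```python
# A miniature 'universal' evaluator: one fixed program that emulates
# ANY finite automaton, given that automaton's description as data.
def run_dfa(table, start, accepting, word):
    state = start
    for symbol in word:
        state = table[(state, symbol)]
    return state in accepting

# Data describing a machine that accepts strings ending in "ab".
ends_in_ab = {
    ("q0", "a"): "q1", ("q0", "b"): "q0",
    ("q1", "a"): "q1", ("q1", "b"): "q2",
    ("q2", "a"): "q1", ("q2", "b"): "q0",
}
print(run_dfa(ends_in_ab, "q0", {"q2"}, "aab"))  # True
print(run_dfa(ends_in_ab, "q0", {"q2"}, "aba"))  # False
```

A universal Turing machine does the same thing one level up: it runs a description of a lesser machine, which is the sense of 'emulation' at issue here.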
The first half of your statement is great. The second half contradicts it by implication. The computer is a horrible paradigm for any intellect because of its lack of intentionality. It only does exactly what you tell it to. I have issues with the capabilities you ascribe to computers, which I will enumerate in a following paragraph.
I haven't justified this assumption? Haven't you been reading what I said? Computing automata cannot even design algorithms. They cannot solve the general case for truth or falsehood of a first-order logical proposition. I'd say any definition of 'intelligent' is going to include those two things at the least.
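One boundary worth flagging: for propositional logic, truth-or-falsehood of a formula IS mechanically decidable, by brute-forcing the truth table; the provable barrier (Church and Turing, 1936) is first-order validity. A minimal sketch of the decidable side, with formulas encoded as Python functions (my own toy encoding):

```python
from itertools import product

# Decide whether a propositional formula is a tautology by exhausting
# all truth assignments -- mechanical, finite, and fully computable.
def is_tautology(formula, num_vars):
    return all(formula(*values)
               for values in product([False, True], repeat=num_vars))

# (p -> q) or (q -> p): a classic tautology
print(is_tautology(lambda p, q: (not p or q) or (not q or p), 2))  # True
# p and q: not a tautology
print(is_tautology(lambda p, q: p and q, 2))  # False
```

No such exhaustive check exists once quantifiers over infinite domains enter the picture; that is where the undecidability result bites.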
Formal symbol manipulation is mechanical. If I refer to someone as an "automaton" I am not complimenting his intellect. I can make a computing automaton out of tinker toys that plays tic-tac-toe, and only a fool is going to think of it as anything more than a chain of predetermined outcomes, that is, an algorithm. When computers provide anything like the illusion of intelligence, it's because you're applying naive psychology to the machine, or indirectly perceiving the intelligence of its programmers the same way you can perceive the intelligence of the creator in many human works.
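To make the tinker-toy point concrete: a complete tic-tac-toe player really can be just a fixed priority list marched through in order. A sketch (my own simplification; the actual tinker-toy machine used a full game-tree table, but the 'chain of predetermined outcomes' shape is the same):

```python
# A purely mechanical tic-tac-toe move chooser: fixed priority rules,
# zero deliberation. Board is a list of 9 cells: "X", "O", or " ".
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def choose_move(board, me="X", opp="O"):
    def winning_cell(player):
        for a, b, c in LINES:
            cells = [board[a], board[b], board[c]]
            if cells.count(player) == 2 and cells.count(" ") == 1:
                return (a, b, c)[cells.index(" ")]
        return None
    # Rule 1: take a win. Rule 2: block a loss.
    for move in (winning_cell(me), winning_cell(opp)):
        if move is not None:
            return move
    # Rule 3: fixed preference order -- centre, corners, edges.
    for cell in (4, 0, 2, 6, 8, 1, 3, 5, 7):
        if board[cell] == " ":
            return cell
    return None  # board full

board = ["X", "O", " ",
         "X", "O", " ",
         " ", " ", " "]
print(choose_move(board))  # 6: completes the X column 0-3-6
```

Trace it by hand and there is nowhere for 'thinking' to hide: every move is already written down.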
First off, there is a series of more powerful (yet purely theoretical) machines of higher order than a UTM. http://en.wikipedia.org/wiki/Hypercomputer
Second, here is an example of a mathematical theorem that a human is incapable of proving (for a variety of practical reasons) but that computers can and have proven.
I was making a fairly noncontroversial claim about the distinction between intellect and emotion, which you are taking in a quite metaphysical direction. Specifically, you assume that intentionality is nonmechanical.
Hypercomputers are pure mathematical conjecture. Some of the examples on that wiki are pretty shitty by the way, I think they mean "have been posited [on the intarweb!]." Sure the human brain can conceive of them, but that's only strengthening my argument that the brain can do things that computing automata cannot.
Second, the machine did NOT prove the four color theorem. Machines cannot prove anything; proof-finding in the general case is provably noncomputable. Someone came up with an exhaustive proof with a very large number of cases, and used a computer to test all of them. The computer played as much of a role in proving the conjecture as it does in the creative process of writing a story. It makes some of the mechanical tasks like deletion and formatting much easier, but it's not writing the story.
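Whatever one makes of the 'who proved it' question, the division of labor is easy to exhibit: a human devises the case decomposition, and the machine grinds through the cases. A toy sketch of the purely mechanical part (my own four-region example; the real Appel-Haken proof checked nearly two thousand configurations):

```python
from itertools import product

# Adjacency graph of a small planar 'map' (regions as nodes).
# Toy example: four mutually adjacent regions.
NEIGHBORS = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"A", "B", "C"},
}

def is_proper(coloring):
    # No two adjacent regions share a color.
    return all(coloring[u] != coloring[v]
               for u in NEIGHBORS for v in NEIGHBORS[u])

def four_colorable(regions):
    # Brute force over all assignments of 4 colors: the kind of
    # exhaustive checking the computer did, as opposed to devising
    # the case decomposition itself.
    for colors in product(range(4), repeat=len(regions)):
        if is_proper(dict(zip(regions, colors))):
            return True
    return False

print(four_colorable(list(NEIGHBORS)))  # True: this map needs all 4 colors
```

The check is tedious, finite, and completely specified in advance, which is why it could be delegated at all.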
I'm not assuming intentionality is "nonmechanical" as much as I am assuming that it's not computable.
It'd be clearer to say that having self-causing cognitions is requisite for intelligence, I conflated that with intentionality because I believe they are exactly equivalent. Proving that would definitely be outside the scope of this thread (and probably outside the scope of possibility). Still, informally they look the same, and let's not kid ourselves, this is an informal discussion.
Here's something that I think is far more interesting than any of the above. Computers can only work with computable functions. Human beings can work with computable functions too, but they are much slower than computers. Computers cannot work with definable non-computable functions at all, while human beings can work with them about as well as they can work with any other function. The part where I think it gets most interesting is that human beings can seemingly work with undefinable, even downright nebulous "functions" and often get something mistakable for meaningful results. That may well be fodder for another thread. Really, it's what goes on in most any thread. I find it likely that we as human beings can't reason about that sort of thinking in the general case any more than computers can "reason" about Turing machines in the general case, but the prior statement is an example of just that sort of undefined thinking. Then again, so is most of the rest of philosophy. With the exception of you, Wittgenstein; we love you and your truth tables.
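The textbook 'definable non-computable function' here is the halting function: H(program) is perfectly well defined, but no algorithm computes it. What a machine can compute is the bounded version, "does it halt within n steps?", by just running the thing. A toy sketch (my own generator-based encoding of 'programs'):

```python
# The bounded question is mechanical; the unbounded one is not.
def halts_within(steps, fn, *args):
    # Step a generator-based 'program' at most `steps` times.
    gen = fn(*args)
    for _ in range(steps):
        try:
            next(gen)
        except StopIteration:
            return True  # it halted
    return None  # unknown: maybe halts later, maybe never

def countdown(n):       # a program that halts
    while n > 0:
        n -= 1
        yield

def loop_forever():     # a program that never halts
    while True:
        yield

print(halts_within(100, countdown, 5))   # True
print(halts_within(100, loop_forever))   # None -- and no bound ever settles it
```

The gap between `True`/`None` and a genuine yes/no answer for every program is exactly the noncomputability being pointed at.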
Your claim is evidently not noncontroversial. You have made a false proposition and presumably want to draw some kind of unsound conclusions from it. I am pointing out the flaws in your antecedent because any argument based on it will be poisoned by it, and thus unsound.
I think you are absolutely wrong about the four color theorem, and I think your attempt at 'clarifying' intentionality made things quite a bit less clear, but that's off topic.
We are talking about the intellect. The intellect is the rational side of our cognitive processes, the part that allows us to do math and logic, and reason out the solutions to (possibly novel) problems. One question raised in the OP was whether the intellect was distinct from the emotions. I pointed out that computers can also do math and logic, and reason out the solutions to possibly novel problems, and all without the aid of emotions. I didn't mean 'intelligent' in any more mysterious sense than that. But it is worth pointing out that before the first adding machines were invented a few hundred years ago, nothing else in the universe, as far as we knew, was capable of performing any feats of reason. We take it for granted now, but we've come a long way.
Now obviously a brain and a computer work differently; I'm not disputing that. But a computer is able to perform many of the functions we find paradigmatic of the intellect. Your arguments have been that there is something about the way the computer performs these functions that makes the results fundamentally different from the results obtained by a brain. The computer isn't really doing logic or proving theorems or solving problems. And then you do your metaphysical dance about intentionality and computability to justify these claims.
If my assumption, that a computer can do logic and solve mathematical problems, sounds controversial or 'invalid' to your ears, I think the burden of plausibility is on you, and your little metaphysical cha-cha won't cut it.
Meta-physical cha-cha? Look, what I am talking about has been mathematically proven. That's not beyond a reasonable doubt; that means there is no doubt. What you "think," to be polite and use your term, about the four color theorem proof is beside the point. Computers provably cannot perform proofs. This is not debatable; it is proven beyond all doubt. You seem to be having trouble with what that means. Computers can only follow strict instructions that they have been given. They cannot do anything they have not explicitly been told to do in their language, one ultimately primitive step at a time. Computers can perform arithmetic the same way a pencil can write: if someone makes it do it. They are no more intelligent than any other tool.
None of this has anything to do with "plausibility." All kinds of nonsense is plausible to a sufficiently ignorant observer, and you seem to be a prime example in this case. Fortunately for you, ignorance is remediable.
Computers are able to perform some operations that are part of intelligence. Fire is able to perform some operations that are part of life. In neither case is the former an example of the latter. More importantly, computers cannot do things that are fundamental to intelligence, like originate ideas. Therefore your "paradigmatic" argument is insufficient and unnecessary.