Since we can now post pictures here is a picture.
Stumbled across a quote from Norbert Wiener (1964) in the essay "The ethics of autonomous learning systems" by A. F. Umar Khan:
"The gadget-minded people often have the illusion that a highly automatized world will make smaller claims on human ingenuity than does the present one and will take over from us our need for difficult thinking, as a Roman slave who was also a Greek philosopher might have done for his master. This is palpably false. A goal-seeking mechanism will not necessarily seek our goals unless we design it for that purpose, and in that designing we must foresee all steps of the process for which it was designed, instead of exercising a tentative foresight which goes up to a certain point, and can be continued from that point on as new difficulties arise. The penalties for errors of foresight, great as they are now, will be enormously increased as automation comes into its full use."
Note the implicit suggestion that if we don't design the machines for our own goals, and instead let the machine construct and pursue its own, we are freed from the obligation of foresight.
Google USPA 20050071741
Interesting info on Google's site ranking: Full article
- Sites can be ranked seasonally. A ski site may rank higher in the winter than in the summer. Google can monitor and rank pages by recording CTR changes by season.
- Changes in keyword density are monitored and recorded, as are changes to anchor text.
- Clicks away from your site back to the search results are also monitored.
- User behavior in general could be monitored.
Much of this is in the interest of decreasing spam sites that warp Google's search results. It is, in essence, Google's defensive system, and requires Google to closely monitor the links that hold the internet together as well as the traffic through those links.
Guardian article: Googling the truth. D&D Thread
Kinky Friedman came out of the closet to say:
It would be great if the history books recorded that in 2005, at the nadir of our stupidity, the computers showed America how to understand the difference between something that reinforces your beliefs and something that's actually true again.
I want to know what's going on
I am sitting at my computer. I say "What's going on, Harmony."
She is lying on my bed. She says "I want to know what's going on."
I bite my tongue, wondering how much longer the silent schtick will last. How much longer can I ignore her? How many nights can I pretend to be passed out while she pokes me and shakes me and says "I want to talk"?
Strange thing is that we have talked. She knows perfectly well what's going on. You might think that this would undermine the need to play possum to her prodding. But last time we talked I fucked up. I attacked her. I was defensive and mean and rude and unfriendly. I told her "I don't like you very much".
Which was a half-truth, of course, and entirely rhetorical and the product of overthinking and delusional (read: stoned) planning of the best way to broach this particular conversation. See, it had been sitting there, like a giant wave slowly approaching from the horizon, and I had time and fancied myself an orator and thought I would say something obviously untrue, and shock her, and then clarify and ease up the pain of that initial blow by explaining that...
Well, I don't quite remember my justification there, but it didn't matter because that first little bit was all that came out anyway, and her ears shut like a seal and fireworks shot into the sky to form a large glowing neon "You done fucked up, kid" sign.
And see, I don't want Harmony mad at me. No one does. But she understandably got mad and it was totally my fault and I had to make amends. So we went right back into the same routine, and I was nice to her and treated her well and tried to make her comfortable again. And when I felt that we were back on solid ground, I said "We should stop having sex".
And she reacted, as was appropriate, but she did not overreact. Or rather, her reaction was constructive and emotive and expressive and healthy. I am grateful, at least, for that.
Goddamn this voice inside my head
D&D: Emotion & Intellect
In any case, it doesn't matter for the purposes of this thread if computers are 'intelligent' in a sufficiently robust philosophical way. My point was simply that they are autonomous reasoners, capable of logic, deliberation, and mechanical action; and these serve as a counterpoint to 'emotion', which seems neither logical nor mechanical.
I think the real problem here is LK's assumption that the body is just a biological machine, because that inclines us to think that reason is the driving force, and emotions serve only to fuck up our rational interests. It's wrong to call cognition 'mechanical' in the same way it is wrong to call the rise and fall of the tide 'mechanical'. These are dynamic, probabilistic processes that depend more on the geometry of spacetime than on mechanism and symbol.
User came out of the closet to say
Unfortunately, right now this debate is about you not knowing what you're talking about. Machines cannot use reason, clearly or otherwise. Computing automata are just acceptors for recursive languages and recognizers for recursively enumerable languages. There are uncountably many problems that Turing-equivalent automata cannot solve because no such algorithm can exist. Incidentally, computers cannot devise algorithms at all; they can only follow them.
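The recursive vs. recursively enumerable distinction the poster leans on can be made concrete. A minimal sketch in Python (the particular languages chosen here are my own illustrations, not from the thread): an acceptor for a recursive language always halts with a yes/no answer, while a recognizer for an r.e. language is only guaranteed to halt on members.

```python
# Acceptor for a recursive (decidable) language: always halts with yes/no.
# Language: binary strings containing an even number of '1's (a two-state DFA).
def accepts_even_ones(s: str) -> bool:
    state = 0                  # state 0 = even count of '1's seen so far
    for ch in s:
        if ch == "1":
            state ^= 1         # flip state on every '1'
    return state == 0

# Recognizer-style behavior: halts and says yes on members, but on a
# non-member it could in principle run forever. We cap the simulation
# so this sketch itself always terminates.
def collatz_reaches_one(n: int, max_steps: int = 10**6) -> bool:
    steps = 0
    while n != 1 and steps < max_steps:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return n == 1
```

The DFA decides membership in bounded time; the second function only semi-decides its property, which is the shape of the distinction being invoked.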
The Computer and the Brain was written by John von Neumann. One of his other contributions is the "von Neumann architecture," on which every modern computer is based, including the one you're using to read this. The book is part of Yale University's highly respected (if unfortunately named) Silliman Lectures.
Sure, and there are a lot of things the brain can't solve that von Neumann machines are particularly good at doing.
I'm not arguing that brains are computers; in fact, I've argued against that position many times on this forum. I am simply pointing out that, if we are going to draw the line between 'intelligent' and 'emotional', then computers provide an example of something that falls entirely on the former side. They are capable of, if not every, then at least the exemplary characteristics of intelligence: symbolic logic, mathematics, deliberation. The computer is capable of being the paradigm utilitarian.
You seem to think that computers lack something fundamental that excludes them categorically from the class of 'intelligent' things, but you have not justified this assumption. Taken to its rational conclusion, you seem to be suggesting that the computer isn't even capable of logic or mathematics.
Perhaps I am misunderstanding, and you mean by 'reason' something much grander than formal symbol manipulation.
Actually that's not true. The brain can solve any problem a von Neumann machine can solve, and it can solve it better. It does this by using all the abilities at its disposal, including the fabrication of computing automata. If you think this is a stretch, keep in mind that the way automata compute many problems is by making a lesser form of automaton that is good at solving the question at hand. The formal name for this is emulation.
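The emulation point can be shown in miniature: one machine computes by interpreting the program of a simpler, special-purpose machine. A toy sketch (the instruction set is invented for illustration, not from the thread):

```python
# Emulation in miniature: a host machine computes by running the program
# of a 'lesser' machine. The two-instruction VM here is a made-up example.
def run_tiny_vm(program, x):
    """Interpret a list of ('add', k) / ('mul', k) instructions on input x."""
    for op, k in program:
        if op == "add":
            x += k
        elif op == "mul":
            x *= k
        else:
            raise ValueError(f"unknown instruction: {op}")
    return x

# A special-purpose automaton for the affine map f(x) = 3 * (x + 2),
# expressed as a program the host interprets step by step:
affine_program = [("add", 2), ("mul", 3)]
```

Running `run_tiny_vm(affine_program, 1)` yields 9; the host never "knows" the affine map, it just mechanically steps the lesser machine, which is what emulation amounts to.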
The first half of your statement is great. The second half contradicts it by implication. The computer is a horrible paradigm for any intellect because of its lack of intentionality. It only does exactly what you tell it to. I take issue with the capabilities you ascribe to computers, and I will enumerate them in a following paragraph.
I haven't justified this assumption? Haven't you been reading what I said? Computing automata cannot even design algorithms. They cannot solve the general case for truth or falsehood of a logical proposition. I'd say any definition of intelligent is going to include those two things at the least.
Formal symbol manipulation is mechanical. If I refer to someone as an "automaton" I am not complimenting his intellect. I can make a computing automaton out of tinker toys that plays tic-tac-toe, and only a fool is going to think of it as anything more than a chain of predetermined outcomes, that is, an algorithm. When computers provide anything like the illusion of intelligence, it's because you're applying naive psychology to the machine, or indirectly perceiving the intelligence of its programmers the same way you can perceive the intelligence of the creator in many human works.
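For what it's worth, the tic-tac-toe machine's "play" really can be written down as a fixed priority list, pure mechanism with no deliberation. A sketch (the board encoding and rule ordering are my own assumptions, not a description of the actual tinker-toy machine):

```python
# A tic-tac-toe 'player' as a chain of predetermined outcomes.
# Board: a 9-character string, indices 0-8, cells 'X', 'O', or ' '.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def tinker_toy_move(board: str, me: str = "X") -> int:
    foe = "O" if me == "X" else "X"
    # Rule 1: complete my own winning line; Rule 2: block the opponent's.
    for player in (me, foe):
        for a, b, c in LINES:
            cells = board[a] + board[b] + board[c]
            if cells.count(player) == 2 and " " in cells:
                return (a, b, c)[cells.index(" ")]
    # Rule 3: otherwise fall through a fixed preference order
    # (center, corners, edges).
    for i in (4, 0, 2, 6, 8, 1, 3, 5, 7):
        if board[i] == " ":
            return i
    raise ValueError("board is full")
```

Every move is determined by the board state and the rule order; there is nothing else in the machine to appeal to, which is exactly the "predetermined outcomes" point.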
First off, there is a series of more complex (yet purely theoretical) machines of higher order than a UTM: http://en.wikipedia.org/wiki/Hypercomputer
Second, the four color theorem is an example of a mathematical theorem that a human is incapable of proving (for a variety of practical reasons) but computers can and have proven.
I was making a fairly noncontroversial claim about the distinction between intellect and emotion, which you are taking in a quite metaphysical direction. Specifically, you assume that intentionality is nonmechanical.
Hypercomputers are pure mathematical conjecture. Some of the examples on that wiki are pretty shitty by the way, I think they mean "have been posited [on the intarweb!]." Sure the human brain can conceive of them, but that's only strengthening my argument that the brain can do things that computing automata cannot.
Second, the machine did NOT prove the four color theorem. Machines cannot prove anything; the general problem of finding mathematical proofs is provably noncomputable. Someone came up with an exhaustive proof with a very large number of cases, and used a computer to test all of them. The computer played as much of a role in proving the conjecture as it does in the creative process of writing a story. It makes some of the mechanical tasks like deletion and formatting much easier, but it's not writing the story.
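Whatever one concludes about whether the machine "proved" anything, the division of labor being described can be mimicked in miniature: a human reduces a claim to finitely many cases, and the machine mechanically grinds through them. A toy analogue (checking Goldbach's conjecture over a finite range stands in for the real case analysis; it is not the four color argument itself):

```python
# Toy analogue of a computer-assisted exhaustive proof: the human supplies
# the finite claim, the machine verifies every case mechanically.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def check_goldbach_upto(limit: int) -> bool:
    """Verify: every even n in [4, limit] is a sum of two primes."""
    return all(
        any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))
        for n in range(4, limit + 1, 2)
    )
```

The machine's contribution is exactly the `all(...)` loop: a finite, purely mechanical check that no human would care to do by hand, which is the shape of the Appel and Haken computation, however one wants to apportion the credit.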
I'm not assuming intentionality is "nonmechanical" as much as I am assuming that it's not computable.
It'd be clearer to say that having self-causing cognitions is requisite for intelligence, I conflated that with intentionality because I believe they are exactly equivalent. Proving that would definitely be outside the scope of this thread (and probably outside the scope of possibility). Still, informally they look the same, and let's not kid ourselves, this is an informal discussion.
Here's something that I think is far more interesting than any of the above. Computers can only work with computable functions. Human beings can work with computable functions, but they are much slower than computers. Computers cannot work with definable non-computable functions at all, and human beings can work with them about as well as they can work with any other function. The part where I think it gets most interesting is that human beings can seemingly work with undefinable, even downright nebulous "functions" and often get something mistakable for meaningful when done. That may well be fodder for another thread. Really it's what goes on in most any thread. I find it likely that we as human beings can't reason about that sort of thinking in the general case any more than computers can "reason" about Turing machines in the general case, but the prior statement is an example of just that sort of undefined thinking. Then again, so is most of the rest of philosophy. With the exception of you, Wittgenstein: we love you and your truth tables.
Your claim is evidently not non-controversial. You have made an invalid proposition and presumably want to draw some kind of unsound conclusions from it. I am pointing out the flaws in your antecedent because any argument based on it will be poisoned by it and thus unsound.
I think you are absolutely wrong about the four color theorem, and I think your attempt at 'clarifying' intentionality is quite a bit more unclear, but that's off topic.
We are talking about the intellect. The intellect is the rational side of our cognitive processes, the part that allows us to do math and logic, and reason out the solutions to (possibly novel) problems. One question raised in the OP was whether the intellect was distinct from the emotions. I pointed out that computers can also do math and logic, and reason out the solutions to possibly novel problems, all without the aid of emotions. I didn't mean 'intelligent' in any more mysterious sense than that. But it is worth pointing out that before the first adding machines were invented a few hundred years ago, nothing else in the universe as far as we knew was capable of performing any feats of reason. We take it for granted now, but we've come a long way.
Now obviously a brain and a computer work differently, I'm not disputing that. But a computer is able to perform many of the functions we find paradigmatic of the intellect. Your arguments have been that there is something about the way the computer performs these functions that makes the results fundamentally different from the results obtained by a brain. The computer isn't really doing logic or proving theorems or solving problems. And then you do your metaphysical dance about intentionality and computability to justify these claims.
If my assumption, that a computer can do logic and solve mathematical problems, sounds controversial or 'invalid' to your ears, I think the burden of plausibility is on you, and your little metaphysical cha-cha won't cut it.
Meta-physical cha-cha? Look, what I am talking about has been mathematically proven. That's not beyond a reasonable doubt, that means there is no doubt. What you "think," to be polite and use your term, about the four color theorem proof is beside the point. Computers provably cannot perform proofs. This is not debatable, it is proven beyond all doubt. You seem to be having trouble with what that means. Computers can only follow strict instructions that they have been given. They cannot do anything they have not explicitly been told to do in their language, one ultimately primitive step at a time. Computers can perform arithmetic the same way a pencil can write, if someone makes it do it. They are no more intelligent than any other tool.
None of this has anything to do with "plausibility." All kinds of nonsense is plausible to a sufficiently ignorant observer, and you seem to be a prime example in this case. Fortunately for you, ignorance is remediable.
Computers are able to perform some operations that are part of intelligence. Fire is able to perform some operations that are part of life. In neither case is the former an example of the latter. More importantly, computers cannot do things that are fundamental to intelligence, like originate ideas. Therefore your "paradigmatic" argument is insufficient and unnecessary.
The vitalist's last breath
Vitalism: the doctrine that life cannot be explained solely by mechanism. Wikipedia entry
1) What does it mean to be intelligently designed?
2) Is this fundamentally different from systems that occur naturally in the universe?
3) More importantly, given a system at some arbitrary state in its existence, is it possible to determine whether such a system was designed through natural or intelligent means?
The obvious answer to the final question, as any good student of Hume knows, is of course not. There are some more or less reliable indicators: if the system has wingnuts and a steel case it probably isn't a naturally occurring system. But these sorts of judgments do not scale properly. Imagine walking across a particularly captivating rock formation. The naive person might look at the stark straight lines and colorful sediment and immediately think it was the product of an intelligent designer with an aesthetic sense. The geologist, on the other hand, will be able to tell a far more convincing and full story about the natural fault lines and forces that gave rise to this rock formation over the course of the earth's history. And absent evidence to the contrary (i.e., without having seen a terraforming crew at work the day before), we should trust the geologist. As we scale up further to consider the extremely complicated and sophisticated biological processes we find all around us (ourselves included), it becomes ever more imperative to trust our best science.
Simple enough, right? Science is preferable to intuitions regarding intelligent design. It doesn't get any more obvious and straightforward than that. And yet, the 'debate' over Intelligent Design continues grabbing media attention, due in no small part to vocal groups like the Discovery Institute. The majority of the (godless, leftist) media deserves credit here in at least recognizing that this 'debate' is not over the validity of ID as a competing scientific paradigm, as the IDers would have it, but over the more social issues of whether it deserves to be taught in schools or has a place in the public discourse. And the fact is that in most cases we are inclined to think sufficiently complex and organized things have intelligent origins. It is part of our psychology; it is exceedingly natural. If we found a working pocketwatch on Mars, everyone including scientists would be crying out for some explanation or story of how such a thing could have been assembled via the natural processes on Mars.
The distinction between intelligent and natural design, I think, is a good distinction to hold on to, for more than these intuitive reasons. We want to maintain, for instance, that there is some important difference between genetically engineered food and naturally occurring food, since this distinction might have important consequences for our health and agricultural practices. I think the media recognizes these concerns, which ultimately have to do with our relation to our own technological artifacts; but it finds itself stuck between the ID fanatics, who blow such intuitions far out of reasonable scientific proportion, and the scientists, who when pressed seem to want to distance themselves from the notion of intelligent design as a coherent concept altogether. Luckily, philosophy can help us avoid this rather nasty little fork.
And all the philosophy we need to do is recognize that the core of this debate, on both sides, rests in question 3 above. Both the scientists and the IDers hold that Q3 can be answered in the affirmative. But of course, this answer is wrong. We are under the impression that we can distinguish intelligently designed from naturally occurring systems simply because we are faced with an overwhelming number of intelligently designed systems every day; the roads we drive on, the car we drive in, and every flashing light, tall building, and landscaped lawn in between is an artificial construct designed by some person or committee. And for each of these systems, we can tell some story in which they arise due to the planning and intentions of some intelligent creature, namely human beings. We can't tell such a story for the Martian pocketwatch.
What is important, however, is to distinguish between a 'no' answer to Q3, and a dismissal of the idea of intelligent design whatsoever. What a no answer to Q3 tells us is simply that the origins of a system, intelligent or otherwise, are entirely irrelevant to the legitimate science conducted on that system. That there was a God is worse than false; it simply doesn't matter.
But for a 'no' answer to Q3 to be convincing, it will require some robust elaboration on questions 1 and 2. This is perhaps one way to see the motivation for my project.