thinking is dangerous


NoneMoreNegative came out of the closet to say:
Incomprehensible from the outside... As long as you're along for the ride in some augmented form from the start, you're good to go; or, as long as whatever hits the hard burn-off point first is benevolent, you can get uplifted afterwards... Here's a nice usenet post from one of my favourite recent authors on the difficulty of comprehending a runaway technical society powered by self-bootstrapping AIs.

I'm still out on the possibility of the Singularity happening, but if real AI is ever cracked, I'll start to worry

There is no question that computers have the (theoretical) capacity to grow and change fast, very very fast. However, I take it as an a priori assumption that the word 'intelligent' carries with it the connotation 'comprehensible to us'.

Suppose technology of a similar sort were produced by some aliens far, far away, with a good 10,000 year head start on us, so that from our perspective it was deep into the 'hyper-intelligent' end of the exponential curve, however we choose to quantify such a thing. Say further that it sends a bit of self-sustaining, self-evolving technology down to Earth (perhaps in the form of a menacing obelisk?), and humans, otherwise occupied with petty battles, stumble across it. This bit of tech is crunching data and chewing through paradigm shifts at hyper rates. With no means of understanding what it is doing, however, we wouldn't consider it anything more than a big, beeping chunk of useless metal. And I would go further and say that there is no sense we could make of calling this machine 'intelligent'.

Change is change, and as we become aware of the process of change, this awareness feeds back into the cycle and itself changes the rate of change. And there is an upper limit to the rate of change that a human can handle, simply in virtue of our physical makeup. Neurochemical signals only move so fast. Computers are not bound by these restrictions directly, but they are bound to humans, and so their rate of change must stay within certain parameters for us to even consider them useful, much less intelligent. Current trends in technology don't aim towards self-augmenting AI, but rather towards intelligent user interfaces that use existing raw computational power to change the way people and computers interact, and it is this interaction which gives us a criterion for intelligence. But this keeps the rate of change usable by a computer bound in a strong way to the rate of change achievable by humans. As technology increases, this binding can increase, and perhaps the mutual rate of change can likewise increase, but it is a mutual process, and therefore can't increase faster than the mutual bond can sustain.

If the point is that in 10,000 years we won't be able to make sense of the technology, sure. People 10,000 years ago would be baffled by the iPod (and the contents therein!). But tracing through the curve slowly, without jumping to unwarranted conclusions, does not reveal any special point along the curve which justifies the threatening, and, you must admit, religious, moniker 'singularity'.

NoneMoreNegative came out of the closet to say:
Oh, and the appellation of the Singularity as "The Rapture... For nerds!" has always amused me; taking the big man's chair as a species, rather than sitting at his right hand, sounds like a distinctly human outlook

Religion itself is distinctly human, and has arguably always had that very goal.

NoneMoreNegative came out of the closet to say:
How about if we can't make sense of technology in ten hours?

Paraphrasing one of Vinge's analogies that seems to fit here, "you could bring Shakespeare into the present and eventually get him to understand television, but no amount of education or explanation will enable a goldfish to understand television." Similarly, he expects the changes which superhuman intelligence will bring to be of a sort which humans are incapable, in principle, of understanding.

Consider a blackboard full of the most esoteric physics equations, ones that only a handful of scientists in the world can understand; then, while you're still trying to find a book to start deciphering them, they've already been advanced, refined, and branched off into other fields, going as far in half an hour as ten times that number of scientists could have gone in a lifetime. Things run away from ever being comprehensible to any human in short order.

It's a concept that's hard to do anything other than gape at, but at least I have more faith in it than the other Rapture

This is the very idea that I am claiming is incoherent. The "God moves in mysterious ways" bit is supposed to be that we can't explain his actions, but the actions themselves should be understandable- he slaughtered my family, destroyed my farm, and struck me with boils, and I have no idea why or how he did it, but I sure as hell understand what he did.

Similarly, let's suppose that a hyper post-singularity computer has solved physics problems several orders of magnitude more difficult than anything we have ever encountered. If this is all it did, and it did it using mathematics and a language totally incomprehensible to us, then we would have no access to the information; it would be essentially gibberish. Likewise, the static on my TV set, if decoded properly, might be relaying the secrets of the universe, but without a method for decoding it is nothing more than fuzz.

However, let us suppose the same computer, crunching the same data, through these calculations figures out how to realize, for instance, wormhole travel; further, it is hooked up to the necessary machines to actually build these devices, and through this process it is able to construct a vast interstellar transportation network. I think we'd be willing to call this machine intelligent, but only in virtue of what it can do for us. We might have no understanding of how it accomplished this task, or any way of explaining the physics behind such an operation, but we surely understand the output, the results, and it is on that criterion that we judge it to be intelligent.

What this rules out, then, is that there can be anything that not only uses an incomprehensible language (or mathematics), but whose resulting actions are also incomprehensible. If 10 hours from now, technology has advanced to the point that not only am I helpless to explain how it works, but I am also helpless to explain what it is doing, then it is by definition unintelligible; it is nonsense. I cannot make sense of calling such a system 'intelligent'.

I am essentially in line here with the standard philosophical stance on intertranslatability- basically, that what it means for something to be intelligent is that I have some means of translating its actions and language into my own language. There cannot be radically disparate conceptual schemes- what it means to be a conceptual scheme is just that it is translatable. Thus anything falling outside of my translational powers is not simply incomprehensible, it is not even a conceptual scheme. What this implies is that any intelligent system is necessarily comprehensible to some large degree, or else it can't rightly be called intelligent in the first place. In other words, it is simply incoherent to think of the kind of radical jump the singularity implies.

What this further implies, however, and which I think is very interesting, is that as technology advances, if we are to 'keep up', as it were, then the keeping up cannot happen on an individual level, but must happen across entire communities of people- generations. We must advance as a people if we are to make any sense of advancing at all. There simply cannot be a point where technology leaves mankind behind.
15:00 :: :: eripsa :: permalink