thinking is dangerous

Autonomically correct

Business Week published an article on Autonomic Computing: Computer, heal thyself.

His idea was simple. Scientists needed to come up with a new generation of computers, networks, and storage devices that would look after themselves. The name for his manifesto came from a medical term, the autonomic nervous system. The ANS automatically fine-tunes how various organs of the body function, making your heart beat faster, for instance, when you're exercising or stressed. In the tech realm, the concept was that computers should monitor themselves, diagnose problems, ward off viruses, even heal themselves. Computers needed to be smarter. But this wasn't about machines thinking like people. It was about machines thinking for themselves.
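The monitor-diagnose-heal cycle described above can be sketched in a few lines. This is a toy illustration, not IBM's design: the `Service` class, its health flag, and the restart-as-repair step are all hypothetical stand-ins for whatever a real autonomic system would monitor and fix.

```python
# A minimal sketch of an autonomic loop: the system watches its own
# components, diagnoses failures, and repairs them without a human.
# Service, check(), and restart() are invented for illustration.
class Service:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.restarts = 0

    def check(self):
        """Monitor: report whether the service is currently healthy."""
        return self.healthy

    def restart(self):
        """Heal: return the service to a known-good state."""
        self.healthy = True
        self.restarts += 1

def autonomic_loop(services, cycles):
    """Diagnose each service every cycle and heal any that have failed."""
    log = []
    for _ in range(cycles):
        for svc in services:
            if not svc.check():
                svc.restart()
                log.append(f"healed {svc.name}")
    return log
```

The point of the metaphor is that this loop runs below the level of anyone's attention, the way the ANS adjusts a heartbeat.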

Apparently IBM has been pushing the autonomic idea for a few years now, and has detailed the 4 major aspects of an autonomic system, and the 8 obstacles such systems face.

This is interesting to me for several reasons. The drive towards self-regulating, autonomous systems is obviously a push for greater agency in these systems. But the more interesting aspect is IBM's focus on the biological metaphor in describing the nature of autonomic systems, a metaphor that borrows heavily from the philosophical and cognitive science research on the nature of agency. That last link includes a reference to Damasio, for instance.

I will have to do more research on the idea before I can say anything substantive. Glancing over the manifesto makes me think this is deep into 'industry buzzword' territory, though I suspect the implications here are more theoretical and foundational than IBM lets on. I should stop to consider some of the blogosphere phuzz on the article.

From Rough Type: Not like breathing

The real power of the idea is not that computers will run themselves, in the way that the autonomic nervous system runs itself. Rather, it's that, by automating many of the lower-level computing chores, like allocating computing, storage, and network capacity, setting up new applications, metering usage, and so on, people actually gain greater control over the systems. We become able to program the way the systems work at a higher level, establishing the criteria, for instance, that determines how different computing jobs get prioritized based on our company's business needs.

We don't want computer systems to breathe by themselves, in other words. We want to be able to tell them exactly how we want them to breathe, to be able to set and adjust their "heart beat" to suit our own requirements. Automating computing is - or should be - all about giving people, not machines, greater control.
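The higher-level control Rough Type has in mind, where people set the criteria and the system does the prioritizing, can be sketched as a policy-driven scheduler. The policy weights and job names below are made up for illustration; nothing here comes from IBM's actual architecture.

```python
import heapq

# Hypothetical business policy: lower number = more urgent.
# People edit this; the machine handles everything beneath it.
BUSINESS_POLICY = {"billing": 0, "reporting": 5, "batch-cleanup": 9}

def schedule(jobs, policy, default_priority=10):
    """Order jobs by the priority the policy assigns, not by arrival order.

    The index i breaks ties so equal-priority jobs keep arrival order.
    """
    heap = [(policy.get(job, default_priority), i, job)
            for i, job in enumerate(jobs)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

# schedule(["batch-cleanup", "reporting", "billing"], BUSINESS_POLICY)
# runs billing first regardless of the order the jobs arrived in.
```

On this picture the automation is real, but the "heart beat" is still set by us: changing one line of policy changes how every job is prioritized.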

This seems wrong. We don't want computer systems to depend on us for their basic functioning; we want to be able to use them for whatever we want to do. That means we do want them to breathe for themselves, but we don't want that breathing to interfere with our own projects and tasks. We want, in other words, the computer's underlying functionality to run transparently. We want the computer's breathing to be unconscious, both from our perspective and its own.
13:35 :: :: eripsa :: permalink