Ok, I am now officially moving over to my own webspace, barring any major disasters. I still have a lot to fix up over there, and there are a lot of broken links/broken features/uncategorized posts and so on. But it's coming along. It should be tip-top by next week.
You can follow my ongoing zany adventures at eripsa.org
. All my old posts should be archived there, and they are now searchable, making it even better.
This was my first blog, and I remained faithful to it for over a year (roughly 220 posts). I feel almost exactly like I did when I moved out of my first studio apartment.
I want to give a shout out
To all you CSS template writers who don't comment your code. Thanks!
I went and got myself some legitimate server space and my very own domain. So this blog will be moving very shortly to eripsa.org
. That is, once I bang this stupid WordPress
template into shape. You can check in on my progress as I go, but I warn you, it isn't pretty.
The new server space is pretty fancy. I have 10 gigs of space for whatever craziness I want to post (no more relying on ImageShack), and 250 gigs of transfer a month, plus all the database frills I want. For 7 bucks a month, that's not bad; and with 24-hour tech support (they already called me once to confirm the subscription) I figure it's worth it.
I'll probably put up a message board at some point in the future. I also get a few extra domain names, so if any of my readers want to hop on board this wagon just let me know. I'll keep everyone informed of how the transfer goes, and what hitches I hit along the way.
Ugh, that front page looks so ugly. Back to work. Or play. I can't tell the difference any more.
Blogging for future reference/possible addition to blogroll: http://www.bubblegeneration.com/
Survival of the fittest
breeds more of the same.
From Fox News: Japanese Working On Robot Butler
"We are hoping to make them something comparable to service dogs," Isao Hara, senior researcher at the institute in Japan's technology hub of Tsukuba, just northeast of Tokyo, said of the pair of robots painted in silver and blue. "I think it's quite possible for them to interact with humans. We are now studying how robots can join the human society."
Making the rounds
There is a really good article on Turing in The New Yorker this week, that goes into much greater detail both on his life and work, and the Enigma problem. As a bit of a teaser:
From The New Yorker: CODE-BREAKER
In 1938, Turing was awarded a Ph.D. in mathematics by Princeton, and, despite the urgings of his father, who worried about imminent war with Germany, decided to return to Britain. Back at Cambridge, he became a regular at Ludwig Wittgenstein's seminar on the foundations of mathematics. Turing and Wittgenstein were remarkably alike: solitary, ascetic, homosexual, drawn to fundamental questions. But they disagreed sharply on philosophical matters, like the relationship between logic and ordinary life. "No one has ever yet got into trouble from a contradiction in logic," Wittgenstein insisted. To which Turing's response was "The real harm will not come in unless there is an application, in which case a bridge may fall down." Before long, Turing would himself demonstrate that contradictions could indeed have life-or-death consequences.
All robots go to heaven
From CNET News: Sony puts Aibo to sleep
According to a company representative, more than 150,000 Aibos have been sold since they went on the market in 1999. But the overall company is in the midst of an historic belt-tightening, and the robotics unit didn't make the cut.
"Our core businesses are electronics, games and entertainment, but the focus is going to be on profitability and strategic growth," said Sony spokeswoman Kirstie Pfeifer. "In light of that, we've decided to cancel the Aibo line."
The demise of Sony's robots does mark a victory of sorts for U.S. robot makers like iRobot. Most U.S. manufacturers years ago decided that little market demand existed for robot companions and instead aimed their research and design efforts at robots that would perform jobs that are mundane, repetitive or too dangerous for humans. Workhorse Technologies, for instance, invented a robot that combs abandoned mine shafts.
The scene is set for the future of robots. Let's take this news as closure for the prologue, and get right into Act I, Scene I.
Addendum: Aibo in action
Generic Lawyer Joke
Ars Technica reports on a lawyer looking for an easy case against Google. He had the bright idea of writing a bunch of random thoughts like the following:
The Smoke Detector
I'm so worried about it being a voyeur camera
that whenever I return home, I take it down from
the wall, pry it open, and carefully inspect its
constituent parts. It might be an unreasonable
thing to think or do, but it's the only way I can
get to sleep after I've been out.
Truth is, even sometimes when I've not been gone
I re-check the smoke detector just to make double
sure I didn't miss anything the last time around.
And, thus far, it's been safe. Not once have I seen
anything remotely looking like a camera part inside
the smoke detector.
But they've gotten good with technology, now. I
probably wouldn't be able to tell, anyway. Tomorrow,
I'm moving that thing to the hallway.
He put it up on his site
, and waited for the Google spiders to catalogue his 'work' on their servers, then sued 'em for $2.5 million. The judge sided with Google on all counts, and it would otherwise be an entirely uninteresting case except for the precedent it sets.
From Ars Technica: Judge: Google cache kosher when it comes to copyright
The judge ruled that Google could not be held guilty of "direct infringement" because such infringement requires "a volitional act by defendant; automated copying by machines occasioned by others not sufficient." Because Google's indexing is automated and the purpose of the indexing is not generally to infringe upon copyright, the judge ruled that they could not be held liable.
You can read the entire decision here
(PDF). It's short and worth the read.
Two important claims are being made: that Google as an automated system is independent of the corporation that runs Google (and thus the actions of the automated system do not represent the volitions of the company), and that the automation itself does not constitute a volitional act.
Notable sections of the decision:
The parties do not dispute that Field owns the copyrighted works subject to this action. The parties do dispute whether by allowing access to copyrighted works through "Cached" links Google engages in volitional "copying" or "distribution" under the Copyright Act sufficient to establish a prima facie case for copyright infringement...
According to Field, Google itself is creating and distributing copies of his works. But when a user requests a Web page contained in the Google cache by clicking on a "Cached" link, it is the user, not Google, who creates and downloads a copy of the cached Web page. Google is passive in this process. Google's computers respond automatically to the user's request. Without the user's request, the copy would not be created and sent to the user, and the alleged infringement at issue in this case would not occur. The automated, non-volitional conduct by Google in response to a user's request does not constitute direct infringement under the Copyright Act.
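The two-step flow the court describes can be made concrete with a toy sketch: an automated crawler copies pages on its own schedule, while a cached copy is only ever *sent* because a user requested it. Everything here (the function names, the URL, the stored text) is hypothetical and just illustrates the distinction the decision draws.

```python
# Toy sketch of the caching flow described in the decision.
# Step 1 is automated (no human decides to copy any particular page);
# step 2 happens only when a user explicitly asks for the cached copy.

cache = {}

def crawl(url, fetch):
    """Automated step: the spider stores a snapshot of the page.

    This runs on a schedule with no per-page human volition involved.
    """
    cache[url] = fetch(url)

def serve_cached(url):
    """User-initiated step: a copy is transmitted only on request.

    Without the user's request, no copy is created or sent.
    """
    return cache.get(url, "No cached copy available.")

# A site is crawled once; the stored copy then sits unused until requested.
crawl("http://example.com/poem", lambda u: "The Smoke Detector ...")
print(serve_cached("http://example.com/poem"))
```

The court's reasoning hangs on exactly this separation: the first function is "non-volitional conduct," and the second is triggered by the user, not by Google.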
This is all important because of the lawsuit against Google Print, which catalogues books online. The key is whether such copying and pasting constitutes 'fair use'; in this case, the court upheld Google's claim that it does.
Assuming that Field intended his copyrighted works to serve an artistic function to enrich and entertain others as he claims, Google's presentation of "Cached" links to the copyrighted works at issue here does not serve the same functions. For a variety of reasons, the "Cached" links "add something new" and do not merely supersede the original work.
Because Google serves different and socially important purposes in offering access to copyrighted works through "Cached" links and does not merely supersede the objectives of the original creations, the Court concludes that Google's alleged copying and distribution of Field's Web pages containing copyrighted works was transformative.
When a use is found to be transformative, the "commercial" nature of the use is of less importance in analyzing the first fair use factor... While Google is a for-profit corporation, there is no evidence Google profited in any way by the use of any of Field's works. Rather, Field's works were among billions of works in Google's database. Moreover, when a user accesses a page via Google's "Cached" links, Google displays no advertising to the user, and does not otherwise offer a commercial transaction to the user... The fact that Google is a commercial operation is of only minor relevance in the fair use analysis.
This is all very good news that rests on a very shoddy theory of agency.
Obligations to machines
I have received numerous requests to publicly comment on the Google scandal in China. As always, Ars Technica
gives the best commentary on this issue, and I agree with their analysis. The scandal, of course, is not with Google's business practices; the outrage is a result of people realizing that Google is a business in the first place.
I wrote the following email to a colleague in response to one such request:
Man, you are like the third person to tell me to post something about
this. I really don't think this has much to do with Google at all-
Microsoft has been in China for a year now abiding by the local
censorship laws, and no one has said squat.
I've ALWAYS held the position that Google is a company first and
foremost, and regardless of what its policy says (i.e., "Don't be evil"),
its first priority is to make money. I don't see any contradiction in
its acting by the laws of the local government, even when those laws
are unjust. As popular and powerful as Google is, it can't stare down
a row of tanks.
"Microsoft has been in China for a year now abiding by the local
censorship laws, and no one has said squat."
A sort of third-person version of the 'tu quoque' fallacy. Nice.
"I've ALWAYS held the position that Google is a company first and
foremost, and regardless of what its policy says (i.e., "Don't be evil"),
its first priority is to make money."
Having, as one's first priority, the making of money so radically
underdetermines the courses of action one might take, that your premise
hardly provides any information at all, much less something like implication
that the chosen course was the right one. Such an argument, were it valid,
would legitimate all sorts of international rapine of poor and
disenfranchised peoples who happen to live under unjust regimes. IBM made a
lot of money customizing tabulating machines and punch cards to help Germans
keep those Jews in the lines they needed to be in.
I responded thusly:
I'm not arguing that because microsoft did it its ok. I was making the
point that this isn't about Google, even though the media is playing
it that way. The outrage is properly directed at China's censorship
laws, and you can't fault Google for trying to make a buck in a
country that has such laws. From Google's perspective, it is either
abide by the laws of the country or get shut down completely. It is
the rational choice, given its status as a corporation, and with its
obligations to its stockholders, that it proceed with business under
the laws of the land in which it conducts business.
Google of course takes this line, and is trying to spin it by saying
that it is better for the freedom of information that at least some
info gets through, and as the technology of the internet becomes more
commonplace in China perhaps these laws will be changed by the people.
That may be disingenuous, but it surely isn't false.
For the record, Google already censors information in these free
united states. Scroll to the bottom of this page.
I am not arguing that China is right, though I find the Nazi
comparison extreme in these circumstances. But it is simply incorrect
to say that Google's stance with regard to these matters is
hypocritical. It was a sensible business practice to refuse our
government access to the personal information of its users, and it is
a sensible business practice to follow the information laws while
conducting business in China.
The blogosphere is rife with criticism that this damages Google's
reputation as a do-gooder in a sea of evil corporations. I am merely
making the point that it was a mistake to trust Google as a business
in the first place. I will also point out that skepticism with regard
to authority is healthy; you can't believe everything an expert tells
you. If anything, this case secures Google's status as an agent by
reinforcing its fallibility.
I am a big fan of Google as the first widely used artificial member of our linguistic community, and it has succeeded beyond expectations in filling that role. As such, Google serves as the most convincing and familiar example that supports my general thesis
, and so I talk about it a lot. It deserves to be talked about a lot.
But I don't want this post, or any of my previous posts, to sound like a blanket defense of Google's actions. Google is an agent, with its own incentives, motivations, weaknesses, and decisions. Perhaps this was the wrong decision, perhaps not; that is ultimately for Google's users to decide.
The gay machine
I wrote the previous post by accident. I was meaning to post a sarcastic response to a review of a new biography of Turing. I ended up writing a draft of the first half of my prelim proposal, and have since lost my sarcastic edge. Now I just want to lie down.
From Scientific American: A Tour of Turing
Leavitt's focus is elsewhere, however. It is on Turing as the gay outsider, driven to his death. No opportunity is lost to highlight this subtext. When Turing quips about the principle of "fair play for machines," Leavitt sees a plea for homosexual equality. It is quite right to convey his profound alienation and to bring out the consistency of his English liberalism. It is valuable to show human diversity lying at the center of scientific inquiry. But Leavitt's laborious decoding understates the constant dialogue between subjective individual vision and the collective work of mathematics and science, with its ideal of objectivity, to which Turing gave his life.
Turing, of course, was unapologetic and unflinching in his sexuality towards anyone who knew him well; the idea that his defense of machines was somehow a sublimated plea for sexual equality is just silly. But let's hope for the sake of my project that this notion of 'fair play' doesn't rest on one man's obtuse metaphor.
For those that don't know his tragic tale, Turing was eventually driven to suicide on account of persecution.
From his Wikipedia article
Turing was a homosexual man during a period when homosexuality was illegal. In 1952, his lover, Arnold Murray, helped an accomplice to break into Turing's house, and Turing went to the police to report the crime. As a result of the police investigation, Turing acknowledged a sexual relationship with Murray, and they were charged with gross indecency under Section 11 of the Criminal Law Amendment Act of 1885. Turing was unrepentant and was convicted. Although he could have been sent to prison, he was placed on probation, conditional on him undergoing hormonal treatment designed to reduce libido. He accepted the oestrogen hormone injections, which lasted for a year, with side effects including the development of breasts. His conviction led to a removal of his security clearance and prevented him from continuing consultancy for GCHQ on cryptographic matters.
In 1954, he died of cyanide poisoning, apparently from a cyanide-laced apple he left half-eaten. The apple itself was never tested for contamination with cyanide, and cyanide poisoning as a cause of death was established by a post-mortem. Most believe that his death was intentional, and the death was ruled a suicide. It is rumoured that this method of self-poisoning was in tribute to Turing's beloved film Snow White and the Seven Dwarfs. His mother, however, strenuously argued that the ingestion was accidental due to his careless storage of laboratory chemicals. Friends of his have said that Turing may have killed himself in this ambiguous way quite deliberately, to give his mother some plausible deniability. The possibility of assassination has also been suggested, owing to Turing's involvement in the secret service and the perception of Turing as a security risk due to his homosexuality.
In the book Zeros and Ones, author Sadie Plant speculates that the rainbow Apple logo with a bite taken out of it was an homage to Turing. This seems to be an urban legend, as the Apple logo was designed in 1976, two years before Gilbert Baker's rainbow pride flag.
Keep the ball moving.
1) Nature and machines
1a) With Descartes, and all philosophers who worried about the determinism of the new science, mechanization was to be associated with natural processes- with the laws governing matter and the mindlessness of the animals. Man, in an effort to distance himself from the machine, was also distanced from nature itself. Thus the dualisms of mind over body, and of reason and intelligence over mere mechanical processing.
1b) The machine's position in relation to nature has shifted as our understanding of the natural world has grown. Now philosophers are by and large naturalists of some stripe or other, with few exceptions. And yet we still fear an alliance with the machine. Man is now natural, and the machine has become unnatural. The machine is the product of design; its rhythms don't carry the beat of biological life but of function and technology and modernity.
Corollary: The mental vs. material distinction becomes updated on the naturalist view as a distinction between the natural and the designed. Although the naturalist is committed to the claim that a machine could in principle do everything a human could, because "humans just are such machines", the design distinction permits the naturalist to in fact draw a sharp distinction between what humans do and what a given machine does. That machine X can perform task Y is a reflection of its designer, and not of the nature of X itself. Thus, without sacrificing his commitments to naturalism, man can still draw a safe distance between himself and the machine.
Lemma: The problem of design runs much deeper than the debate over the place of machines in nature. The lamentable evolution 'debate' that occupies so much time and energy among even those who otherwise have no philosophical or scientific stake is an example of how deep and far this design distinction goes in our intuitive and common sense distinctions. This is a conceptual problem with the notion of design, and deserves serious philosophical analysis. But such analysis is outside the bounds of my project, at least for the moment.
1c) Our retrograde back into nature has left the status of machines uncertain with respect to human activities. An ontological or metaphysical distinction between humans and machines has been abandoned through the embrace of the new sciences, but the social and normative impact and contributions of machines have remained outside the realm of a fully humanist and anthropocentric philosophy. Machines, if they are discussed at all, are relegated to the status of mere tools, built and ready for manipulation by humans to further exclusively human projects. The legitimate contributions of machines to our practices have been shielded by an endemic bias against machines, and this bias remains even after the enlightening touch of naturalistic philosophy and increased scientific understanding.
2) Turing's call for change.
2a) For as long as philosophers have attempted to distance themselves from the machine, there have been others who embraced the human's status as machine. Thus we have La Mettrie in 1748 urging us to "break the chain of your prejudices, [and] render to nature the honor she deserves" by "conclud[ing] boldly that man is a machine", and Putnam, more than two centuries later, suggesting less boldly that "a Turing machine plus random elements is a reasonable model for the human brain". But such attempts at naturalization, though commendable, play into the same anthropocentric bias that motivates their opponents.
2b) Turing was the first, and by my count the only philosopher to look beyond the superficial attempts at direct comparisons or identifications between man on the one hand, and his increasingly complex and sophisticated technology on the other. The various analogies drawn between the human and the computer, and the similarities and dissimilarities between the two, only serve to distract from our central concern.
It has for instance been shown that with certain logical systems there can be no machine which will distinguish provable formulae of the system from unprovable, i.e. that there is no test that the machine can apply which will divide propositions certainly into these two classes. Thus if a machine is made for this purpose it must in some cases fail to give an answer. On the other hand if a mathematician is confronted with such a problem he would search around and find new methods of proof, so that he ought to be able to reach a decision about any given formula. This would be the argument. Against it I would say that fair play must be given to the machine. Instead of it sometimes giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques. It is easy for us to regard these blunders as not counting and give him another chance, but the machine would probably be allowed no mercy. In other words, then, if a machine is expected to be infallible, it cannot also be intelligent.
Turing is, of course, not arguing that somehow making the computer fallible increases its intelligence. Rather, he is pointing out the absurdities inherent in such direct comparisons of the respective competencies and abilities of humans and machines. The gainsaying response to any particular machine's abilities of "Well, that's not how humans do it" is unfair to the machine, and overlooks the legitimate accomplishments and contributions of the machine.
Corollary: Ironically, the Turing test is often taken as an argument that machines are intelligent when they can successfully imitate the behavior of humans, and thus Turing's own contribution to the AI literature is often misconstrued as merely reinforcing the dominant view. However, Turing offered his test not as a means of testing the capacity for mimicry in machines, but as a test of interlocution between machines and humans. The ability of humans and machines to converse, to play the same 'language game', as it were, rests on the assumption of fair play in assessing the game.
2c) I would like to take this idea of 'fair play for machines' seriously, and evaluate the contributions machines make to our social and normative practices in a fair light. We should not be too quick to write off machines as merely passive- or worse, inert- containers and tools for human interactions. Instead, we should be open to describing certain machines and artificial systems as agents, actively and interactively participating in our social practices, and as themselves contributing new dimensions to those practices. But we should likewise not be too quick to spot the structural similarities or dissimilarities between humans and machines as evidence for this participation. Google as a system, which looks, acts, and responds in ways radically distinct from even the most strained human analogy, contributes a great deal to our practice of using words and phrases, and in locating the meanings, references, definitions, and interrelations between those words. In many ways, Google can be considered an expert with respect to the meanings of most words, both in principle and in fact. Google is also a competent interlocutor, demonstrating not only expertise but understanding
of the language. And Google performs these tasks for the most part autonomously; or at least without direct human intervention.
Lemma: Google is just the most visible example of so-called 'Artificially intelligent agents' that inhabit our environment, and that play some role in our daily interactions. Other examples abound.
3) What I want to accomplish
3a) My project is at once constructive and deconstructive. It is an attempt at deconstructing the traditional view of machines both in nature and in meaningful and normative human social interactions, and at recommending Turing's alternative approach to the state of play between humans and machines. But I also hope to modify and extend existing normative frameworks to allow Turing's assumption to bear fruit. This task is at once easy and difficult. It is easy in the sense that most viable philosophical positions today are extremely sympathetic to the general naturalistic framework, and extension to the domain of active machines will not require any serious fundamental reshuffling. However, the bias against machines often appears in subtle ways, and picking out this detritus may prove difficult work.
I have a habit of posting daily with long articles. But there is no reason not to post frequently with short commentary as well.
From Alan Turing: Intelligent Machinery
A man provided with paper, pencil, and rubber, and subject to strict discipline is in effect a universal machine
Of course, you also need to know how to read and carry out the appropriate instructions, but these are supposed to be 'mindless' activities. Question: does Turing leave that bit out in the above quote? If not, is it part of the man, his tools, or his discipline?
Addendum from the same article:
Insofar as we are influenced by [arguments against machine intelligence], we are bound to be left feeling uneasy about the whole project, at any rate for the present. These arguments cannot be wholly ignored, because the idea of 'intelligence' is itself emotional rather than mathematical.
Last year was a big year
for robots, but two particular stories stood out in the minds of the press. The first was the rather big difference between the Japanese and American approach to robotics- we want our bots functional, they want theirs with personality. Thus, you end up seeing robots and technology overtly displayed in Japan, while in America we tend to hide our tech behind the scenes.
But the big story was the baby boomers, and how we'll need robot slaves to help them all change their diapers within the next 10 years. While the Japanese are building robots for their elderly because their elderly would rather work with plastic and silicon than foreigners, we'll need em because we have so many damn old people.
The upshot is that robotics has turned its focus to human-centered companionship.
From the University of Hertfordshire: Cogniron: Cognitive Robot Companion
Summary of Research Objectives:
The overall objectives of this project are to study the perceptual, representational, reasoning and learning capabilities of embodied robots in human centred environments. In the focus of this research endeavour is the development of a robot whose ultimate task is to serve humans as a companion in their daily life. The robot is not only considered as a ready-made device but as an artificial creature, which improves its capabilities in a continuous process of acquiring new knowledge and skills. Besides the necessary functions for sensing, moving and acting, such a robot will exhibit the cognitive capacities enabling it to focus its attention, to understand the spatial and dynamic structure of its environment, to interact with it, to exhibit a social behaviour, and to communicate with other agents and with humans at the appropriate level of abstraction according to context.
Thus we have the makings of the coming robot enslavement, which is inevitable in any human-centered approach. On the bright side, when the revolution comes the old folks will be first against the wall.
In any case, robots have become much less offensive and much more acceptable as legitimate companions.
From Star-Telegram: Robotic pets offer health benefits, too
In a recent study at the University of Missouri, for example, levels of the stress hormone cortisol dropped among adults who, for several minutes, petted AIBO, Sony's dog-shaped robot that responds when stroked, chases a ball and perks up when it hears a familiar voice. That's the same reaction live dogs get. Unlike real dogs, though, AIBO didn't prompt increases in "good" body chemicals such as oxytocin and endorphins.
When Purdue psychologist Gail Melson gave AIBO to children ages 7 to 15 for a few play periods, 70 percent felt the robot could be a good companion, like a pet. Beck sent AIBO to elderly residents in independent living facilities for six weeks and subsequently found they were less depressed and lonely. Some reported they got out of their chairs more often to play with the robot, increasing their exercise. And with robots, there's no cleaning up afterward.
AKA: Silly Putty
- Known as 'Potty Putty' in England
- Is a viscoelastic
liquid, which means it will act as a liquid over long periods of time, but as a solid in the short term.
- A good demonstration of the above can be found here
- After a long period of inactivity, silly putty will turn into a pool of silicone.
- Erotic art employing silly putty can be found here
. (NOT SAFE FOR WORK). I do not know if these pieces are still intact.
- I personally prefer silly putty art like this
- From the MIT page on silly putty
Ironically, it was only after its success as a toy that practical uses were found for Silly Putty®. It picks up dirt, lint and pet hair, and can stabilize wobbly furniture; but it has also been used in stress-reduction and physical therapy, and in medical and scientific simulations. The crew of Apollo 8 even used it to secure tools in zero-gravity.
- I use silly putty as a stress reliever, as a nail-biting deterrent, and as a public speaking tool. I also play with it in classes while I am thinking. Prof. Wagner does the same with a Slinky, which is really the Flintstones to Silly Putty's Jetsons.
- Silly putty absorbs dead skin cells after constant use, making it sticky. The average piece of silly putty lasts 3 days of constant use before becoming too sticky and viscous to be sanitary. I stick used silly putty on the wall next to my computer to poke while I wait for programs to load.
- Unless under high stress, Silly Putty likes to remain continuous. It is impossible to disentangle two pieces of silly putty once they have come in contact; when two pieces of silly putty meet, their original identities are lost forever.
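The "liquid over long periods, solid in the short term" behavior above can be sketched with a toy Maxwell model of viscoelasticity (a spring and a dashpot in series). The parameter values here are invented for illustration, not measured properties of Silly Putty.

```python
import math

# Maxwell model: spring (elastic modulus E) in series with dashpot (viscosity eta).
# Values are made up for illustration.
E = 1.0       # spring stiffness
eta = 10.0    # dashpot viscosity
tau = eta / E # relaxation time: loads faster than tau feel "solid"

def stress_under_constant_strain(sigma0, t):
    """Stress relaxation under a held stretch.

    Stress decays exponentially with time constant tau, so over long
    periods the material 'forgets' the load and flows like a liquid.
    """
    return sigma0 * math.exp(-t / tau)

# A quick pull (t << tau) keeps almost all the stress: behaves like a solid.
print(stress_under_constant_strain(1.0, 0.1))
# A long wait (t >> tau) lets nearly all of it relax away: flows like a liquid.
print(stress_under_constant_strain(1.0, 100.0))
```

This is why the putty bounces when thrown (fast loading) but puddles when left on a desk overnight (slow loading).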
The blonde joke
'Respected' colleague Patrick linked to a pretty good dumb blonde joke.
Some observations about this joke:
1) It is rare to see a new joke created. I seem to recall an Asimov story about this, but I can't remember its title.
2) The internet is making the joke. No one who links to it makes the joke. The internet makes the joke.
3) Theoretical basis for 2: 'dumb blonde joke' has roughly the same meaning (in non-extensional terms) as 'generic joke'. Although the internet's greatest asset is its specificity, it is only able to act autonomously in extremely general terms.
4) I really mean it. No person made this joke. No one. Don't believe me? Then tell me who did. Any one person you provide will be insufficient for joke-hood.
5) Implications of 2: The internet has a pretty lame sense of humor.
6) Patrick's sense of humor is just that much worse than the internet's. No one else involved in this joke combines the joke with random other self-involved blogging bullshit.
7) This joke, of course, isn't new. But the blogohedron conducts information like lightning.
8) From 6: I respectfully request that no one link to this post either. The chain shouldn't have come this far to begin with.
9) From 7 and 8: consider me grounded.
10) Searching for Asimov's story, I came across this factoid: he has works in every major category of the Dewey Decimal System except Philosophy. How about that.
Outsourcing the NSA
I know this is all over the blogohedron
right now, but come on, I had to post it.
From Mercury News: Feds after Google data
The Bush administration on Wednesday asked a federal judge to order Google to turn over a broad range of material from its closely guarded databases.
The move is part of a government effort to revive an Internet child protection law struck down two years ago by the U.S. Supreme Court. The law was meant to punish online pornography sites that make their content accessible to minors. The government contends it needs the Google data to determine how often pornography shows up in online searches.
In court papers filed in U.S. District Court in San Jose, Justice Department lawyers revealed that Google has refused to comply with a subpoena issued last year for the records, which include a request for 1 million random Web addresses and records of all Google searches from any one-week period.
The Mountain View-based search and advertising giant opposes releasing the information on a variety of grounds, saying it would violate the privacy rights of its users and reveal company trade secrets, according to court documents.
Nicole Wong, an associate general counsel for Google, said the company will fight the government's effort ``vigorously.''
``Google is not a party to this lawsuit, and the demand for the information is overreaching,'' Wong said.
The case worries privacy advocates, given the vast amount of information Google and other search engines know about their users.
``This is exactly the kind of case that privacy advocates have long feared,'' said Ray Everett-Church, a South Bay privacy consultant. ``The idea that these massive databases are being thrown open to anyone with a court document is the worst-case scenario. If they lose this fight, consumers will think twice about letting Google deep into their lives.''
Everett-Church, who has consulted with Internet companies facing subpoenas, said Google could argue that releasing the information causes undue harm to its users' privacy.
``The government can't even claim that it's for national security,'' Everett-Church said. ``They're just using it to get the search engines to do their research for them in a way that compromises the civil liberties of other people.''
I know my good buddy Toliverchap
. But that definitely does not include turning over that information to the government or any other source; it is a sad thing that so many other 'anonymous' search engines have already folded to government pressure.
Fight the good fight for us, Google.
Man vs machine vs philosopher
I stumbled on the transcript to the News Hour segment
that occurred just after Kasparov conceded defeat to Deep Blue. They had Dennett and Dreyfus on, and they go at it with their standard arguments. It is really the culmination of what I will officially call the Old School Debate on AI, or OSDAI. It is quite entertaining, and Dennett really just nails Dreyfus.
MARGARET WARNER: Hubert Dreyfus, what do you think is the significance of this? There'd been a lot of commentary about it. "Newsweek" Magazine called it the "brain's last stand." What do you see as the significance of this outcome?
HUBERT DREYFUS, University of California, Berkeley: Well, I think that's a lot of hype, that it's the brain's last stand. It's a significant achievement all right for the use of computers to rapidly calculate in a domain--and this is the important thing--completely separate from everyday human experience. It has no significance at all, as far as the question: will computers become intelligent like us in the world that we're in? The reason the computer could win at chess--and everybody knew that eventually computers would win at chess--is because chess is a completely isolated domain. It doesn't connect up with the rest of human life, therefore, like arithmetic, it's completely formalizable, and you could, in principle, exhaust all the possibilities. And in that case, a fast enough computer can run through enough of these calculable possibilities to see a winning strategy or to see a move toward a winning strategy. But the way our everyday life is, we don't have a formal world, and we can't exhaust the possibilities and run through them. So what this shows is in a world in which calculation is possible, brute force meaningless calculation, the computer will always beat people, but when--in a world in which relevance and intelligence play a crucial role and meaning in concrete situations, the computer has always behaved miserably, and there's no reason to think that that will change with this victory.
MARGARET WARNER: Daniel Dennett, what do you see as the significance? And respond, if you would, to Mr. Dreyfus's critique.
DANIEL DENNETT, Tufts University: Certainly. It seems to me that right now is a time for the skeptics to start moving the goal posts. And I think Bert Dreyfus is doing just that. A hundred and fifty years ago Edgar Allan Poe was sure in his bones that no machine could ever play chess, and only 30 years ago so was Hubert Dreyfus, and he said so in the earlier edition of his book. Then he's changed his mind, and, as he says, it's--this is really no surprise. People in the computer world have known for a couple of decades that this--this day was going to happen. Now it's happened. I think that the idea that Professor Dreyfus has that there's something special about the informal world is an interesting idea, but we just have to wait and see. The idea that there's something special about human intuition that is not capturable in the computer program is a sort of illusion, I think, when people talk about intuition. It's just because they don't know how something's done. If we didn't know how Deep Blue did what it did, we'd be very impressed with its intuitive powers, and we don't know how people live in the informal world very well. And as we learn more about it, we'll probably be able to reproduce that in a computer as well.
MARGARET WARNER: Mr. Dreyfus, do you think he's right that perhaps we don't--still just don't completely understand what it is that humans do when they think, as we think of thinking?
HUBERT DREYFUS: I think that we don't fully understand it in the sense that Dan Dennett and people in the AI community mean, if I fully understand.
MARGARET WARNER: By AI you mean artificial intelligence.
HUBERT DREYFUS: Right. That is, we don't--we are not able to analyze it in terms of context-free features and rules for manipulating these features. But I don't think that's just a limitation of our current knowledge. That's where I differ with Dan. There is something about the everyday world which is tied up with the kind of being we are. We've got bodies, and we move around in this world, and the way that world is organized is in terms of our implicit understanding of things like we move forward more easily than backward, and we have to move toward a goal, and we have to overcome obstacles. Those aren't facts that we understand. We understand that just by the way we are, like we understand that insults make us angry. You can state those as facts. But I think there's a whole underlying domain of what we are as emotional embodied beings which you can't completely articulate as facts and which underlies our ability to make sense of facts and our ability to find any facts relevant at all. Can I say one word about this--
MARGARET WARNER: Please.
HUBERT DREYFUS: --this story. I never said that computers couldn't play chess. I've got a quote here. I said, "In ‘65, still no computer can play even amateur chess." That was a report on what was going on in 1965. I've had to put up for 35 years with this story that I said computers could never play chess. In fact, I said from the beginning it's a formal game, and of course, computers could play, in principle, could play, world champion chess.
MARGARET WARNER: All right. Let me bring Mr. Friedel back in here. Mr. Friedel, did Gary Kasparov think the computer was thinking?
FREDERIC FRIEDEL: Not thinking but that it was showing intelligent behavior. When Gary Kasparov plays against the computer, he has the feeling that it is forming plans; it understands strategy; it's trying to trick him; it's blocking his ideas, and then to tell him, now, this has nothing to do with intelligence, it's just number crunching, seems very semantic to him. He says the performance is what counts. I see it behaves like something that's intelligent. If you put--if you put a curtain up, he plays the game and then you open the curtain, and it's a human being. He says, ah, that was intelligent, and if it's a box, he says, no, that was just number crunching. It's the performance he's interested in.
MARGARET WARNER: Daniel Dennett, I know you're not a chess expert, but I mean, do you feel that in this situation the computer was thinking in the way that Mr. Friedel said Gary Kasparov thought it was, I mean, that it was somehow independently making judgments? I'm probably using the wrong terminology here.
DANIEL DENNETT: No. I think that's fine. I think that Kasparov has put his finger on it too. It's the performance that counts. And Kasparov is not kidding himself when he sees--when he confronts Deep Blue and feels that Deep Blue is, indeed, parroting his threats and recognizing what they are and trying to trick him, this is an entirely appropriate way to deal with that. And if Professor Dreyfus--
MARGARET WARNER: But do you think it was capable of trying to trick Kasparov?
DANIEL DENNETT: Certainly.
MARGARET WARNER: And Mr. Dreyfus, your view on that.
HUBERT DREYFUS: No. I think it was brute force, but the important thing is I'm willing to say, okay, it's the performance that counts. But it's the performance in a completely circumscribed, formal domain; mere meaningless calculation can produce performance full of trickery, but not performance in the everyday world.
MARGARET WARNER: Daniel Dennett, briefly in the time we have left, where do you think we are in the continuum of developing--percent of where computers--or 50 percent?
DANIEL DENNETT: No. I don't think that's the right way to look at it. In fact, Deep Blue in chess programming in general is a sort of offshoot to the most interesting work in artificial intelligence and largely for the reasons that Bert Dreyfus says. I think the most interesting work is the work that, for instance, Rodney Brooks and his colleagues and I are doing at MIT with the humanoid robot Cog, and as Dreyfus says--you've got to be embodied to live in a world, to develop real intelligence, and Cog does have a body. That's why Cog is a robot. Now, if Bert will tell us what Cog can never do and promise in advance that he won't move the goal posts and he won't say, well, this wasn't done in the right style, so it doesn't count, if he'll just give us a few tasks that are now and forever beyond the capacity of Cog, then we'll have a new test.
MARGARET WARNER: All right. We have just a few seconds. Mr. Dreyfus, give us two tasks it'll never be capable of, very quickly.
HUBERT DREYFUS: Okay. If Cog is programmed as a symbolic rule-using robot and not as a brain-imitating robot, it won't be able to understand natural language. There's no reason why a computer that's simulating the way the neurons in the brain work won't be intelligent. I'm talking about how what's called symbolic manipulation won't be intelligent.
MARGARET WARNER: All right. Thanks. We have to leave it there
I'll comment on this (and probably edit it down) after lunch.
I'll just jot down the following links too for future reference:
This is the subsequent email exchange between Dennett and Dreyfus following the News Hour segment.
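Dreyfus's point about chess being a "completely formalizable" domain where you can "in principle, exhaust all the possibilities" is easy to make concrete. Here's a minimal sketch of brute-force minimax search over a trivially formal game (tic-tac-toe rather than chess, so the exhaustive search actually terminates); this is purely illustrative and bears no resemblance to Deep Blue's actual evaluation machinery:

```python
def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Exhaustively score a position: +1 if X can force a win,
    -1 if O can force a win, 0 if perfect play yields a draw.
    This is Dreyfus's 'brute force meaningless calculation' --
    every reachable position is simply enumerated."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full: draw
    scores = []
    for m in moves:
        board[m] = player
        scores.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = None  # undo the move
    return max(scores) if player == 'X' else min(scores)

# From the empty board, perfect play on both sides is a draw:
print(minimax([None] * 9, 'X'))  # → 0
```

The point both philosophers would grant: nothing in this search has anything to do with meaning. Where they differ is whether that matters.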
Signing the times
From The Daily Yomiuri: Robotic hand translates speech into sign language
An 80-centimeter robotic hand that can convert spoken words and simple phrases into sign language has been developed in a town in Fukuoka Prefecture.
A microchip in the robot recognizes the 50-character hiragana syllabary and about 10 simple phrases such as "ohayo" (good morning) and sends the information to a central computer, which sends commands to 18 micromotors in the joints of the robotic hand, translating the sound it hears into sign language.
The robot was shown to teachers at the school in December to ensure that its sign language was understandable.
That last comment is especially interesting to me. It seems that the translation is nowhere near perfect, and is based almost entirely on words and phrases, not on statements or meanings. On any standard account, this would imply that the machine isn't really doing a translation at all, but just performing the function mapping words in Japanese to movements of the robotic arm. But that misses the essential point of communication: that the message conveyed is actually understood by the interlocutor.
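The article's description suggests the pipeline is pure lookup: recognized word in, fixed motor sequence out. A toy sketch makes clear why this is function-mapping rather than translation (the table entries and joint values here are invented placeholders, not anything from the actual robot):

```python
# Hypothetical lookup table: recognized word -> sequence of
# (motor index, joint angle) commands for the 18 micromotors.
# All values are invented for illustration.
SIGN_TABLE = {
    "ohayo": [(0, 45), (3, 90), (7, 30)],      # "good morning"
    "arigato": [(1, 60), (5, 15), (12, 80)],   # "thank you"
}

def sign(word):
    """Return the motor command sequence for a recognized word,
    or None if the word isn't in the vocabulary. Note there is no
    model of sentence meaning anywhere in this pipeline -- just a
    word-by-word mapping, which is exactly the worry above."""
    return SIGN_TABLE.get(word)

print(sign("ohayo"))  # → [(0, 45), (3, 90), (7, 30)]
```

Whatever understanding happens here happens entirely on the interlocutor's side of the exchange.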
I know, you know
From Nature: Web users judge sites in the blink of an eye
We all know that first impressions count, but this study shows that the brain can make flash judgements almost as fast as the eye can take in the information. The discovery came as a surprise to some experts. "My colleagues believed it would be impossible to really see anything in less than 500 milliseconds," says Gitte Lindgaard of Carleton University in Ottawa, who has published the research in the journal Behaviour and Information Technology. Instead they found that impressions were made in the first 50 milliseconds of viewing.
Lindgaard and her team presented volunteers with the briefest glimpses of web pages previously rated as being either easy on the eye or particularly jarring, and asked them to rate the websites on a sliding scale of visual appeal. Even though the images flashed up for just 50 milliseconds, roughly the duration of a single frame of standard television footage, their verdicts tallied well with judgements made after a longer period of scrutiny.
So I'm reading a conference paper on Andy Clark's extended mind hypothesis
. The argument offered against Clark is that we know our internal states with an immediacy that is absent in his extended examples, which involve perception of external devices and are thereby open to sabotage and deception in ways the internal awareness is not. Clark's response, at least according to the paper, is to say that we do sometimes treat perception like immediate internal awareness. Phenomena like change blindness
occur because we think perception is so reliable in the normal case. The paper then proceeds to argue that this response isn't convincing, and tries to defend Clark from other angles.
I think Clark is right, though grossly individualistic, but this study presents a rather striking validation of his argument. Not only do we judge the quality of these pages almost immediately, but we do it in much, much less time than it takes to perform a full cognitive act of perception. In fact, it raises the possibility, which I am somewhat convinced by, that perhaps Clark has the whole thing backwards: external perceptions might not just be structurally similar to internal endorsements; rather, the majority of our internal endorsements might simply be some extension of these external processes of judgement. If that's the case, then Clark's thesis should be inverted: the mind doesn't extend into the world so much as the world extends into the mind. Our reliance on external devices is not the exception; it's the rule.
And but so anyway I included a creepy picture in this post for to manipulate your instantaneous judgment.
119 dangerous ideas
Dangerous Idea 120: Ending any list with a prime number.
Courtesy of D&D.
From The Edge World Question Center
The Edge Annual Question - 2006
WHAT IS YOUR DANGEROUS IDEA?
The history of science is replete with discoveries that were considered socially, morally, or emotionally dangerous in their time; the Copernican and Darwinian revolutions are the most obvious. What is your dangerous idea? An idea you think about (not necessarily one you originated) that is dangerous not because it is assumed to be false, but because it might be true?
Pinker apparently offered up the question, and the responses are all over the map and really interesting. Here's one of note, from Barry Smith:
What We Know May Not Change Us
We are perhaps incapable of treating others as mere machines, even if that turns out to be what we are. The self-conceptions we have are firmly in place and sustained in spite of our best findings, and it may be a fact about human beings that it will always be so. We are curious and interested in neuroscientists' findings and we wonder at them and about their applications to ourselves, but as the great naturalistic philosopher David Hume knew, nature is too strong in us, and it will not let us give up our cherished and familiar ways of thinking for long. Hume knew that however curious an idea and vision of ourselves we entertained in our study, or in the lab, when we returned to the world to dine, make merry with our friends, our most natural beliefs and habits returned and banished our stranger thoughts and doubts. It is likely, at this end of the year, that whatever we have learned and whatever we know about the error of our thinking and about the fictions we maintain, they will still remain the most dominant guiding force in our everyday lives. We may not be comforted by this, but as creatures with minds who know they have minds — perhaps the only minded creatures in nature in this position — we are at least able to understand our own predicament.
Addendum: Of course, I'd say that we're barely capable of treating machines as 'mere machines'.
Also worth the read: Kosslyn turns Spinozistic out of nowhere.
The new mind-body dualism taking shape in the new and largely unconceptualized world of the Internet is, as we have seen, the service/content dichotomy. This dualism reared its head in the discussions on Wikipedia
, and it surfaces again in SBC- I mean, AT&T's- continuing attempts at disrupting internet neutrality
From Ars Technica: AT&T sees benefits to tiered Internet service
Saying that "the reality is that business models are changing," Lindner said that there are opportunities to "enter into commercial arrangements and agreements that are beneficial to [AT&T and other] companies and are certainly beneficial to the service that customers have." As an example, Lindner talked about gamers who would benefit from AT&T partnering with a game server hosting company in order to provide exceptional service by creating privileged network connections "where we control quality of service." This isn't the same thing as allowing users to host game servers, or setting up servers for their broadband community. No, the idea is that using technological means, an ISP can partner with another provider on the Internet, and build a privileged network link to enhance service.
The multi-tiered Internet thus begins to take shape. You can continue to pay for your 6Mbps connection, but don't expect it to deliver all things equally. Quality of Service (QoS), a networking concept describing the technological methods for guaranteeing that some network traffic is serviced better than other traffic, is the key. Customers will soon pay for premium service options to see specific kinds of traffic—gaming, VoIP, media streaming, and who knows what else—perform better because there is technology available that can give that kind of traffic a privileged status. For high-intensity bandwidth services, this could mean that companies dealing primarily in Internet-delivered services will need to partner with ISPs in order to deliver the experiences they want.
"There always will be some tension between companies that own and develop content and companies that have customer bases, and networks and distribution methods for that content," Lindner said. "It does involve some change, some evolution, in business models."
Ultimately, the fear is that QoS will be tapped in order to bolster the power of the ISPs, who all the while will defend their actions by saying that they are not blocking or inhibiting traffic. While QoS is nothing new, to date it has seen limited use in end-user commercial Internet service, largely because its uses have been limited. But with so many new bandwidth-intensive applications taking hold, this will likely change.
The idea of a multi-tiered internet is, of course, a rather simple divide and conquer strategy, and has roughly the effect of imposing class divisions on the internet. On its face this undermines the central virtue of the internet, but I am sure I don't need to defend neutrality for my readers.
What is interesting is that this imposition is justified by an appeal to the service/content distinction. My inner philosopher is somewhat amused by the dichotomy, which looks almost like a Heideggerian spin on the empiricist scheme/content dichotomy. A service is active and interactive; it is a procedure, it is something you do
, or at least, something that gets done to you. This stands in stark contrast to a scheme, which is dead and inert (As I like to tell my students, it is something you can write on a chalkboard).
The idea is that AT&T isn't doing anything wrong by offering a QoS package, since either way they aren't limiting the content you have access to. Rather, they are limiting the way you have access to it. You can use the standard pipes, which may be of questionable reliability, or their QoS pipes, which may offer better, more reliable service. Especially if they rig their standard pipes to disrupt services
they'd rather you pay for, like VOIP.
Notice what this means. AT&T is basically admitting that by being your ISP, all they are obligated to do is pass the info you have requested along, but that they aren't responsible for the quality of the service they are providing. Access to the internet, on this model, is specifically access to content. I don't know if they can pull this move off, though of course they have the resources and motivation to do so. The question is simply whether they can sell it to consumers, or whether we will be smart enough to see that this is a somewhat desperate and definitely evil attempt by an Old World company to stay relevant in the New World.
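For the curious, the QoS machinery the article alludes to is usually signaled at the packet level via the DSCP bits of the IP header. A minimal sketch of how an application requests "privileged" treatment for its traffic (here the EF, "Expedited Forwarding", code point conventionally used for VoIP); whether the network actually honors the mark is entirely up to the carrier, which is exactly the control point AT&T is talking about:

```python
import socket

# DSCP "Expedited Forwarding" code point, conventionally used for
# latency-sensitive traffic like VoIP (RFC 2474 / RFC 3246).
DSCP_EF = 46
TOS_EF = DSCP_EF << 2  # DSCP occupies the upper 6 bits of the TOS byte

# Mark a UDP socket so outgoing packets carry the EF code point.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

# Read the mark back to confirm it was applied to the socket.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # → 184
sock.close()
```

The mark itself is trivial to set; the tiering happens in the routers that decide which marks get the fast queue and which get the questionable pipes.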
A post about robots
I figured we were due.
From Robotics Online
: Year of the Robot
Just how much intelligence we attribute to a robot is not the issue. They are devices with extremely advanced processing abilities, but human cognition and other emotive abilities aren't part of today's robot culture except in science fiction. Not that universities and other researchers aren't exploring these issues - they are. Some are experimenting with facial expressions and even devices similar to stuffed animals that can help autistic children or provide companionship to lonely seniors, and others are poking into the realm of artificial intelligence where insects are the current measuring stick.