Pictures are cool
BBC: Top Ten Science Pictures of the Year
Migraine attacks can cause a variety of visual symptoms (aura) as well as the notorious stabbing head pain. This is a representation of a barn seen during an attack, painted by an artist and migraine sufferer.
Representation of panspermia - a theory that the seeds of life are found throughout the Universe, and that life arose on Earth when such seeds landed here early in geological history. The image shows eggs shattering and releasing smaller eggs.
The latter picture, while not particularly interesting in itself, did lead me to look up panspermia. Of note:
A second prominent proponent of panspermia is Nobel prize winner Francis Crick, who along with Leslie Orgel proposed the theory of directed panspermia in 1973. This suggests that the seeds of life may have been purposely spread by an advanced extraterrestrial civilization. Crick argues that small grains containing DNA, or the building blocks of life, fired randomly in all directions is the best, most cost effective strategy for seeding life on a compatible planet at some time in the future. The strategy might have been pursued by a civilization facing catastrophic annihilation, or hoping to terraform planets for later colonization.
1) Say Crick's theory is true. Does that make us the product of intelligent design? Consider that life would not have arisen on Earth without intelligent creatures performing this terraforming technique. Consider also that such creatures would have had to send sturdy DNA molecules out in space, perhaps engineering them to survive such an ordeal.
2) Should we take it as an essential task of science to continue such terraforming techniques, whether or not it was in fact responsible for our genesis? If so, what sorts of earth creatures would we send to space, and would we attempt to modify them significantly from their natural form?
3) Is it immature for me to giggle at the term 'panspermia'?
A is for awesome
Wired: R is for Robot
In the back of the lab, by a coffee table made from scrap pegboard left over from Rubi's exoskeleton, Movellan tells me in hushed tones about the epiphany that pushed him headlong into the world of affective computing. In the fall of 2002, he was working in Kyoto at ATR, the Japanese government's robot research lab, sinking deeper and deeper into the mathematics of machine perception, drifting in the intellectual tides and feeling uninspired by it all. "I was very skeptical. There was a robot there, and I didn't like it. It would say things like 'Hug me! Hug me!' It really irritated me." One day Movellan found himself using the robot to test an early version of the face-tracking program that he and Fasel developed here in La Jolla. "It worked really, really well. As I was testing it, I kept moving, and this robot kept looking at me, and his eyes moved in a particular way, and I got close, and this robot kept looking at me. And then it hugged me. And it completely got me." Movellan was shocked by the strength of his own response. "I said, 'What's happening here? I know this thing is dead. I mean, it's not alive. But I would swear that this thing is alive.'"
Oh God Yes
I'm sure this is everywhere on the internet right now, but let me indulge in my childhood imagination.
...this isn't science fiction. A set of extraordinary images captured by Japanese scientists marks the first-ever record of a live giant squid (Architeuthis) in the wild.
The animal, which measures roughly 25 feet (8 meters) long, was photographed 2,950 feet (900 meters) beneath the North Pacific Ocean. Japanese scientists attracted the squid toward cameras attached to a baited fishing line.
The scientists say they snapped more than 500 images of the massive cephalopod before it broke free after snagging itself on a hook. They also recovered one of the giant squid's two longest tentacles, which severed during its struggle.
edit: that leaves us 2/3rds of the way to the dream scenario:
What the hell are C-fibers?
They are free nerve endings
C-fibers are unmyelinated and as a result have a slower conduction velocity, lower than 2 m/s. These fibers are associated with chronic or dull pain. C-fibers are associated with sensations of cold, as well as mechanical and chemical stimuli. The density of cold receptors in the skin is greater than the density of warm receptors (by a factor of 5). It is thought that most or all C-fibers are nociceptors (responding only to noxious stimuli).
Aδ fibers are thin, myelinated fibers with a larger conduction velocity (2 to 30 m/s) and are associated with acute pain. In a real-life situation, this is the sharp pain that triggers the reflexes which result in "pulling away" from the stimulus (i.e., yanking a hand away from a hot stove). A certain proportion of Aδ fibers are also associated with sensations of heat and pressure.
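As a rough worked example of why the two fiber types feel so different, here's a sketch of the signal travel times. The 1 m hand-to-brain path and the particular velocities chosen are illustrative assumptions, just picked from within the ranges above:

```python
def travel_time(distance_m, velocity_m_per_s):
    """Seconds for a nerve signal to cover the distance at a given velocity."""
    return distance_m / velocity_m_per_s

# Illustrative assumptions: a 1 m hand-to-brain path, a C fiber at ~1 m/s
# (below the 2 m/s ceiling), and an A-delta fiber at ~20 m/s (within 2-30 m/s).
c_fiber_s = travel_time(1.0, 1.0)    # slow, dull pain signal
a_delta_s = travel_time(1.0, 20.0)   # fast, sharp pain signal

print(f"C fiber:  {c_fiber_s:.2f} s")   # 1.00 s
print(f"A-delta:  {a_delta_s:.2f} s")   # 0.05 s
```

So the sharp Aδ signal that yanks your hand off the stove arrives an order of magnitude or two before the dull C-fiber ache catches up.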
1) Should philosophers switch to Aδ fibers firing as physical correlates for pain?
2) Do you think Kripke was a masochist based on his use of C rather than Aδ fibers?
Jimmy Wales on Wikipedia
Wikipedia founder Jimmy Wales was interviewed on C-SPAN last night. You can read the transcript and watch the interview on the same page. Of interest:
LAMB: As I was doing - well, using Wikipedia to do the research for this interview I kept thinking when will Google or Yahoo! put Jimmy Wales out of business. And then I - as I read further, you're in business with them in some way.
WALES: Yes, in some way. I think we have - we're a non-profit organization that I founded. And we've gotten support from Yahoo! already and Google is very interested in supporting us. We're just still talking to them about what to do.
And Yahoo! has donated some servers. And I think what's interesting about that is that if you - you know, it's almost a joke but it's completely true. If you think about well why - why do Yahoo! and Google want to do this and well, their business model depends on the Internet not sucking and we hope the Internet not suck. So it's that the Wikipedia for a lot of people hearkens back to what we all thought the Internet was for in the first place which is, you know, when most people first started the Internet they thought oh, this is fantastic, people can communicate from all over the world and build knowledge and share information.
And then we went through the whole dot-com boom and bust and the Internet seemed to be about pop-up ads, and SPAM, and porn and selling dog food over the Internet. And now Wikipedia kind of hearkens back to the original vision of the Internet.
And so it's important for the whole business of the entire Internet that there be quality resources that people can turn to and want to turn to. So that's - it's important to these companies to support us.
LAMB: Do you happen to know - I tried to find it yesterday and you can't get into Alexa to do this.
LAMB: At least - you have to pay for it I guess now?
WALES: Oh, no, I think it's still ...
LAMB: Is it still available?
WALES: ... last I checked I think we're around 40th now.
LAMB: 40th in the world ..
WALES: Yes, in the world.
LAMB: ... the busiest ...
WALES: Yes, according to Alexa.
LAMB: ... 40th busiest Web site.
WALES: Yes, which puts us - if you look at the numbers for reach, meaning the number of unique visitors that we see in a - in a day, you know, if you compare us to the New York Times, the Washington Post, USA Today, we're larger than all of those, but even more than that we're larger than all of those combined. So that's the number of people we're reaching globally every day. It's substantial.
LAMB: How many people work for Wikipedia?
WALES: One. It's not me.
The discussion about the constraining rules of the encyclopedia is also quite interesting.
Thanks to toliverchap for telling me about this.
Birthday Blog Post
According to Wikipedia, Sandra Day O'Connor was sworn in as the first female Supreme Court Justice on the exact date of my birth. My birthday is shared with:
- 1897 - William Faulkner
- 1930 - Shel Silverstein
- 1931 - Barbara Walters
- 1932 - Glenn Gould
- 1951 - Mark Hamill
- 1952 - Christopher Reeve
- 1965 - Scottie Pippen
- 1968 - Will Smith
- 1969 - Catherine Zeta-Jones
Curiously, no one born on my birthday since 1978 is worth Wikipedia mention, and no one since 1970 is of any significant importance (Hal Sparks doesn't count). Perhaps I am not so much a slacker as subject to the whim of astrology.
Also, today is 'Armed Forces Day' in Mozambique.
Turing the tables
I just wanted to clarify something about my position. During the Goodman reading group Dave commented on how part of my project was to redefine the Turing Test in such a way as to recognize that machines already pass it. Dave suggested that this sort of philosophical analysis could be used to cash in on the Loebner Prize and other science awards and make a tidy profit in the name of philosophical progress. I say we try to defend a Leibnizian theodicy and get our hands on one of them Nobel Peace prizes.
Kidding aside, it would be absurd to say that machines currently pass the Turing test, understood in the conventional way. The Turing test as implemented in the Loebner contest is conceived as a test for linguistic competence, where being able to fool a human into thinking the computer is also human indicates a certain degree of intelligence. But machines clearly have no linguistic competence whatsoever. Machines still have a long way to go before they can be considered intelligent speakers.
It is worth noting, however, that the prize for this test doesn't go to the computer, but to the designers (as opposed to, for instance, dog shows, where the dog itself is considered the champion, and not its breeder). And this is the source of my objection to the traditional interpretation of the Turing test.
Turing originally conceived of the test as a game to be played by humans and computers, and the computer's intelligence is judged relative to how well it played the game, both by the standards of the game and by the willingness of its human collaborator to attribute to it intelligence in playing the game. Turing thought that written language was sufficiently medium independent to be an objective determining factor in judging intelligence, but the idea of language use itself wasn't the focus of his imitation game. The point more generally is: how well can computers act like humans? With some of our more complicated behavior like language, the computers have a ways to go. With our more basic activities, computers are trotting along with us just fine, if not better.
Two points to make on this:
1) All that shit about the supposed 'singularity' is just stupid, because if machines do something radically different from us (even if it is in some sense 'better') we just won't consider it an intelligence anymore. In other words, an 'incomprehensible intelligence' is not intelligent at all.
2) Computers try and keep up with us, just as much as we try and keep up with them. In other words, there is no static thing that it is to "act like a human". Thus, humans and machines are inevitably bound together in symbiotic evolution.
My thumbnail thinks terrible thoughts
and on those thoughts I thrive
I shove it in my slimy mouth
and suck its supply dry.
Ripping round the cuticle
'til red raw tears remain
it quenches quite deliciously
and keeps quality thoughts away.
Google gets sued
CNET News: Authors Guild sues Google over library project
Response from Google Blog
A bit more background: CNN - Google's digital library tests law
I went to the Chuck D and Hilary Rosen debate last night on file sharing. Rosen's argument was basically the standard "It's stealing, guys. Stealing is wrong," while Chuck D's main argument was "CORPORATIONS". Neither seemed to have very strong opinions either way, and both conceded a lot of points to the other side.
Of note was Rosen's rather condescending criticism of today's youth: that, of all the important issues of the day, we make stealing our rallying cause. I find that opinion very dismissive and unsympathetic to the legitimate complaints of consumers. Stealing isn't right, and I am saying that as an unapologetic pirate of movies, music, TV, and porn. But that doesn't excuse the authoritarian control the media companies have put over access and availability of their content, things like the iPod being compatible only with iTunes, and most media content not being available on the net by any legal means.
Chuck D is clearly a businessman, and understands that the system has to work to some extent in the way it does; his complaint was simply that the music industry has been horribly mismanaged at the expense of both the artist and the consumer, which is just bad from both a creative and a business standpoint, except to the executives at the top who are making truckloads of cash. D made a good point: making music digital (on CDs) made information liquid, and that Pandora's box has been opened. But from 1988 to 1998, the music industry made a shitload of money selling and reselling people's record collections back to them on shiny discs. The effect of that quick money-making scheme is just catching up to them now.
D also said a few times, "Technology giveth, technology taketh away." I don't know if he is a religious man, but I found it very satisfying to hear.
So today Google gets sued by the Authors Guild over Google Print, which is really just a fabulous resource. Although the cases aren't exactly parallel, the issue of IP rights vs. the freedom of information strikes again. This time it isn't a bunch of snotty, poor college students but The Company That Can Do No Wrong. It will be interesting to see how this turns out.
1) Is Google stealing content? Is the digitization of information inherently dangerous to the means of production? Does the Authors Guild have a legitimate complaint?
2) How will a decision on this case affect copyright laws in general, and IP issues specifically?
3) Is access to information a right? Are attempts to thwart the control of information flow ever unwarranted?
4) Is there a legitimate analogy between the Google case and the file-sharing cases (Grokster, etc.), or is this connection just going to be over-played in the media? Will the Grokster decision bear on this case, or vice versa?
"Collaborating with machines" by Tom Jenkinson.
The old preconceptions of machines (i.e., drum machines, samplers, software) as inhibitive to "genuine" creativity, "soulless," etc. are now quickly evaporating. The machine facilitates creativity, yes, but a specific kind of creativity that has undermined the idea of a composer who is master of and indifferent to his tools - the machine has begun to participate. Any die-hard instrumentalists that still struggle to retain their notion of human sovereignty are exemplifying a peculiarly (western) human stupidity - resistance to the inevitable. What is also clear, though certainly undesirable by any retaining an anthropocentric view of composition, is that this process proceeds regardless of any ideal point of human-machine collaboration (i.e., one where the human retains any degree of importance). One might say that music is imploding in preparation for a time when there is no longer any need for it.
As is commonly perceived, the relationship between a human operator and a machine is such that the machine is a tool, an instrument of the composer's desires. Implicit in this, and generally unquestioned until recently, is the sovereignty of the composer. What is now becoming clear is that the composer is as much a tool as the tool itself, or even a tool for the machine to manifest its desires. I do not mean this in the sense that machines are in possession of a mind capable of subtly directing human behaviour, but in the sense that the attributes of the machine are just as prominent an influence on the resulting artefact as the user is; through his work, a human operator brings as much about the machine to light as he does about himself. However, this is not to say that prior to electronic mechanisation, composers were free and unfettered in their creations. Just as a verbal language facilitates and constricts our thoughts, the musical tradition, language and the factors of its realisation (i.e., instrumentation, limits of physical ability) were just as active participants in the compositional process as the "composer" was.
The problematic relationship between humans and machines stems from the abject remnants of the modernist idea that we can control our fates, perfect ourselves and our surroundings, postpone or eventually eradicate death. (Anyone who is afraid of dying needs salvation, but not, as they might say, from death, but in fact from life, and of course a retreat into dogma suits this purpose very well.) This view holds that anything can ultimately be made a subject of our conscious will. However, bending something to our conscious will, whether that is a person, a machine, or a situation, always manifests a compensatory and contradictory aspect. Something crops up which subverts our will. Yet it is never admitted that such subversions are simply the corollary of our obsession with conscious direction of our surroundings, and thus the idiocy continues.
One might say that the western tradition simultaneously holds anthropocentric views and yet makes scientific discoveries that continually point out that we are the center of nothing at all. (In that sense, we are all schizoid - we are all irreparably split; it is simply a matter of how you deal with it.) The use of machines has completed the abolition of anthropocentricity in a radical manner - we are no longer even the centers of ourselves. Creativity does not seem to be an exclusively human activity anymore, but that begs the question: was it ever?
1) Can machines be creative? If not, how do we characterize the contributions and collaborations with machines in creative endeavors?
2) If we are the center of nothing at all, what are we? If creativity is not exclusively ours, are we anything at all?
CNET News: Intelligence in the Internet age
A few thousand years ago, a Greek philosopher, as he snacked on dates on a bench in downtown Athens, may have wondered if the written language folks were starting to use was allowing them to avoid thinking for themselves.
Today, terabytes of easily accessed data, always-on Internet connectivity, and lightning-fast search engines are profoundly changing the way people gather information. But the age-old question remains: Is technology making us smarter? Or are we lazily reliant on computers, and, well, dumber than we used to be?
"Our environment, because of technology, is changing, and therefore the abilities we need in order to navigate these highly information-laden environments and succeed are changing," said Susana Urbina, a professor of psychology at the University of North Florida who has studied the roots of intelligence.
If there is a good answer to the question, it probably starts with a contradiction: What makes us intelligent--the ability to reason and learn--is staying the same and will never fundamentally change because of technology. On the other hand, technology, from pocket calculators to the Internet, is radically changing the notion of the intelligence necessary to function in the modern world.
What's undeniable is the Internet's democratization of information. It's providing instant access to information and, in a sense, improving the practical application of intelligence for everyone.
Nearly a century ago, Henry Ford didn't have the Internet, but he did have a bunch of smart guys. The auto industry pioneer, as a parlor trick, liked to claim he could answer any question in 30 minutes. In fact, he had organized a research staff he could call at any time to get him the answer.
Today, you don't have to be an auto baron to feign that kind of knowledge. You just have to be able to type G-O-O-G-L-E. People can in a matter of minutes find sources of information like court documents, scientific papers or corporate securities filings.
"The notion that the world's knowledge is literally at your fingertips is very compelling and is very beguiling," said Vint Cerf, who co-created the underlying architecture of the Internet and who is widely considered one of its "fathers." What's exciting "is the Internet's ability to absorb such a large amount of information and for it to be accessible to other people, even if they don't know it exists or don't know who you are."
Indeed, Doug Engelbart, one of the pioneers of personal computing technology in the 1960s, envisioned in the early '60s that the PC would augment human intelligence. He believes that society's ability to gain insight from information has evolved with the help of computers.
"The key thing about all the world's big problems is that they have to be dealt with collectively," Engelbart said. "If we don't get collectively smarter, we're doomed."
"We might one day sit around and reminisce about having to remember phone numbers, but it's not a bad thing. It frees us up to think about other things. The brain has a limited capacity, if you give it high-level tools, it will work on high-level problems," [Hawkins] said.
Technology hasn't taught us any new facts. Google doesn't (at least directly) discover truths about the world. It isn't even that important that information has become accessible. The key change here is that information has become accessible to everyone, and thus the epistemic standards we hold people to rise across the board.
1) Division of labor was for Plato an essential feature of the polis. When that labor is divided among machines, and when machines in fact provide the bulk of that labor (both mechanical and intellectual), should machines still be excluded from membership in that society?
2) The necessity of education in society means that the human as it is born is itself not sufficient for membership. The human must also be domesticated into the culture and conventions of the society, which includes a certain amount of common knowledge. Today that includes mathematical and linguistic knowledge as more or less non-negotiable requirements. Tomorrow, that common knowledge might extend to incorporate the information freely available on the net. This seems to imply that domestication will require significant integration with technology as a non-negotiable prerequisite. Do such considerations affect your answer to 1 above?
The virtual market
Discussion of the Post article "Virtual Games Create a Real World Market" (D&D thread)
Kellen's auction is just one example of how increasingly popular online role-playing games have created a shadow economy in which the lines between the real world and the virtual world are getting blurred. More than 20 million people play these games worldwide, according to Edward Castronova, an economics professor at Indiana University who has written a book on the subject, and he thinks such gamers spend more than $200 million a year on virtual goods. One site, GameUSD.com, even tracks the latest value of computer-game currency against the U.S. dollar, an exchange-rate calculator for the virtual world.
After Hurricane Katrina, the operators of EverQuest II assured more than 13,000 members in the Gulf Coast region that their virtual property would be protected and preserved until they could resume playing.
Koster pointed out that it's not necessarily in the game's best interest to imitate the real-world economy, in which the point is to get money so you don't have to do things. In the gaming world, the point is to do stuff. That's the fun of playing.
"The economies in the real world are designed to grow and progress toward an improved standard of living so that eventually you don't have to slay dragons for food -- you go to a supermarket and get dragon burgers.
"We don't want people to get to a point where they just go out for dragon burgers," he said. "That would not make for an interesting game."
To me the biggest problem about "virtual" items is the lack of a legal framework dealing with these items.
So should we standardize virtual currency?
I actually favour the extension of the "normal" property rights (to a certain degree) to "virtual" items because I see no need to distinguish between the two. No need to create something new to protect "virtual" items.
Considering the roleplaying involved, social engineering "theft" should be no more illegal than virtual murder. If the theft involves real-life chicanery, that's something altogether different.
Yeah, that's what I want too. In-game stealing is OK with me. Hacking accounts and selling the loot on eBay is not.
So then we are drawing a line between real life and virtual life?
Certainly, it's still a game and games (virtual or non-virtual) have rules. I can "take" your bishop while playing chess like I can "take" your Deathblade playing some online MMORPG. As long as it happens within the framework of the rules of the game it's ok.
Well, the point here is how to distinguish between the rules of the game and the larger context in which the game occurs. Surely trading characters isn't part of the rules of the game, which is why people like Baron complain about the practice. But in the context of the free market, such things do have value and can be traded accordingly. So which rules do we hold as operative? It's not as simple as saying 'when you play the game, follow the rules of the game', because the whole question is how to delineate the game itself.
This isn't just a problem with the virtual world. It would be analogous to the Roman empire saying that only Roman money can be used within its borders, and that any place that uses Roman money is part of Rome. So what about merchants on the trade routes to Rome? There are incentives for the trade merchants to accept any type of money, as long as there are reliable ways to convert it to Roman money. Does that make the merchants part of Rome? Either way, they are violating the rules.
The simple answer would be to just standardize the virtual currency so we have reliable conversions to, for instance, USD. So when you are trading geldings (or whatever), you are really trading USD under a different name. This way there really is one set of rules governing all transactions. You seem to think that this kind of regulation isn't necessary, but without it we face the problem of which rules are operative in which contexts, and that seems an intractable problem.
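A standardization scheme like that could be sketched as nothing more than a published conversion table, along the lines of what GameUSD.com already tracks. The currency names and rates here are entirely hypothetical, just to show the shape of the idea:

```python
# Hypothetical published exchange rates, in USD per unit of virtual currency.
RATES_TO_USD = {
    "everquest_platinum": 0.0008,
    "wow_gold": 0.01,
}

def to_usd(currency, amount):
    """Value of a virtual-currency amount in USD at the published rate."""
    return amount * RATES_TO_USD[currency]

def convert(src, dst, amount):
    """Trade one virtual currency for another, routed through USD."""
    return to_usd(src, amount) / RATES_TO_USD[dst]
```

Once every trade is routed through a common reference currency like this, a gelding-for-gold swap really is just a USD transaction under a different name, and one set of rules can govern it.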
This is an interesting issue. I'm not sure what to make of it.
A few neat optical illusions I stumbled across:
Stare at the cross in the middle.
Most have probably seen this picture:
But this one... I don't even know.
The places where the rods cross are the same color in both pictures.
From forum user Tanith
Vonnegut was on The Daily Show this week, but was cut off before he could read his list. So in case you missed it, here it is in all its glory: LIBERAL CRAP I NEVER WANT TO HEAR AGAIN
Give us this day our daily bread. Oh sure.
Forgive us our trespasses as we forgive those who trespass against us.
Nobody better trespass against me. I'll tell you that.
Blessed are the meek.
Blessed are the merciful. You mean we can't use torture?
Blessed are the peacemakers. Jane Fonda?
Love your enemies - Arabs?
Ye cannot serve God and Mammon. The hell I can't! Look at the Reverend Pat Robertson. And he is as happy as a pig in shit.
So it goes.
Knight Automated Roving Robot
From the NYT: Robotic Vehicles Race, but Innovation Wins
It has been almost 18 months since the Pentagon's research arm, the Defense Advanced Research Projects Agency, first attracted a motley array of autonomous vehicles with a prize of $1 million for the first to complete a 142-mile desert course from Barstow, Calif., to Las Vegas. The most successful robot, developed by a Carnegie Mellon University team, managed all of seven miles.
With the next running scheduled for Oct. 8 - and this time a $2 million purse for the winner among 43 entries - it is clear that many of the participants have made vast progress. For some researchers, it is an indication of a significant transformation in what has been largely a science fiction fantasy.
"Computers are starting to sprout legs and move around in the environment," said Andy Rubin, a Silicon Valley technologist and a financial backer of this year's Stanford Racing Team, which produced Stanley.
The exact course will be secret until just hours before the event begins, but Darpa officials are said to believe that the original test was too much an exercise in automatically following global positioning satellite "bread crumbs" - the data points outlining the route that are given to the contestants shortly before the race begins.
So this year the course is likely to include unexpected man-made obstacles and other hurdles that would be trivial for a human driver, but vexing for the computer-controlled navigational systems that are at the heart of the technical challenge the Pentagon has laid out.
Despite the added complexity, there is a widespread expectation among robotics researchers that this time the course will be completed.
In an attempt to come up with some witty discussion questions, I found the following Wikipedia entry:
KITT is Knight Industries' second attempt at a car with artificial intelligence. Its predecessor was KARR, an abbreviation for Knight Automated Roving Robot. KARR was programmed to primarily protect itself at all costs, but this proved to be hazardous to Knight Industries' interests, so KARR was deactivated and KITT introduced in its place.
Unlike KARR, KITT is programmed primarily to protect its owner, Michael Knight, at all costs, as well as all human life. This is made clear in the pilot movie/first two-parter, where Knight asks his new boss Devon Miles if KITT will protect anyone driving it. Devon's answer is that KITT's primary function is the preservation of human life, and Michael's in particular.
1) Suppose an automated car mishandles a turn and crashes. It is not an agent, so who bears responsibility?
2) Does the difference between KARR and KITT bear on our willingness to ascribe agency to them?
3) Is KITT's primary function teleological? If so, does that make KITT robustly intelligent and perhaps even conscious? If not, is it even possible to give a machine a teleological function?
4) If you answered no to 3, what becomes of Asimov's laws?
9 billion names of Google
Just ran across this doing a vanity search on Google's new Blog Search function. I'll just post a snippet; the full text can be found here:
The nine billion names of God
Note: this is not the Clarke short story, which is much better.
"You know what Google is?"
"Yes," I said. I was running low on patience.
"No, I mean, do you really know? More than just the site?"
Reluctantly, I shook my head.
"You ever meet anyone who worked for them?"
"Don't think so."
"You haven't. Nobody works for them anymore."
I shrugged, and took the man's empty pint. I didn't offer to refill it.
"They're self-contained. It's all automated, in there. It's underground."
I nudged the basket of pretzels in his direction. "Why don't you eat something?" I suggested. He shook his head with so much force that I thought he might knock himself off of the stool.
"Listen. Hear me out. You know how Google works," he said, but didn't wait for a response. "They cache things, right? Like they send out these spiders and take pictures of everything on the web, so when you're searching, you're not even searching the internet."
I've heard that before, but it never made much of a difference to me. "Same thing, though," I said.
"You ever wonder why Google doesn't cache its own searches?"
"They program around it."
"No. That's what you think. That's what everyone thinks. But it started back when Google was just a thesis project, back when it was just a drop in the data sea. No one thought to stop it back then. That web site you had, the one you forgot about. Almost everyone's got one of those, right? But Google doesn't forget. Google's studied that thing so many times that it's studied its own caches of you. What do you figure happens, when a site gets so big that it's bigger than the internet?"
"It's still a part of the internet, though."
"No. Now, the internet is a part of Google."
Exciting fall fashion
Ok, after about 6 hours of work I reformatted the template on this page. The process involved running through a couple of CSS tutorials (the one here
was much more helpful than anything I had back in the day with HTML circa 1996), lots of stealing from other templates, and hacking around with it until it looked decent. The comments were a bitch to put in order, let me tell you.
Anyway, I would appreciate two things from my loyal readers:
1) Let me know of any bugs you find with the new template: missing links, unaligned text, whatever problems you find.
2) This color scheme kinda sucks. If you have any better suggestions, please let me know.
From the NYT "Play the Senator: What to Ask Judge Roberts?"
Question #3 from Glenn Harlan Reynolds:
3. Could a human-like artificial intelligence constitute a "person" for purposes of protection under the 14th Amendment, or is such protection limited, by the 14th Amendment's language, to those who are "born or naturalized in the United States?"
1) Why is being 'born' in the US a prerequisite for rights? Suppose we granted machines personhood; would the 'born or naturalized' clause limit 14th amd. protection to those machines manufactured in the US, or to those that had been imported at least 14 years ago?
2) Is granting legal status to a (sufficiently human, sufficiently intelligent) machine more or less offensive than granting similar status to a corporation?
3) Do you think this will become an issue in Roberts' undoubtedly long tenure as CJ of the SCOTUS?
You were gone before I could tell you to quit yer bitchin.
You fell in the middle of an ellipse, big and black, like an oil stain. It wasn't fresh; it wasn't blood. It was the heat of your last breaths, and it formed a halo around your body.
I dug your grave today, in the front yard where you would run to when I tried to keep you close to the porch.
Your toys and food bowl and litter box are still scattered around the house. I don't want to put them away.
Hey, I know what I want to be when I grow up. "Chief Internet Evangelist" sounds like a snazzy gig.
Google Grabs Internet Founder From MCI
Google Inc. said yesterday that it had hired one of the technology industry's sages, luring Vinton G. Cerf from his longtime home at MCI Inc. to help the Internet search giant prepare for the future of Web systems and applications.
Cerf, 62, is known as a founding father of the Internet for his role in developing the Internet's basic communications protocol in the 1970s.
Cerf will assume the lofty title of "chief Internet evangelist," heading Google's efforts to build its network infrastructure and setting the standards for the next generation of Internet applications. He starts at Google on Oct. 3 but will work from Northern Virginia.
From Cerf's wiki entry
Vinton Gray Cerf (born June 23, 1943 in New Haven, Connecticut) is an American computer scientist who is commonly referred to as the "father of the Internet" for his key technical and managerial role in the creation of the Internet and the TCP/IP protocols which it uses.
The mechanical womb
We will eventually be comforted by it.
Canine coach keeps dieters on a leash
A ROBOT dog that monitors your daily food intake and exercise levels and warns you not to eat that cheesecake could encourage people to stick to their diets.
The health-conscious pooch connects wirelessly to the dieter's pedometer and an electronic diary of their eating habits, to calculate their daily calorie intake and expenditure.
While it may sound frivolous, its US developers hope the robot, a souped-up version of Sony's dog Aibo, could ultimately help in the fight against the western world's obesity epidemic.
The system is being designed by Cynthia Breazeal at the MIT Media Lab in Cambridge, Massachusetts, who is famed for creating the emotional robot Kismet. It would use a pedometer, bathroom scales and a PDA connected by Bluetooth or Wi-Fi to gather information about weight, activity and eating habits that people generally have trouble calculating, remembering and reporting.
A computer will then accurately analyse the data and present the results to the person through the friendly face of a robot, says Breazeal's student Cory Kidd, who is working with her to develop the system, which is still at an early stage.
Past studies have shown that people who accurately record what they eat and how much they exercise are more likely to keep their weight down, and that a real 3D robot is more convincing than an on-screen character. A robot could also offer support that humans don't have the time, patience or desire to provide.
Aibo does not talk. Instead he has been programmed to exhibit four different behaviours, representing lethargy, energy and two stages in between, in response to a verbal cue such as "How am I?"
“A robot could offer support that humans don't have the time to provide”
Aibo will choose his response to mirror how the person should be feeling. If you have stuck to your daily calories, he will jump up and down, wag his tail, play vibrant music and flash the brightly coloured LEDs that pepper his 50 centimetre-tall plastic body. But if you have already had too many, he will move slowly and lethargically and play low-energy music.
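The feedback loop the article describes boils down to a simple rule: compare the day's intake against a personal calorie budget (adjusted for exercise) and pick one of the four energy levels. Here's a toy sketch of that logic; all function names, thresholds, and the 2000-calorie default are hypothetical, since the article gives no specifics:

```python
# Toy model of the diet-coach loop: pedometer and food-diary data come in,
# one of four Aibo behaviours comes out. Thresholds are made up for illustration.

def aibo_behavior(intake, burned, budget=2000):
    """Map the day's calories to one of four behaviour levels.

    intake  -- calories eaten today (from the electronic food diary)
    burned  -- calories expended today (from the pedometer)
    budget  -- the dieter's baseline daily calorie allowance
    """
    # Calories still available: exercise earns extra headroom.
    remaining = budget + burned - intake

    if remaining >= 0:
        return "energetic"   # under budget: jump, wag, flash the LEDs
    elif remaining >= -200:
        return "upbeat"      # slightly over: still cheerful
    elif remaining >= -500:
        return "subdued"     # well over: slower movements
    else:
        return "lethargic"   # way over: low-energy music, sluggish walk
```

The interesting design choice in the real system is the same one this sketch makes: the robot mirrors how the dieter *should* feel about the number, rather than reporting the number itself.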
"It's promising to look at mobile robots for defining behavioural change," says Tim Bickmore, a computer scientist at the Boston University School of Medicine, who showed recently that an animated computer companion could encourage people to exercise more.
Kidd will present the idea at the UbiComp conference on 11 September in Tokyo, Japan, and will begin a study on 30 overweight Bostonians next spring.
We are the shit sculptors.
1 - The brain is a bodily organ, closer in kind to the stomach than to a computer.
2 - The stomach, like all bodily organs, is functional in design. It accepts a certain kind of formatted input (chewed food), subjects it to a certain amount of processing (digestion), and gets rid of the byproducts (shit, or at least the unabsorbed precursor to shit).
3 - The brain, likewise, accepts formatted input (sensory information), subjects it to a certain amount of processing (cognition), and gets rid of the byproducts (thought).
4 - The processing of the brain doesn't yield substantive (physical) byproducts but meaningful (informational) byproducts, in the form of thoughts. Meaningful byproducts aren't released through any sphincter, but through language.
5 - Thinking is shit. Language is an asshole.
6 - Shitting is a bodily function that most people handle privately and without much consideration. Similarly, people don't handle thinking with much consideration; yet thinking cannot
be private. A person will let their brain shit all over the place without thinking anything of it.
7 - Shitting is not itself bad; it is natural. Rolling in your own shit is bad. Someone who cannot control their thoughts is akin to someone who cannot control their colon and does not bother cleaning up.
8 - Philosophers are in the business of taking the muddled thoughts and language of the ordinary person, cleaning it up and shaping it into a workable and reasonable form.
We are the shit sculptors.