Rethinking Hydrogen Cars

The July 18th issue of Science Magazine has an interesting article that casts a critical eye on the idea that hydrogen-powered automobiles are the best way to attack our environmental problems. (The article is also currently cached here for those without a subscription to Science.) The article makes two main points:

  1. The hydrogen-fuel infrastructure will be expensive (around $5000 per car).
  2. The bang-for-the-buck environmental improvement from replacing gas cars with fuel-cell cars won’t be as good as simply improving the fuel efficiency of existing cars on the road (especially ancient “high emitters”). They also identify fuel-burning power plants as a more cost-effective target for cutting emissions than the already-optimized gas-powered automobile. “When emission mitigation opportunities across the economy are ordered by their cost (to form a supply curve), deep reductions in automobile emissions are not in the cheapest 30%… Hydrogen cars should be seen as one of several long-run options, but they make no sense any time soon,” concludes the report. The report also notes that even within transportation, hydrogen-powered heavy freight vehicles such as ships, trains and large trucks would be better first targets for conversion than the automobile. (A toy sketch of this kind of cost ordering follows.)
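
Here is a toy sketch of what “ordering mitigation opportunities by cost to form a supply curve” means in practice. Every option and number below is made up for illustration; none of them come from the Science article:

```python
# Toy illustration: build a "supply curve" of emission-mitigation options by
# sorting them by cost per tonne of CO2 avoided. All figures are invented.
options = [
    ("Retire the oldest high-emitter cars",     10_000_000,  25),  # (name, tonnes avoided, $/tonne)
    ("Efficiency upgrades at power plants",     50_000_000,  40),
    ("Convert heavy freight to hydrogen",       20_000_000, 120),
    ("Replace gas cars with fuel-cell cars",    30_000_000, 250),
]

# Cheapest first: policy money goes furthest at the left end of the curve.
for name, tonnes, cost in sorted(options, key=lambda o: o[2]):
    print(f"${cost:>4}/tonne  {tonnes:>12,} tonnes  {name}")
```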

Fuel Cell Today suggests that some of the article’s numbers may be exaggerated, especially when it comes to the cost of the hydrogen-fuel infrastructure needed for fuel-cell-powered cars. In particular, they point out that the huge financial commitment automakers have made to fuel-cell technology is a good indication that they believe it will be economically viable. They also note that many of the alternatives raised in the Science article, while perhaps better targets from an energy-efficiency standpoint, are not politically feasible in the current climate.

Even given this criticism, the general point seems to be well-taken. As Marianne Mintz, author of one of the reports cited in the Science article, says to Fuel Cell Today, “They’re basically trying to make the point that there are other options that deserve a fair share of attention in the near term. I don’t think that anybody would argue with that.”

References:

  1. Rethinking Hydrogen Cars (Science Magazine, 18 July 2003)
  2. Rethinking Hydrogen Cars (Science Magazine, 18 July 2003, Cached copy that does not need subscription)
  3. Fuel cell cost study gets mixed reaction (Fuel Cell Today, 28 July 2003)


Transhumanism and the problem of value

The Village Voice has a nice summary of the Transvision 2003 USA Conference, sponsored by the World Transhumanist Association. Founded in 1998, the organization anticipates the day when technology will have the ability to halt aging and alter “limitations on human and artificial intelligence, unchosen psychology, suffering, and our confinement to the planet earth.” As the name implies, they look forward to the day when technology allows us to move beyond what we now consider “human,” becoming first transitional humans and finally “posthuman.” They also anticipate several bumps in the road, both in terms of real dangers from the technology itself and a backlash against what some might see as an unnatural or downright immoral use of technology to “play God.” Thus this conference, which brings together Transhumanists, professional bioethicists, anti-technology activists, and critical social theorists of science and technology.

I think these guys are pointing in the right direction, but they’re pointing way, way down the road. For example, here is their view on what a posthuman can become:

As a posthuman you would be as intellectually superior to any current human genius as we are to other primates. Your body would be resistant to disease and immune to aging, giving you unlimited youth and vigor. You would have control over your own desires, moods, and mental states, giving you the option of never feeling tired, bored, or irritated about petty things; you could instead choose to experience intense pleasure, love, artistic appreciation, focused serenity, or some other state of consciousness that currently human brains may not even be able to access.

Posthumans could be completely synthetic artificial intelligences, or they could be enhanced uploads [see “What is uploading?”], or they could be the result of making many partial but cumulatively profound augmentations to a biological human. The latter alternative would probably require either the redesign of the human organism using advanced nanotechnology or its radical enhancement using some combination of technologies such as genetic engineering, mood drugs, anti-aging therapies, neural interfaces, advanced information management tools, memory enhancing drugs, wearable computers, and cognitive techniques.

I tend to be a techno-optimist when it comes to my own fields of intelligence augmentation and wearable computing, as well as those I know less about such as genetic engineering and psychoactive drugs. Many years from now (sadly, probably a generation or two after I am already dead) I expect some of the things the Transhumanists predict will come to pass. However, there are a few fundamental issues that we will have to face along this road before we ever get to the point on the horizon that they look towards.

First, we will hit a crisis of values. Biology can make us stronger, healthier and longer-lived. Artificial intelligence can make us better able to solve problems and reach goals we set for ourselves. Psychology and psychiatry can help us better understand and change our moods, emotions and motivations. But none of these sciences can tell us whether being long-lived is good or bad, whether the goals we choose to achieve are the “right” goals, or whether the (presumably happy and contented) moods we choose to feel are in any way more appropriate than how we feel today. These questions can only be answered by liberal arts such as religion, ethics and philosophy, not science, not logic, not pure reason. (Being rationalists, the Transhumanists would, I suspect, be upset by that assertion, but no matter. Others with a different set of philosophical tools will come to answer these issues.)

Second, long before technology brings us the first transhuman it will by necessity bring us a deeper understanding of what it means to be human. These findings will likely have wide-reaching repercussions in how society operates. For example, we may discover that our personality, intelligence, and our very choices are determined solely by the chemistry of our brains, leaving no room for an atomic, immutable soul or indeed any identity that continues throughout time. Such issues are already being taken on by philosophers such as Daniel Dennett. They are also seeping into practical questions over the use of Prozac, the acceptability of the insanity defense, the regulation of and treatment for addictive drugs, and the concept of justice and the “reform” of criminals. If the Transhumanists are right, these battles will be nothing compared to the turmoil over issues of identity, free will and responsibility that are yet to come.

Finally, we will have to accept that transhumans may be very unlike humans now, not only in ability but in morals and values. The Transhumanists believe “progress is when more people become more able to deliberately shape themselves, their lives, and the ways they relate to others, in accordance with their own deepest values.” What happens when I change myself so much that my deepest values themselves change? And what if, in my new transhuman state, I decide that intelligence isn’t all it’s cracked up to be and the true purpose of life is to sit around doped-up on happy drugs all day? Would you, inferior normal human that you are, decide that perhaps given my choices I’m not so superior after all? The question of value is paramount in deciding what even qualifies as transhuman or posthuman. It is, I suspect, something of a Gödel statement for the Transhumanist philosophy.

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.
— George Bernard Shaw


1000 down, 39999000 to go…

The RIAA has been feeling its oats after its victory against Verizon back in April, when the ISP was forced to reveal the names of customers who had been engaging in illegal file-swapping. Since then the RIAA has issued at least 911 subpoenas and expects to file at least several hundred lawsuits in the next few weeks, in what can only be described as a “shock and awe” fight for the mindshare of the average American.

However, more recent demands for user information have been rebuffed. Last week MIT and Boston College both challenged subpoenas for user identification on their networks on two grounds. First, demands made under the DMCA conflict with the Family Educational Rights and Privacy Act, which prohibits colleges from giving out personal information without first informing the student. Second, they charge that the RIAA should have filed its subpoenas in Massachusetts instead of Washington, DC. And now Pacific Bell Internet Services is challenging more than 200 subpoenas on the same grounds: that they violate their users’ privacy and that they should have been filed in California, not Washington, DC.

The RIAA is correct in claiming that these challenges are only on procedural grounds, though already the RIAA’s shotgun approach has drawn the ire of Senator Norm Coleman, R-Minn., who chairs the Senate Permanent Subcommittee on Investigations. Another point I haven’t seen brought up in the news is that this “procedural challenge” could force the RIAA to move its subpoenas away from the court where its original Verizon case was won. (I’ll leave the analysis of whether that matters to someone with the necessary legal knowledge.)

Of course, the real battle is still for the hearts and minds of the American public. The RIAA couldn’t care less about the hundreds of college students and little old ladies they’re trying to sue for millions of dollars each; what’s important is the millions of Americans who think that sharing music is OK. And on that front the RIAA has more bad news: a recent survey from the Pew Internet & American Life Project reports that 67 percent of Internet users who download music say they don’t care whether the music is copyrighted. If you accept the Ipsos-Reid finding that one quarter of Americans have downloaded music, that comes out to about 40 million Americans who have downloaded music and don’t care. And that, my friends, is a lot of subpoenas.
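
The back-of-envelope arithmetic, with the base population being my own rough assumption (the two surveys report only percentages, not head counts):

```python
# Rough estimate of "Americans who have downloaded music and don't care about
# copyright." The base population is an assumption; the percentages are the
# Ipsos-Reid and Pew findings cited above.
us_population = 240_000_000      # assumed pool that the "one quarter" applies to
share_who_download = 0.25        # Ipsos-Reid: one quarter have downloaded music
share_who_dont_care = 0.67       # Pew: 67% of downloaders don't care about copyright

downloaders = us_population * share_who_download
dont_care = downloaders * share_who_dont_care
print(f"{dont_care:,.0f} downloaders who don't care")   # roughly 40 million
```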


Art history, optics and scientific debate

Our Chief Scientist, David Stork, has been doing some side research in art history for the past few years. In particular he’s been assessing a theory that artist David Hockney presents in his book “Secret Knowledge”: that artists as early as 1430 secretly used optical devices such as mirrors and lenses to help them create their almost photo-realistic paintings.

The theory is fascinating. Art historians know that some masters used optical devices in the 1600s, but Hockney and his collaborator, physicist Charles Falco, claim that as early as 1430 the masters of the day used concave mirrors to project the image of a subject onto their canvas. The artist would then trace the inverted image. This alone, Hockney and his supporters claim, can account for the perfect perspective and “opticality” of paintings that suddenly appear in this time period.

If the theory itself is fascinating, I find Stork’s refutation even more interesting. Stork’s argument rests on several points. First, he argues, there is no textual evidence that artists ever used such devices. Hockney and his supporters counter that the information was of course kept as a closely guarded trade secret, and that is why there is no description of it. It isn’t clear how these masters also kept the powerful patrons whose portraits they were painting from discussing the secret. Stork’s second argument is that, quite simply, the paintings’ perspective isn’t all that perfect after all. They look quite good, obviously, but if you actually do the geometry on the paintings Hockney presents as perfect, you see that supposedly parallel lines don’t converge to a vanishing point as they would in a photograph. And third, Stork points out that the methods Hockney suggests would require huge mirrors to get the focal lengths seen in the suspected paintings: mirrors far, far larger than the technology of the time could produce.
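
To get a feel for the optics behind that last objection, here is a rough sketch of the projection geometry using the standard thin-mirror equation. The subject distance and magnification below are purely illustrative assumptions of mine, not Stork’s or Hockney’s numbers:

```python
# Rough sketch of the concave-mirror projection geometry behind the debate.
# A concave mirror of focal length f projects a real, inverted image of a
# subject at distance d_o onto a surface at distance d_i, where
#   1/d_o + 1/d_i = 1/f   and   magnification = d_i / d_o.
# The distances below are illustrative assumptions only.

d_o = 4.0             # metres from subject to mirror (assumed sitting distance)
magnification = 0.25  # image ~45 cm tall for a ~1.8 m subject (assumed)

d_i = magnification * d_o            # image (canvas) distance from the mirror
f = 1.0 / (1.0 / d_o + 1.0 / d_i)    # required focal length
print(f"canvas distance: {d_i:.2f} m, focal length: {f:.2f} m")
# Bigger images or more distant subjects need longer focal lengths (larger,
# flatter mirrors), which is the flavor of Stork's mirror-size objection.
```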

My analysis is a little unfair to Hockney, as I’ve only seen Stork’s presentation, but I must say I’m impressed with his argument. Hockney’s theory is quite media-pathic: a mystery story that wraps history, secrecy, geniuses, modern science and great visuals all in one. No wonder it’s captured people’s attention! Unfortunately, I expect Stork is right about the theory’s least fun aspect: it’s also probably dead wrong.

For those interested, a CBS documentary on Hockney’s theory will be rebroadcast this Sunday, August 3rd, on 60 Minutes.


I got the horse right here…

The story sounds like something out of The Onion, or maybe a dystopian science fiction short story. As widely reported in the news yesterday, the Pentagon has been planning an electronic futures market for the analysis of foreign affairs. The idea is to create a market where people can anonymously bet on things like whether the US will reduce troop deployments in Iraq by year’s end, or whether Arafat will be assassinated. The current odds on a bet, so the argument goes, best reflect the actual probability given everything the collected thinkers know. Policy-makers could then use that probability to decide where to focus their attention.
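
The mechanics behind that claim are simple: a typical binary contract pays a fixed amount if the event happens and nothing otherwise, so its trading price can be read directly as a probability. A minimal sketch follows; the $1 payout and the example contract are my own illustrative assumptions, not details of the Pentagon’s actual design:

```python
# Minimal sketch of how a binary futures contract encodes a probability.
# The $1 payout (and ignoring fees and interest) is a simplifying assumption.

def implied_probability(price: float, payout: float = 1.0) -> float:
    """A contract paying `payout` if the event occurs, trading at `price`,
    implies the market believes the event has probability price / payout."""
    return price / payout

# e.g. a hypothetical "troop reduction by year's end" contract trading at 70 cents:
print(implied_probability(0.70))   # -> 0.7, i.e. a 70% market-implied chance
```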

By today the firestorm had swept Washington and the Pentagon announced that the project has been canceled. Apparently congressmen were not completely aware of what had been planned, despite the general plan having been posted on DARPA’s web site for many months and a mention of the project in a March New Yorker article.

I can’t help but feel sympathy for Robin Hanson, the George Mason University economics professor who has been spearheading the project. Critics were quick to describe the project as a marketplace where terrorists and mercenaries could make money by betting that some horrific event would happen and then causing it. But as Hanson describes in interviews and on his Web site, the idea is more that professors, armchair analysts, and frequent travelers from all walks of life would combine their on-the-ground expertise to reach conclusions even the most expert intelligence analyst in Washington couldn’t. But as interested as I am in the concept, I just can’t see it working, for a number of reasons:

  • First off, critics are right in thinking there’s something morally repugnant about the whole plan. The US government should not be hosting a Website dedicated to graveyard gambling, regardless of whether it would actually encourage terrorists to make money from their exploits. (Personally, I don’t believe there’s any chance a halfway-competent terrorist would bet on his own success on a Website run by DARPA, regardless of its assurances that bettors will remain anonymous.) In fact, the whole plan bears a striking resemblance to the Assassination Politics scheme devised by the Cypherpunk-anarchist Jim Bell, which describes how communities of individuals could put a price on a government official’s head simply by donating a prize to whoever can predict the exact date of that person’s death. (Bell is currently facing a 10-year prison term for harassment of a Federal officer.)
  • There has been a lot of talk about how the attack on the World Trade Center could have been avoided if all the information that was distributed around the country had been brought together in one place. That may be true, but an ideas futures market wouldn’t have helped. What we needed was more analysis and communication; the marketplace is too abstract and mediated to let anyone put the pieces together. An ideas market won’t bring together the CIA agent studying Al Qaeda and the Florida flight-school instructor, because neither would have enough pieces of the puzzle to realize what they were looking at. Marketplaces are additive; intelligence requires synthesis.
  • Even if the market were a reasonable risk-estimation system, it’s not clear what the government could do with that information. As Bloomberg.com points out, the market would be quite noisy, much like the stock market. As we’ve seen from the constant rainbow of alerts we’ve gone through over the past two years, unspecified and uncorroborated threats aren’t all that useful when you’re trying to set up a defense.

Update: According to futures prices on Tradesports.com, John Poindexter’s chances of keeping his job after this uproar are around 70%.


NPUC 2003 Trip Report

A couple weeks ago I attended the New Paradigms in Using Computers workshop at IBM Almaden. It’s always a small, friendly one-day gathering of Human-Computer Interaction researchers and practitioners, with invited talks from both academia and industry. This year’s focus was on the state of knowledge in our field: what we know about users, how we know it and how we learn it.

The CHI community has a good camaraderie, especially among the industry researchers. I suspect that’s because we’re all used to being the one designer, artist or sociologist surrounded by a company of computer scientists and engineers. Nothing brings together a professional community like commiseration, especially when it’s mixed with techniques for how to convince your management that what you do really is valuable to the company.

One of the interesting questions of the workshop was how to share knowledge within the interface-design community. Certainly we all benefit by sharing knowledge, standards and techniques, but for the industry researchers much of that information is a potential competitive advantage and therefore kept confidential. Especially here in Silicon Valley, that kind of institutional knowledge gets out into the community as a whole through employment churn, as researchers change labs throughout their careers.

Here are my notes from several of the talks. Standard disclaimers apply: these are just my notes of the event, subject to my own filters and memory lapses. If you want the real story, get it from the respective horses’ mouths.


Electronic Voting Gets Burned

Electronic voting is getting slammed this week. First, Dan Gillmor’s Sunday column took election officials to task for not insisting on physical paper trails that can be followed should the results of an election be in doubt. Then on Wednesday several computer security experts at Johns Hopkins University and Rice University published a scathing analysis of the design of the Diebold AccuVote-TS, one of the more commonly used electronic voting systems, based on source code that the company accidentally leaked to the Internet back in January. Exploits include the ability to make home-grown smart cards that allow multiple voting, the ability to tamper with ballot texts, denial-of-service attacks, the potential to connect an individual voter to how he voted, and potentially the ability to modify votes after they have been cast. The New York Times and Gillmor’s own blog have since picked up the report. Diebold has responded to the analysis, but so far they haven’t addressed the most damning criticisms.
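
To see why a home-grown smart-card attack is possible at all, here is a purely conceptual sketch, in Python, of the difference between a terminal that trusts whatever a card declares about itself and one that requires cryptographic proof. This illustrates the general failure mode the researchers describe; it is not Diebold’s actual code, and the function names and key scheme are my own invention:

```python
# Conceptual sketch only -- NOT Diebold's code. It contrasts a terminal that
# trusts a self-declared card type (trivially forged with a home-grown card)
# with one that makes the card prove knowledge of a per-election secret key.
import hmac, hashlib, os

def insecure_accept(card: dict) -> bool:
    """Accept any card whose data claims it is an unused voter card."""
    return card.get("type") == "VOTER" and not card.get("used")

def secure_accept(card_respond, shared_key: bytes) -> bool:
    """Challenge-response: only a card provisioned with the key can answer."""
    challenge = os.urandom(16)
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(card_respond(challenge), expected)
```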

There are several lessons to be learned from all this:


US to add RF-ID to passports by October 2004

Frank Moss, US deputy assistant secretary for Passport Services, announced at the recent Smart Card Alliance meeting that production of new smart-card-enabled passports will begin by October 26, 2004. Current plans call for a contactless smart chip based on the ISO 14443 standard, which was originally designed for the payments industry. The 14443 standard supports a data exchange rate of about 106 kilobits per second, much higher than that of the widely deployed Speedpass system.


IEEE deciding short-range wireless standard this week

Nearly six years to the day after the process was started, it looks like the IEEE is homing in on a single standard for fast (around 100 Mbit/s), short-range (< 10 m), low-power, low-cost wireless communication. The standard, to be designated IEEE 802.15.3a, comes out of the IEEE Wireless Personal Area Network (WPAN) working group. Unlike cellular or Wi-Fi networks, the point of a personal area network is to communicate with other devices that are there in the room with you. For example, a high-speed WPAN would allow your PDA to stream video directly to a large-screen TV. Alternatively, your core CPU could wirelessly communicate with medical sensors, control buttons, displays and earpieces distributed around the body. The standard fills much the same niche as Bluetooth (the first standard adopted by the working group, also known as 802.15.1), but the new technology is significantly faster than Bluetooth (up to 100 times faster, according to its champions).
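
A quick back-of-envelope comparison shows why that difference matters for something like video. The file size and the roughly 1 Mbit/s Bluetooth 1.x rate are ballpark assumptions for illustration:

```python
# Rough transfer-time comparison; the 700 MB file size and the ~1 Mbit/s
# Bluetooth 1.x rate are ballpark assumptions, not figures from the standard.
file_megabytes = 700                      # e.g. an hour or two of compressed video
file_megabits = file_megabytes * 8

for name, rate_mbps in [("Bluetooth (~1 Mbit/s)", 1), ("802.15.3a UWB (~100 Mbit/s)", 100)]:
    seconds = file_megabits / rate_mbps
    print(f"{name}: {seconds / 60:.0f} minutes")
# Bluetooth: ~93 minutes; UWB: ~1 minute -- enough headroom to stream video live.
```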

Trade-news columnists who know more about this than I do are picking Texas Instruments’ proposal for OFDM UWB (that’s Orthogonal Frequency Division Multiplexing Ultra Wide Band, thank you for asking) as the likely technology. Assuming it is chosen, TI’s UWB business development manager says we can expect to see the first UWB products hitting the marketplace in 2005.

Update: The standard did not receive enough votes to pass, and will be voted on again in mid-September.


We control the horizontal…

I can hear it now:

Exec #1: “Members of the World Media Cartel, we are on the ropes. We’ve tried imposing draconian penalties for even trivial piracy. We performed a perfect end-run around the fair use doctrine with the Digital Millennium Copyright Act. We’ve sued into bankruptcy anyone who might have a business model more survivable than our own. We’ve even sued down-and-out college students for $97.8 trillion each, as an example to others who would stand in our way. And yet the peer-to-peer networks continue to thrive.”

Exec #2: “If only our industry had a way to convince people that piracy was wrong. You know, change how people think about copying music and movies.”

Exec #1: “Yes, yes, but there’s no point in wishing for… hey wait, say that again!”

And so it came to pass: the Motion Picture Association of America launched an unprecedented media blitz to convince the American public that by using Gnutella you hurt not just Disney stockholders, but also Jerry, the man who fetches coffee for George Lucas every morning at 5am.

The sheer power of this blitz is daunting. The kickoff this Thursday will have thirty-five network and cable outlets all showing the same 30-second spot in the first prime-time break (a “roadblock” in ad-biz terms). Then daily trailers will play on every screen in more than 5,000 theaters across the country. Whew. And all that time is donated, which would be incredibly impressive if the spots weren’t essentially being donated to themselves.

And now the $97.8 trillion question: are the American people so pliable that their morality can be changed by a media blitz? (Could that be the manic laughter of thousands of ad executives I hear in the distance?)
