No seat at the WIPO table for open source

Back in July, a group of 68 economists, scientists, industry representatives, academics, open-source advocates, consumer advocates and librarians proposed that the World Intellectual Property Organization (WIPO) host a meeting on the use of open collaborative development models. Examples described in the proposal include IETF standards, open-source software such as Apache and Apple’s Darwin OS, the Human Genome Project and open academic journals, among others. WIPO’s initial response was quite favorable. Dr. Francis Gurry, WIPO Assistant Director and Legal Counsel, was quoted in the journal Nature as saying “The use of open and collaborative development models for research and innovation is a very important and interesting development… The director-general looks forward with enthusiasm to taking up the invitation to organize a conference to explore the scope and application of these models.”

Needless to say, business interests like Microsoft saw such high-profile acceptance of open source as a threat, and immediately lobbied to have the idea quashed. The Washington Post and National Journal’s Technology Daily report that Lois Boland, the U.S. Patent and Trademark Office’s acting director of international relations, dismissed the meeting as outside WIPO’s area, saying the organization is “clearly limited to the protection of intellectual property.” “To have a meeting whose primary objective is to waive or remove those protections seems to go against the mission,” Boland told National Journal. She argued specifically against the discussion of open-source models, claiming that open-source software is not protected under copyright law but only contract law, which is not in the domain of WIPO. She also protested the manner in which the meeting was organized, saying that WIPO’s agenda should be driven by member nations and that the idea came from outside the organization. Under increasing pressure, WIPO canceled the meeting, saying the polarized political debate made the possibility of international policy discussion “increasingly remote.”

Lawrence Lessig’s blog blasts Boland, saying “If Lois Boland said this, then she should be asked to resign. The level of ignorance built into that statement is astonishing, and the idea that a government official of her level would be so ignorant is an embarrassment.” Personally I think Lessig is missing the broader picture here, or perhaps he is just not cynical enough. Rather than ignorance, Boland is simply showing unusual candor in her statements. Her position is that WIPO should promote international IP laws that support the current content industry, regardless of how that affects new upstart industries, national productivity, the economy or other important concerns. In the words of The Economist, she is being pro-business, but not pro-market. I agree with Lessig that this is abhorrent, but given how the U.S. continues to force brand-new IP protections down the world’s collective throat it seems to be a fair description of current U.S. policy.

The issues described in the proposal to WIPO are not going to go away, and will eventually need to be addressed with or without WIPO’s involvement. As Ed Black, president of the Computer and Communications Industry Association, said on hearing the meeting was canceled: “Does this indicate that WIPO is abdicating authority and responsibility for these issues, including open source for the future? If so, we will all live by that, but then so must they. They should step up to the plate or step aside. … It is inexplicable that they would shut the door on what are clearly important issues.”

Face Recognition gets the boot in Tampa

Tampa Police have decided to scrap their much-criticized face-recognition system, admitting that during a two-year trial the system did not correctly identify a single suspect. Similar face-recognition systems are still in use in Pinellas County, Florida, and Virginia Beach, Virginia, though neither of those systems has ever resulted in an arrest either.

Face-recognition technology evokes images of automatic cameras scanning bustling crowds, automatically picking out terrorists from the millions of faces that pass by. One day the technology may be able to deliver on this, but for now it is still necessary for a human controller to zoom in on individual faces using a joystick. A 2001 St. Petersburg Times article describes a Tampa police officer scanning the weekend crowd in Ybor City, checking 457 faces out of some 125,000 tourists and revelers in an evening.

Let’s do some quick math. The police are only scanning 457 out of 125,000 people on a given night, or about 0.4%. That means that even if ten known bad guys from the watch-list are in the crowd, there’s still only about a 4% chance that even one of them will be looked at by the system. That number drops to 0.4% if there’s only one bad guy in the crowd that night.
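
Here’s that arithmetic spelled out as a quick back-of-the-envelope sketch (my own numbers, not the police department’s):

```python
# Chance that a watch-listed person in the crowd is even looked at,
# given one operator scanning 457 of ~125,000 faces in an evening.
scanned = 457
crowd = 125_000

p_scan = scanned / crowd
print(f"Chance any given person is scanned: {p_scan:.2%}")   # ~0.37%

for bad_guys in (1, 10):
    # probability that at least one of the bad guys gets scanned
    p_at_least_one = 1 - (1 - p_scan) ** bad_guys
    print(f"{bad_guys} bad guy(s) in the crowd: {p_at_least_one:.1%}")  # ~0.4%, ~3.6%
```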

Then there’s the chance that the face-recognition system doesn’t sound an alarm. A recently published evaluation of the Identix system used in Tampa gives a base hit rate of 77% (that is, 77% of people on a watch-list were correctly identified). However, that was with a watch-list of only 25 faces. The hit rate falls as the watch-list grows, dropping to 56% with a watch-list of 3,000 faces. According to the Associated Press, the Tampa database had over 24,000 mug shots on its watch-list. Then there’s the problem that mug shots were taken indoors and the surveillance cameras were outdoors. According to the evaluation, mixing indoor and outdoor images can reduce hit rates by around 40%. (The 40% reduction was seen on identity-verification tasks; the watch-list task is actually more difficult.) Finally, these results all assume a 1% false-positive rate, which across 457 scans works out to roughly five false alarms per night. Given all these (well-known) problems, it’s amazing anyone ever thought this was a good idea.
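
Chaining those published figures together gives a sense of just how long the odds were; this is my own rough estimate, since the evaluation reports each factor separately:

```python
# Rough end-to-end odds of catching one particular watch-listed person,
# multiplying the factors above as if independent (a simplification on my part).
p_scan = 457 / 125_000      # chance the person is even looked at (~0.37%)
hit_rate = 0.56             # hit rate at a 3,000-face watch-list; Tampa's was 24,000+
outdoor_penalty = 0.60      # indoor mug shots vs. outdoor cameras: ~40% reduction

p_catch = p_scan * hit_rate * outdoor_penalty
print(f"Chance of catching a given suspect on a given night: {p_catch:.3%}")  # ~0.12%

false_alarms = 0.01 * 457   # 1% false-positive rate over 457 scans
print(f"Expected false alarms per night: {false_alarms:.1f}")                 # ~4.6
```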

There are several reasons I hope this failure dissuades similar attempts by other law-enforcement communities. First, as a 2001 ACLU report on the Tampa system points out, our resources could be better spent, and face recognition can give us a false sense of security. Second, a face-recognition system in a public space gives the impression that everyone is a suspect, regardless of whether the system actually works. And finally, face-recognition technology continues to improve. It won’t happen in the next few years, but at some point the technology is going to reach the point where recognition is completely automated, highly accurate, and robust. When that happens, it will be possible to track large numbers of people as they go about their daily lives, and even track people retroactively from recorded video. Hopefully by then our society will be so inoculated against such privacy violations that these uses will be inconceivable.

Flash Voids

Science fiction author Larry Niven once described a world where people would instantly teleport to places where something interesting was happening, causing what he called “Flash Crowds.” Now the LA Times reports that movie makers are seeing the opposite problem: instant communication means that if the audience doesn’t like your movie on opening-night Friday, by Saturday you’ll have yourself a flash void:

“Today, there is just no hope of recovering your marketing costs if the film doesn’t connect with the audience, because the reaction is so quick — you are dead immediately,” said Bob Berney, head of Newmarket Films, which distributed “Whale Rider,” a well-received, low-budget New Zealand picture that grossed $12.8 million and has endured through the summer. “Conversely, if the film is there, then the business is there.”

Two things are going on here. The first is just that word-of-mouth is getting faster, which we already knew. That means the old strategy of hyping a bad movie so everyone sees it before the reviews come out won’t work much longer. The more important point, though, is that movie companies are seeing their carefully crafted ad campaigns overwhelmed by the buzz created by everyone’s texting, emailing and blogging. The shift in power cuts both ways: audience-pleasers like Bend It Like Beckham thrive almost entirely on buzz, while The Hulk was killed by buzz based partially on pirated pre-release copies, in spite of a huge marketing campaign.

Studios (and producers in general) will learn one of two lessons from this trend. Either they’ll decide they need to manipulate buzz by wooing mavens and carefully controlling how information is released, or, just possibly, they’ll follow the advice of Oren Aviv, Disney’s marketing chief: “Make a good movie and you win. Make a crappy movie and you lose.”

The ESP Game

What do ESP and Artificial Intelligence have in common? The ESP Game, a new game (and AI research project) recently discussed at IJCAI by CMU researcher Luis von Ahn.

Many AI researchers believe that the biggest barrier to creating human-like intelligence is that humans know millions of simple everyday facts. This ordinary knowledge ranges from knowing what a horse looks like to a simple fact like “people buy food in restaurants.” In the past, AI researchers would spend years painstakingly entering such information into huge databases, but now a new crop of researchers are leveraging the millions of Netizens who have nothing better to do than answer stupid questions all day to build these databases quickly and for free. One such site is the OpenMind Initiative (hosted by my own Ricoh Innovations), which is primarily being used by the MIT Media Lab to collect Common Sense Knowledge.

The latest foray into this space is the ESP Game. When you log into the game you are paired randomly with another player on the Net. Both you and your partner are shown the same 15 random images from the Web, one at a time. Your job is to type in as many words as possible to describe the image, with the goal of matching a word your partner has entered. When you agree on a word, you both get points and move on to the next image. Usually I don’t care for Web-based games, but I have to admit this one is compelling.
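
As a rough illustration of the round mechanics (my own toy sketch, not von Ahn’s actual implementation), the server effectively just watches both players’ guess streams for the first word they have in common:

```python
# Toy sketch of one ESP Game round: guesses stream in from both players, and
# the round ends on the first word they agree on, which becomes a label for
# the image. "Taboo" words (labels already collected for that image) don't
# count. This is my reconstruction of the rules, not real game code.
def play_round(guesses_a, guesses_b, taboo_words=()):
    seen_a, seen_b = set(), set()
    for a, b in zip(guesses_a, guesses_b):  # guesses arrive roughly in parallel
        for word, mine, theirs in ((a, seen_a, seen_b), (b, seen_b, seen_a)):
            word = word.lower().strip()
            if word in taboo_words:
                continue
            if word in theirs:
                return word                 # agreement: this word labels the image
            mine.add(word)
    return None                             # no match; in the real game you can pass

print(play_round(["animal", "horse", "brown"], ["farm", "brown", "horse"]))  # brown
```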

The real goal of the system is to generate a huge database of human-quality keywords for all the images on the Net. The task is huge: Google’s Image Search has already indexed over 425 million images by using the text that surrounds each image’s hyperlink. But the numbers are on von Ahn’s side: if only 5,000 people were to play the game throughout the day, all 425 million images would receive at least one label in a single month. Given that many game sites get over 10,000 players in a day, a few months is probably all von Ahn needs to fill out the whole database.
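
A quick sanity check on that claim (my own arithmetic, which assumes essentially round-the-clock play): the required rate works out to roughly two images per player per minute, aggressive but in the ballpark of how fast rounds actually go.

```python
# Implied labeling rate if 5,000 players label 425 million images in a month.
images = 425_000_000
players = 5_000
days = 31

per_player_per_day = images / (players * days)
print(round(per_player_per_day))                 # ~2,742 images per player per day
print(round(per_player_per_day / (24 * 60), 1))  # ~1.9 images per minute, nonstop
```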

Micropayments finally here?

I’m probably the last on the block to have heard about this, but Scott McCloud, the author of Understanding Comics, has finally come out with an online comic available for a micropayment of 25 cents. Or rather, he came out with it over a month ago, but I just found out about it today. As you might expect from Scott, he’s put the new medium (Macromedia Flash in this case) to good use without losing the fundamental comic-book feel. It was a quarter well-spent, especially since I could download the content to my computer and feel like I actually got something I can call “my copy.”

Payments are made through BitPass, a new startup out of Stanford that allows you to open an account with as little as three dollars and a credit card or PayPal account. The whole process was quick and painless, as is the payment process itself. There’s not too much content you can purchase through BitPass yet, but it looks like they’re building up a solid content base as they go through their beta-testing. Content providers seem to still be figuring out how the market will play out for different kinds of media: models range from the donation cups that are already common with PayPal, to purchase-and-download, to a “30 reads in 90 days” pay-per-view kind of model.

And now just in case I wasn’t quite the last person on the Internet to have heard about this, you know too.

And the DMCA be damned…

Here are a few free Mac programs I’ve recently come across that make it easy to exercise your rights to fair use. Which is to say, these are programs that allow you to back up, time-shift, space-shift, or quote digital media that you have bought and paid for but that the Content Cartel would rather you not be able to manipulate. Windows users will have to find their own equivalents (they’re bound to be out there) or just break down and buy a Mac.

  • DVDbackup: A program that copies a DVD to disk. It can also change or remove region codes, remove the Macrovision Analog Protection System that prevents copying DVD movies to video tapes, and decrypt the Content Scrambling System (CSS) that prevents copying of commercial DVD content to another digital storage medium. Simple drag-and-drop interface. Freeware. Note that some uses of this program may be illegal in the U.S. or any other country that has granted legal protection to any business model that can be encoded in digital rights management technology.
  • OpenShiva: Converts a VOB (DVD video) file to MPEG-4 video with AAC audio. This will reduce the size of a full-length feature film from about 4.7 gigabytes to only 1 gigabyte without substantial loss in quality. Simple interface, with lots of options including cropping and scaling of the final output. Open source (GPL). Note that for commercial DVDs you will need to use something like DVDbackup to decrypt the CSS encryption first.
  • WireTap 1.0.0: This program goes right to the sound drivers and records any audio playing on your Mac, including sound snippets from DVD movies, games, iChat AV conversations, or Internet radio. Free product from Ambrosia Software, the people who make the Snapz Pro X video/screen-capture software.

A Fair and Balanced Look at Hydrogen Fuel

(Happy Fair and Balanced Friday everyone!)

A few days ago I blogged about the economics of hydrogen cars. As a follow-up, I’ve recently come across a report from the Rocky Mountain Institute on hydrogen power: Twenty Hydrogen Myths. A summary of the report’s conclusions can be found here.

The gist of the RMI report is that hydrogen fuel is extremely efficient; a hydrogen fuel-cell car is 2-3 times more efficient than a gasoline car and 1.5 times more efficient than a hybrid gas-electric car (page 11). However, hydrogen is also difficult to transport because of its low energy-to-volume ratio, so RMI’s transition strategy (page 13, published in detail here) is to distribute energy in a different form, most likely natural gas, and then generate hydrogen close to where it’s needed. Building complexes would each have their own natural-gas-to-hydrogen converters, and the hydrogen would then be used to run fuel cells to generate electricity. Excess hydrogen would be used to refuel hydrogen-powered cars during off-peak hours. These cars would initially be in company fleets, but as the infrastructure develops RMI sees the model expanding to sell fuel to cars in the neighborhood. Ultimately, natural gas would be supplanted by renewable sources such as wind and solar as those technologies become more cost-effective.

I don’t have the expertise to judge the arguments made in the report, but on their face they sound compelling. Most of all I’m pleased with RMI’s overall message: you don’t need to choose between environmentally friendly business practices and the bottom line. Rather than argue that corporate fat-cats need to give up their profits so we can have cleaner air, RMI is creating road maps that show how businesses can improve the environment by acting in their own economic self-interest. Assuming these road maps stand the test of the market, that sounds a lot more effective (and valuable to society) than raging against the machine or trying to pass ham-handed government regulation, especially in today’s political environment.

Kaltix and personalized search

There are some interesting rumors floating around about Kaltix, a stealth start-up out of the Stanford WebBase Project. This is the same group that created the PageRank algorithm and later spun out a little start-up called Google. As you might expect with a company in stealth mode, we’re still long on speculation and short on facts, but it looks like their main technology is a faster way to compute PageRank, the algorithm used by Google to rank search hits based on the Web’s link structure.

This is interesting because it would allow Google (or any other search engine) to quickly recalculate personalized indexes for each and every user. After seeding a personal index with my bookmarks file, Google would know that when I search for “Jaguar” I’m probably interested in the latest version of Apple’s OS, not the car or the cat. The CNET article has a good overview, but Jeffrey Heer’s blog offers a nice perspective from a researcher who happens to be housemates with one of the Kaltix founders.
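
For the curious, here is a minimal sketch of what “personalized PageRank” means (my illustration of the general idea, not Kaltix’s algorithm): the random surfer’s occasional teleport jumps go back to a personal seed set, such as my bookmarks, rather than being spread uniformly over the whole Web, which biases the resulting ranks toward pages near my interests.

```python
# Minimal sketch of personalized PageRank (my illustration, not Kaltix's code).
# The surfer follows links with probability `damping`, and otherwise teleports
# back to a *personal* seed set (e.g. my bookmarks) instead of a uniform jump.
def personalized_pagerank(links, personal, damping=0.85, iters=50):
    """links: {page: [outgoing pages]}; personal: {page: weight}, weights sum to 1."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) * personal.get(p, 0.0) for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: hand its mass back to the personal seed set
                for q, w in personal.items():
                    new[q] += damping * rank[p] * w
        rank = new
    return rank

web = {
    "apple.com/macosx": ["apple.com"],
    "apple.com": ["apple.com/macosx"],
    "jaguar-cars.example": ["apple.com"],
}
# Seeded with a bookmark for Apple's OS page, "jaguar" queries would lean Apple-ward.
print(personalized_pagerank(web, personal={"apple.com/macosx": 1.0}))
```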

There are a lot of question marks still, and I’m not yet convinced that Kaltix’s technology is the crown jewel that Heer and the CNET article make it out to be. Speedy indexing is necessary for large-scale personalized search, but you still need to create a profile from something. The real question will be whether a search engine can generate a personal profile that helps disambiguate the searches people make in actual use. Add to this the need to keep personal information like browser history from being transmitted to outside companies and you have a tall order. I’m not saying these problems can’t be solved, but as far as I know they haven’t been solved yet. I expect Kaltix will get bought by one of the big search companies, but it will still be several years before we see personalized search running on any large (non-intranet) scale.

Identity Theft and the Need for a New Common Sense

A couple of stories have come up in the last two days that highlight how the way law and business determine identity isn’t keeping up with technology. One story is about identity theft and the other about computer security violations, but both have a common thread: technology has made it so that our common-sense assumptions about how to verify someone’s identity no longer work.

The first is a lengthy Washington Post article about identity theft. The driving story is about Michael Berry, whose identity was stolen by an ex-con who proceeded to rack up debt and eventually commit murder all while living under Berry’s name. Around this driving story the article gives a good analysis of just how incredibly easy and common this kind of identity theft is today.

It used to be that identifying someone was a long-term, high-touch operation. You’d get paychecks from a local business, deposit checks at the local bank branch, and write checks to the local grocery store. Over time all these entities would get to know you, and your identity would become firmly entrenched in the system. Now that society is more mobile, that system doesn’t work, and we’re finding that the replacement system of asking for Social Security numbers or a mother’s maiden name doesn’t work too well either. Currently banks have to eat any monetary losses that come from identity-theft fraud, but they do not have to take responsibility for damage caused to a person’s credit rating or reputation (a position recently upheld by the South Carolina Supreme Court). That means that, as the law stands now, the economic incentives encourage more convenience and less security than would be the case if banks had to take the total cost of identity theft into account.

The second story is from yesterday’s New York Times, which reported that a British man was cleared of child pornography charges after his computer was found to have been infected by nearly a dozen Trojan-horse programs. Mr. Green, who lost custody of his daughter and spent nine days in prison and three months in a “bail hostel” because of the charges, has claimed all along that his computer was infected and that it even dialed into the Internet when no one was home.

In this case the question is whether Green is responsible for the material on his own computer. Not long ago, if a crime was committed in a particular house, the perpetrator could only be one of a handful of people. For these data crimes, the person actually downloading porn onto Green’s computer could have been literally anyone in the world. Similar arguments have been made about open Wi-Fi access points and “zombie” computers that are used as launching pads for attacks on other sites on the Net. As the Times article points out, there are two issues here. One is that bad guys could use such security problems as a defense; the other is that it really is a valid defense:

“The scary thing is not that the defense might work,” said Mark Rasch, a former federal computer crime prosecutor. “The scary thing is that the defense might be right,” and that hijacked computers could be turned to an evil purpose without an owner’s knowledge or consent.

The general problem is that our old common-sense ideas of identity no longer hold, or can’t be applied in our hyper-convenient and mobile society. I’m not necessarily in control of my own networked computer. I’m not the only person who knows the last four digits of my SSN. And the person handling my application has almost certainly never seen me before, and that’s no cause for alarm. Perhaps technology will come to the rescue in the form of biometrics that can prevent identity theft while still preventing governmental abuses. Perhaps regulation will come to the rescue, in the form of systems for challenging faulty information and rules ensuring that those responsible for security have the incentive to maintain it. Probably a combination of these will be required, but in the meantime I expect the problem to get worse before it gets better.

Guided Voting

Eugene Volokh has an interesting post about guided voting over at the Volokh Conspiracy (also discussed at Edward Felten’s Freedom to Tinker).

Guided voting already exists in basic form. I’m knowledgeable about a few political issues, but when it comes to local candidates or ballot initiatives outside my area of expertise I rely on party affiliation or endorsements from friends or organizations I trust to “tell” me how to vote.

Prof. Volokh’s point is that, like it or not, Internet voting will lead to a much greater role for guided voting. Today’s ballots have each candidate’s party affiliation printed on them, but if I want to know how, say, the National Organization for Women feels about a candidate I need to do my homework in advance and bring a cheat sheet. Volokh paints a future where I could go to a trusted third-party site, say suggestedvote.com, and check off the organizations I would like to guide my vote. The website would then produce a suggested ballot that aggregates the recommendations of all the organizations I picked, possibly weighting organizations differently in case they conflict on a particular issue. Then, with a single keystroke, my suggested ballot could be cast. The advantage of such a system, so the argument goes, is that the influence currently held by our two main political parties would be diluted and the political process would become more diverse.
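
To make that aggregation step concrete, here is a toy sketch of how a site like the hypothetical suggestedvote.com might weigh conflicting endorsements (the data format and weighting scheme are my invention, purely for illustration):

```python
# Weighted aggregation of endorsements: each subscribed group gets a weight,
# and conflicting positions on an item are resolved by summing the weights
# behind each position. Group names and ballot items are made up.
from collections import defaultdict

def suggest(endorsements, weights):
    """endorsements: {group: {item: position}}; weights: {group: weight}."""
    tallies = defaultdict(lambda: defaultdict(float))
    for group, picks in endorsements.items():
        for item, position in picks.items():
            tallies[item][position] += weights.get(group, 1.0)
    # take the highest-weighted position for each ballot item
    return {item: max(positions, key=positions.get) for item, positions in tallies.items()}

print(suggest(
    {"group-a.example": {"prop-12": "yes", "governor": "jones"},
     "group-b.example": {"prop-12": "no"}},
    weights={"group-a.example": 2.0, "group-b.example": 1.0}))
# -> {'prop-12': 'yes', 'governor': 'jones'}
```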

While I like the idea in principle, I think there are two improvements that could be made to Prof. Volokh’s scenario:

First, there is no reason to have a third-party gatekeeper such as suggestedvote.com. A more general and egalitarian approach would be for election boards to publish a standard XML ballot, against which any interested party could publish its own itemized recommendations. I would be able to subscribe to recommendations from now.org, aclu.org, or even volokh.com, just like I currently subscribe to RSS feeds to read several blogs at once. Of course, a site like suggestedvote.com could still offer to host RSS or similar recommendation feeds for anyone who doesn’t have their own website.
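
Here is a hypothetical sketch of what a published ballot and a recommendation feed might look like, and how a voter’s software could line them up; the XML element names and domain names are my own inventions for illustration, not any real election-board standard:

```python
# Hypothetical machine-readable ballot plus one third-party recommendation feed.
import xml.etree.ElementTree as ET

ballot_xml = """<ballot election="2003-11-04" county="Example County">
  <measure id="prop-12" title="Library Bond"/>
  <contest id="governor"><candidate id="smith"/><candidate id="jones"/></contest>
</ballot>"""

feed_xml = """<recommendations source="example-advocacy-group.org">
  <recommend item="prop-12" position="yes" note="Funds branch libraries"/>
  <recommend item="governor" position="jones"/>
</recommendations>"""

ballot = ET.fromstring(ballot_xml)
feeds = [ET.fromstring(feed_xml)]   # one parsed feed per group the voter subscribes to

# Collect endorsements per ballot item, then show them next to each item so the
# voter still marks every measure by hand rather than filing a one-click ballot.
endorsements = {}
for feed in feeds:
    for rec in feed.findall("recommend"):
        endorsements.setdefault(rec.get("item"), []).append(
            (feed.get("source"), rec.get("position"), rec.get("note")))

for item in ballot:
    print(item.get("id"), item.get("title") or "", endorsements.get(item.get("id"), []))
```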

Second, I am quite frightened by the concept of one-click voting. Behavioral psychologists have repeatedly shown that people will tend to do what an interface makes easy to do (see The Adaptive Decision Maker for a nice analysis). This is why there are heated debates about things like motor-voter registration and whether voting booths should allow a single lever to cast all votes for a single party, policies that would be no-brainers if changing the convenience of voting didn’t also change who votes and for what. Given that any change we make will affect how people act, I want the system to encourage thoughtful individual contributions to our democracy, not a constituency of sheep.

This is not to say there should be no voting guides at all, but rather that people should still be forced to actually see and touch every ballot measure, even if only to find and check their favorite party’s nominee. Each ballot measure and candidate would be accompanied by labels representing endorsements by each guide the voter has chosen, possibly with links from the endorsement to a short argument explaining the group’s reasoning. Rather than follow an automatically aggregated recommendation, voters would judge for themselves whom to follow on each individual issue. Voters might even choose guides from organizations with whom they explicitly disagree, either to vote against their measures or to see opposing viewpoints. This system would not be much more inconvenient than the one-click voting Prof. Volokh suggests, but it would ensure individual voter involvement while still providing the main advantages of voting guides.
