June 2005

Kennedy on the vaccine/autism link

Robert F. Kennedy Jr. has a good overview of the potential link between mercury-based preservatives used in vaccines from 1981 to 2003 and the simultaneous huge increase in autism. I’d sort of lumped this theory in with fluoridation paranoia, but it looks like there’s a lot of concern among level-headed people who have looked at the data and have the expertise to understand it. If what this article implies is accurate, this whole thing could blow up into another Thalidomide.

Grokster glass half full?

I’m feeling very “glass is half full” about today’s Supreme Court decision in MGM v. Grokster, which essentially says a technology company can be guilty of contributory copyright infringement if it induces others to violate copyright (e.g. through advertising). Sure, it leaves lots of questions hanging, which no doubt will be clarified only after much more blood on the field. On the whole I’m still optimistic about where this might lead us in the long run:

  1. Peer-to-peer sharing of copyrighted files will continue unabated; that was a given regardless of the decision. I think this is a good thing not because I’ve some anarchist itch that needs scratching, but because the content cartel have been abusing their government-granted limited monopoly for decades, and they’ve become damaging to society. Congress is a part of the problem, so there’s no remedy there. Monopolies don’t change willingly, and the only two forces I see moving the cartel to serve their customers instead of abusing them are file-sharing on the one hand and empowered artists eliminating the middleman on the other. My big hope is that somehow these two groups figure out the right way to join forces.
  2. The decision makes it harder for companies like Grokster to profit from copyright violations with a wink and a nod. That makes it less likely that we trade one set of market-masters and gatekeepers for another, and it also makes it a little easier for the cartel to survive as they (hopefully) reform into good corporate citizens. The message I’d take from this decision if I were MGM would be “OK, we’ll still have our clock cleaned if we don’t offer our customers something better than free, distributed, somewhat undercover, all-volunteer-provided infrastructure, but at least we don’t have to compete with funded commercial versions as well.” Or at least they’ll have a reprieve until legal alternatives like voluntary collective licensing or Creative Commons start to take their market share.
  3. It’ll make P2P technologies even more decentralized and distributed. We’d never have seen the P2P technology explosion if the RIAA had embraced the posting of MP3s on the Web back in the mid-90s. Like a weakened virus that trains the immune system to later fight off a full-strength disease, we’re building the technology and mindset that will one day help protect us against far worse threats than the Disney Secret Police. Which leads me to my last hope…
  4. Maybe this will encourage businesses and technologists working with P2P to raise public awareness of the P2P applications that don’t involve copyright violations, like load-balancing, wireless ad-hoc networking, store-and-forward networks for the third world, and censorship-resistant communication.

Owning David

The cover story of this month’s Communications of the ACM is a mostly technical paper called Protecting 3D Graphics Content. In it, Stanford graduate student David Koller and professor Mark Levoy describe a method for copy-protecting 3D graphical models such as the ones generated in the Stanford Digital Michelangelo Project. Most copy-restriction schemes are snake oil — they rely on a mythological “trusted client” that prevents the user from accessing the raw bits being displayed on his own monitor by his own CPU. The Stanford team has gotten around this problem for 3D models by keeping the high-resolution model on their own server and only sending 2D images to the client. The client uses a much lower resolution 3D model for the interface to choose new camera angles. The method seems sound, though the authors admit it might still be possible to reconstruct the 3D model using machine-vision techniques on their 2D images.
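
The core idea can be sketched in a few lines. This is a toy illustration of the architecture only — the class name, API shape, and the simple orthographic projection are my own assumptions, not the Stanford system, which renders real images and adds further defenses:

```python
# Toy sketch of server-side 3D protection: the server keeps the
# full-resolution geometry and only ever returns a flat 2D projection
# for a requested camera angle. The depth data never leaves the server.
import math

class ModelServer:
    def __init__(self, points):
        # points: list of (x, y, z) vertices -- held server-side only
        self._points = points

    def render(self, yaw_degrees):
        """Rotate the model about the y-axis, then drop the depth axis.
        The client receives only (x, y) pairs -- a flat "image"."""
        a = math.radians(yaw_degrees)
        projected = []
        for x, y, z in self._points:
            xr = x * math.cos(a) + z * math.sin(a)  # rotate about y
            projected.append((xr, y))               # orthographic: discard depth
        return projected

# A client can request as many views as it likes...
server = ModelServer([(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 2.0)])
view = server.render(90.0)
# ...but each reply is strictly 2D. Recovering z would require combining
# many views -- the machine-vision reconstruction risk the authors concede.
assert all(len(p) == 2 for p in view)
```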

Scholarly researchers are often faced with difficult ethical trade-offs, especially when developing new technology. The authors state their own particular quandary in the second paragraph:

These statues represent the artistic patrimony of Italy’s cultural institutions, and our contract with the Italian authorities permits distribution of the 3D models only to established scholars for noncommercial use. Though everyone involved would like the models to be available for any constructive purpose, the digital 3D model of the David would quickly be pirated if it were distributed without protection: simulated marble replicas would be manufactured outside the provisions of the parties authorizing creation of the model.

Michelangelo’s David
(image courtesy of and © Mary Ann Sullivan)

In other words, as academics, Koller and Levoy understand how the free sharing of history, art and scholarly data contributes to society as a whole, but they also recognize that without some assurance that this data won’t be shared freely, the authorities who control access to the original works won’t allow any sharing at all. The museum would also like to see the data shared with fellow researchers, but doesn’t want to see it used to make replicas without its approval and license fees. Unfortunately, I think Koller, Levoy and the museum all fall the wrong way on this question.

One of the things that jars me in reading this piece is the liberal sprinkling of the words “theft” and “piracy,” as in “For the digital representations of valuable 3D objects (such as cultural heritage artifacts), it is not sufficient to detect piracy after the fact; piracy must be prevented.” Here the authors are making a fundamentally false assumption. I cannot speak to Italian law, but under U.S. law (and thus for any viewer of the data in the U.S.) exact models of works that are in the public domain are not themselves copyrightable. To quote the 1999 decision by the US District Court SDNY in Bridgeman Art Library, LTD. v. Corel Corp.:

There is little doubt that many photographs, probably the overwhelming majority, reflect at least the modest amount of originality required for copyright protection. “Elements of originality . . . may include posing the subjects, lighting, angle, selection of film and camera, evoking the desired expression, and almost any other variant involved.” [n39] But “slavish copying,” although doubtless requiring technical skill and effort, does not qualify. [n40] As the Supreme Court indicated in Feist, “sweat of the brow” alone is not the “creative spark” which is the sine qua non of originality. [n41] It therefore is not entirely surprising that an attorney for the Museum of Modern Art, an entity with interests comparable to plaintiff’s and its clients, not long ago presented a paper acknowledging that a photograph of a two-dimensional public domain work of art “might not have enough originality to be eligible for its own copyright.” [n42]

What Koller and Levoy are protecting is not the museum’s property — the 3D models of David belong to the public at large. What they are protecting is a business model, one that is based on preventing the legitimate and legal sharing of information. Their opponents in this battle are neither thieves nor pirates; they are merely potential competitors for the museum’s gift shop, or customers the museum fears losing.

It is understandable that museums want to protect an income stream they’ve come to rely on to accomplish their mission. It is also understandable that Koller and Levoy are willing to help museums maintain their gate-keeper status in exchange for at least limited access to the treasures they hold. After all, isn’t partial access to the world’s greatest artwork in digital form better than no access at all?

In this case I fear the short-term gain will be outweighed by long-term loss. Information technology and policy is in a state of incredibly rapid flux, with new systems constantly building on top of what came before like a giant coral reef. This project takes us another step down the path of information gate-keepers and toll-road bandits, a path that rewards the hoarding of information and the blockade of communication rather than the promotion of the useful arts and sciences. It also reinforces the message that we are all cultural sharecroppers, that education and the arts are reserved for those with the money to pay for them, and that the public domain is just a myth that thieves tell themselves to assuage a guilty conscience. This is the exact opposite of what our universities and museums represent, and it undermines the project participants’ legitimate desire to share these treasures with the world. We can do better, and we should.

Update 6/21/05: A longer version of the CACM article (published at SIGGRAPH 2004) can be found here; it includes a video demonstration (QuickTime MPEG-4, 20MB).

Downing Street Memo slow burn?

The Times Online has just released a transcript of an official Cabinet Office brief that presumably was the basis for the discussion later detailed in the Downing Street Memo they released last month. Unlike the previous leak, this transcript is missing the last page and has been anonymized by the Times to protect the source.

Given that the Downing Street Memo story is just now getting traction in the US media (a month after being leaked) it’ll be interesting to see how this new story is handled here, especially given how understandably gun-shy the US media is right now about criticizing the administration without being damn sure the sources can be verified. According to an interview USA Today’s Mark Memmott gave On The Media (MP3), the main reason they delayed so long in talking about the first leak was that they couldn’t verify the memo themselves.

The Party Party

About a year ago I mentioned how the “virtual band” The Bots had put up a public-domain database of G.W. Bush audio clips to help would-be remixers get started. Their own rap Fuzzy Math is fun, but IMO succeeds mostly on the novelty of hearing GW say things he’d never cop to in real life. The mixes over at The Party Party (by the band (me)™) take GW mixing to the next level. The music stands on its own, and they turn the inherent choppiness of the mixing process into an advantage by fitting it to the natural rhythm of the music. (Be sure to check out My name is RX, a cross between Bush, Sympathy for the Devil and Slim Shady.)

MD5 collision for two meaningful documents

Researchers at RUB and the University of Mannheim have a nice demonstration of how the recently discovered attack on the MD5 hash function can be used to fool someone into signing one document when they think it’s another:

Recently, the world of cryptographic hash functions has turned into a mess. A lot of researchers announced algorithms (“attacks”) to find collisions for common hash functions such as MD5 and SHA-1 (see [B+, WFLY, WY, WYY-a, WYY-b]). For cryptographers, these results are exciting – but many so-called “practitioners” turned them down as “practically irrelevant”. The point is that while it is possible to find colliding messages M and M’, these messages appear to be more or less random – or rather, contain a random string of some fixed length (e.g., 1024 bit in the case of MD5). If you cannot exercise control over colliding messages, these collisions are theoretically interesting but harmless, right? In the past few weeks, we have met quite a few people who thought so.

With this page, we want to demonstrate how badly wrong this kind of reasoning is! We hope to provide convincing evidence even for people without much technical or cryptographical background.

Their method is simple and clever. They use the newly discovered attack to generate two random-looking strings with the same hash value (call them R1 and R2). Then they put one of those at the start of a document in a “high-level” page description language like PostScript and tack on something along the lines of “if the preamble was R1, print an innocuous message I can get signed; otherwise print the real message I want signed.” A well-known property of MD5 (and of similarly constructed hash functions) is that if R1 and R2 are the same length and have the same hash value, then R1+suffix hashes to the same value as R2+suffix for any common suffix. So depending on whether they use R1 or R2 as the preamble, they get two very different documents with the same hash value.
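
The extension property falls out of MD5’s Merkle-Damgård structure: the hash processes a message block by block, carrying only a small internal state forward, so once two different prefixes reach the same state, every shared suffix preserves the collision. The sketch below demonstrates this with a deliberately weak 8-bit toy hash of the same chaining structure (real MD5 collisions require the Wang et al. attack; the toy compression function is purely illustrative):

```python
# A toy Merkle-Damgard hash: tiny state, weak compression function,
# but the same block-by-block chaining structure as MD5.
def toy_hash(message: bytes) -> int:
    state = 17                                   # fixed initial state (like MD5's IV)
    for b in message:
        state = ((state ^ b) * 131 + 7) % 256    # weak compression function
    return state

# Brute-force a collision: two different 2-byte messages that leave the
# toy hash in the same internal state (analogous to the attack's R1, R2).
seen = {}
r1 = r2 = None
for i in range(256):
    for j in range(256):
        m = bytes([i, j])
        h = toy_hash(m)
        if h in seen and seen[h] != m:
            r1, r2 = seen[h], m
            break
        seen[h] = m
    if r1 is not None:
        break

assert r1 != r2 and toy_hash(r1) == toy_hash(r2)

# The extension property: any common suffix preserves the collision.
# This is what lets the PostScript trick swap R1 for R2 under one signature.
suffix = b"if preamble == R1 print innocuous text else print real message"
assert toy_hash(r1 + suffix) == toy_hash(r2 + suffix)
```

Because the state space here is only 256 values, a collision appears almost immediately; MD5’s 128-bit state is what made finding R1 and R2 a research result rather than a loop.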
