Media Technology

And the DMCA be damned…

Here are a few free Mac programs I’ve recently come across that make it easy to exercise your fair-use rights. Which is to say, these are programs that let you back up, time-shift, space-shift, or quote digital media that you have bought and paid for but that the Content Cartel would rather you not be able to manipulate. Windows users will have to find their own equivalents (they’re bound to be out there) or just break down and buy a Mac.

  • DVDbackup: A program that copies a DVD to disk. It can also change or remove region codes, remove the Macrovision Analog Protection System that prevents copying DVD movies to video tape, and decrypt the Content Scrambling System (CSS) that prevents copying commercial DVD content to another digital storage medium. Simple drag-and-drop interface. Freeware. Note that some uses of this program may be illegal in the U.S. or any other country that has granted legal protection to any business model that can be encoded in digital rights management technology.
  • OpenShiva: Converts a VOB (DVD video) file to MPEG-4 with AAC audio. This reduces a full-length feature film from about 4.7 gigabytes to only 1 gigabyte without substantial loss in quality (see the rough bitrate arithmetic after this list). Simple interface, with lots of options including cropping and scaling of the final output. Open source (GPL). Note that for commercial DVDs you will need to use something like DVDbackup to decode the CSS encryption first.
  • WireTap 1.0.0: This program goes right to the sound drivers and records any audio playing on your Mac, including sound snippets from DVD movies, games, iChat AV conversations, or Internet radio. A free product from Ambrosia Software, the people who make the Snapz Pro X video/screen-capture software.
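
To get a rough feel for the compression those OpenShiva numbers imply, here is a quick back-of-the-envelope calculation. The two-hour running time and the 128 kbit/s audio allowance are my own illustrative assumptions, not figures from the program itself:

```python
# Rough sketch: what average video bitrate fits a feature film into a given size?
# Assumed numbers (not from the post): two-hour runtime, 128 kbit/s AAC audio.

def avg_video_kbps(size_gb: float, runtime_min: float, audio_kbps: float = 128.0) -> float:
    """Average video bitrate (kbit/s) that fits size_gb over runtime_min minutes."""
    total_kbits = size_gb * 1024 * 1024 * 8   # GB -> kbit (binary gigabytes, for simplicity)
    total_seconds = runtime_min * 60
    return total_kbits / total_seconds - audio_kbps

print(f"4.7 GB DVD image, 2 h:   ~{avg_video_kbps(4.7, 120):,.0f} kbit/s video")
print(f"1 GB MPEG-4 target, 2 h: ~{avg_video_kbps(1.0, 120):,.0f} kbit/s video")
```

Roughly 1 Mbit/s of MPEG-4 video plus AAC audio is the ballpark that lets a two-hour film squeeze into a gigabyte, which is consistent with the claim that the quality loss is modest rather than dramatic.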

Kaltix and personalized search

There are some interesting rumors floating around about Kaltix, a stealth start-up out of the Stanford WebBase Project. This is the same group that created the PageRank algorithm, which was later spun out into a little start-up called Google. As you might expect with a company in stealth mode, we’re still long on speculation and short on facts, but it looks like their main technology is a faster way to compute PageRank, the algorithm Google uses to rank search hits based on the Web’s link structure.

This is interesting because it would allow Google (or any other search engine) to quickly recalculate personalized indexes for each and every user. After seeding a personal index with my bookmarks file, Google would know that when I search for “Jaguar” I’m probably interested in the latest version of Apple’s OS, not the car or the cat. The CNET article has a good overview, but Jeffrey Heer’s blog adds a nice perspective from a researcher who happens to be housemates with one of the Kaltix founders.
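
For the curious, here is a minimal sketch of the basic idea behind personalized PageRank: instead of the random surfer occasionally teleporting to any page uniformly, it teleports back to a small set of seed pages (say, taken from a bookmarks file), which biases the whole ranking toward the user’s interests. The toy graph, damping factor, and seeding scheme below are my own illustrative assumptions, not anything Kaltix or Google has published.

```python
# Minimal personalized PageRank by power iteration. The teleport (jump) vector is
# concentrated on the user's seed pages instead of being uniform over all pages.

def personalized_pagerank(links, seeds, damping=0.85, iters=50):
    """links: {page: [pages it links to]}; seeds: pages from, e.g., a bookmarks file."""
    pages = list(links)
    teleport = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    rank = dict(teleport)
    for _ in range(iters):
        new = {p: (1.0 - damping) * teleport[p] for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dangling page: hand its rank back to the seed pages.
                for q in pages:
                    new[q] += damping * rank[p] * teleport[q]
        rank = new
    return rank

# Toy example: seeding with an Apple bookmark pulls "Jaguar the OS" above "Jaguar the car".
links = {
    "apple.com": ["apple.com/jaguar"],
    "apple.com/jaguar": ["apple.com"],
    "jaguarcars.example": ["apple.com/jaguar", "jaguarcars.example/xk8"],
    "jaguarcars.example/xk8": ["jaguarcars.example"],
}
print(personalized_pagerank(links, seeds={"apple.com"}))
```

The hard part, of course, isn’t the iteration itself but computing and storing a separate vector like this for every user fast enough to matter, which is exactly where a speedup in PageRank computation would pay off.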

There are a lot of question marks still, and I’m not yet convinced that Kaltix’s technology is the crown jewel that Heer and the CNET article make it out to be. Speedy indexing is necessary for large-scale personalized search, but you still need to build a profile from something. The real question will be whether a search engine can generate a personal profile that helps disambiguate the searches people make in actual use. Add to this the need to keep personal information like browser history from being transmitted to outside companies and you have a tall order. I’m not saying these problems can’t be solved, but as far as I know they haven’t been solved yet. I expect Kaltix will get bought by one of the big search companies, but it will still be several years before we see personalized search running on any large (non-intranet) scale.

Identity Theft and the Need for a New Common Sense

A couple of stories have come up in the last two days that highlight how the way law and business determine identity isn’t keeping up with technology. One story is about identity theft and the other about computer security violations, but both share a common thread: technology has made it so that our common-sense assumptions about how to establish someone’s identity no longer work.

The first is a lengthy Washington Post article about identity theft. The driving story is about Michael Berry, whose identity was stolen by an ex-con who proceeded to rack up debt and eventually commit murder all while living under Berry’s name. Around this driving story the article gives a good analysis of just how incredibly easy and common this kind of identity theft is today.

It used to be that identifying someone was a long-term, high-touch operation. You’d get paychecks from a local business, deposit checks at the local bank branch, and write checks to the local grocery store. Over time all these entities would get to know you, and your identity would become firmly entrenched in the system. Now that society is more mobile that system doesn’t work, and we’re finding that the replacement system of asking for Social Security numbers or a mother’s maiden name doesn’t work too well either. Banks currently have to eat any monetary losses that come from identity-theft fraud, but they do not have to take responsibility for damage caused to a person’s credit rating or reputation (as recently upheld by the South Carolina Supreme Court). That means that, as the law stands now, the economic incentives encourage more convenience and less security than would be the case if banks had to take the total cost of identity theft into account.

The second story is from yesterday’s New York Times, which reported that a British man was exonerated of child pornography charges after his computer was found to be infected with nearly a dozen Trojan-horse programs. Mr. Green, who lost custody of his daughter and spent nine days in prison and three months in a “bail hostel” because of the charges, has claimed all along that his computer was infected and that it even dialed into the Internet when no one was home.

In this case the question is whether Green is responsible for the material on his own computer. Not long ago, if a crime was committed in a particular house then the perpetrator could only be one of a handful of people. For these data crimes, the person actually downloading porn onto Green’s computer could have been literally anyone in the world. Similar arguments have been made about open Wi-Fi access points and “zombie” computers that are used as launching pads for attacks on other sites on the Net. As the Times article points out, there are two issues here. One is that bad guys could use such security problems as a defense; the other is that it sometimes really is a valid defense:

“The scary thing is not that the defense might work,” said Mark Rasch, a former federal computer crime prosecutor. “The scary thing is that the defense might be right,” and that hijacked computers could be turned to an evil purpose without an owner’s knowledge or consent.

The general problem is that our old common-sense ideas of identity no longer hold, or can’t be applied in our hyper-convenient and mobile society. I’m not necessarily in control of my own networked computer. I’m not the only person who knows the last four digits of my SSN. And the person handling my application has almost certainly never seen me before, and that’s no cause for alarm. Perhaps technology will come to the rescue in the form of biometrics that can prevent identity theft while still preventing governmental abuses. Perhaps regulation will come to the rescue in the form of systems for challenging faulty information, and by ensuring that those responsible for security have the incentive to maintain it. Probably a combination of these will be required, but in the meantime I expect the problem to get worse before it gets better.

Guided Voting

Eugene Volokh has an interesting post about guided voting over at the Volokh Conspiracy (also discussed at Edward Felten’s Freedom to Tinker).

Guided voting already exists in basic form. I’m knowledgeable about a few political issues, but when it comes to local candidates or ballot initiatives outside my area of expertise I rely on party affiliation or endorsements from friends or organizations I trust to “tell” me how to vote.

Prof. Volokh’s point is that, like it or not, Internet voting will lead to a much greater role for guided voting. Today’s ballots have a candidate’s party affiliation printed on them, but if I want to know how, say, the National Organization for Women feels about a candidate I need to do my homework in advance and bring a cheat sheet. Volokh paints a future where I could go to a trusted third-party site, say suggestedvote.com, and check off the organizations I would like to guide my vote. The website would then produce a suggested ballot that aggregates the recommendations of all the organizations I picked, possibly weighting organizations differently in case they conflict on a particular issue. Then with a single keystroke my suggested ballot could be filed. The advantage of such a system, so the argument goes, is that the influence currently held by our two main political parties would be diluted and the political process would become more diverse.

While I like the idea in principle, I think there are two improvements that could be made to Prof. Volokh’s scenario:

First, there is no reason to have a third-party gatekeeper such as suggestedvote.com. It would be more general and egalitarian for election boards to publish a standard XML ballot and let any interested party publish its own itemized recommendations. I would be able to subscribe to recommendations from now.org, aclu.org, or even volokh.com, just as I currently subscribe to RSS feeds to read several blogs at once. Of course, a site like suggestedvote.com could still offer to host RSS-style recommendation feeds for anyone who doesn’t have their own website.
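
To make the idea concrete, here is a small sketch of how a published ballot plus per-organization recommendation feeds might be combined into a suggested ballot. The XML formats, the organizations’ positions, and the weighting scheme are all hypothetical; no such standard actually exists.

```python
# Sketch of the "standard XML ballot + published recommendation feeds" idea.
# Every schema, measure, and recommendation below is made up for illustration.
import xml.etree.ElementTree as ET

BALLOT_XML = """
<ballot election="2003-11-04">
  <measure id="prop-12" title="School bonds"/>
  <measure id="prop-13" title="Open space tax"/>
</ballot>
"""

# Each subscribed organization publishes its own feed of <recommend> items,
# analogous to an RSS feed; the voter also assigns each organization a weight.
FEEDS = {
    ("now.org", 2.0):    '<recommendations><recommend measure="prop-12" vote="yes"/></recommendations>',
    ("volokh.com", 1.0): '<recommendations><recommend measure="prop-12" vote="no"/>'
                         '<recommend measure="prop-13" vote="yes"/></recommendations>',
}

def suggest(ballot_xml, feeds):
    measures = [m.get("id") for m in ET.fromstring(ballot_xml).findall("measure")]
    scores = {m: 0.0 for m in measures}           # positive = yes, negative = no
    for (org, weight), feed_xml in feeds.items():
        for rec in ET.fromstring(feed_xml).findall("recommend"):
            m = rec.get("measure")
            if m in scores:
                scores[m] += weight if rec.get("vote") == "yes" else -weight
    return {m: ("yes" if s > 0 else "no" if s < 0 else "no recommendation")
            for m, s in scores.items()}

print(suggest(BALLOT_XML, FEEDS))
```

In this scheme the voter’s weights resolve conflicts between organizations; in the spirit of the second point below, the output is best treated as annotations on the ballot rather than something to file with a single keystroke.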

Second, I am quite frightened by the concept of one-click voting. Behavioral psychologists have repeatedly shown that people will tend to do what an interface makes easy to do (see The Adaptive Decision Maker for a nice analysis). This is why there are heated debates about things like motor-voter registration and whether voting booths should allow a single lever to cast all votes for a single party, policies that would be no-brainers if changing the convenience of voting didn’t also change who votes and for what. Given that any change we make will affect how people act, I want the system to encourage thoughtful individual contributions to our democracy, not a constituency of sheep.

This is not to say there should be no voting guides at all, but rather that people should still be forced to actually see and touch every ballot measure, even if it is only to find and check their favorite party’s nominee. Each ballot measure and candidate would be accompanied by labels representing endorsements from each guide the voter has chosen, possibly with links from the endorsement to a short argument explaining the group’s reasoning. Rather than follow an automatically aggregated recommendation, voters would judge for themselves whom to follow on each individual issue. Voters might even choose guides from organizations with whom they explicitly disagree, either to vote against their measures or to see opposing viewpoints. This system would not be much more inconvenient than the one-click voting Prof. Volokh suggests, but it would ensure individual voter involvement while still providing the main advantages of voting guides.

Howard Dean, Blogs, and the Fireside Chat

Mark Glaser at Online Journalism Review has an interesting look at Howard Dean’s Blog For America campaign blog. Glaser’s main point: Dean’s blog is building support and a sense of connection to his campaign, even though almost all the entries are from his campaign staff rather than Dean himself. As Dan Gillmor puts it, the official Dean blog is a campaign document, not a candidate document.

The article raises the question of how blogs (and by extension, the Web) are best used in a political campaign. For Dean, blogforamerica.com is a tool for organizing grassroots support. It lets supporters know what they can do to help, and more importantly it keeps them informed about the bigger picture of how the campaign is moving. Dick Morris even goes so far as to declare grass-roots Internet organization the new replacement for television ads. But as Glaser points out, you don’t get the feeling of being in Dean’s head the way you would if he were writing his own daily entries. In fact, you get a better sense of Dean’s thought process from the posts he made as a guest blogger at Lawrence Lessig’s site than from his own blog.

Certainly there’s nothing wrong with how Dean is using his blog, and his success so far has shown (yet again) just how powerful the Net can be for grass-roots organization. But I can also see why people would wish for more personal contact through his blog as well. Like email, blogs are an informal and even intimate medium, better suited to throwing out ideas that are from the heart, or at least from the hip, than to well-rehearsed campaign speeches. A personal blog gives everyday voters a seat on the campaign bus, where they can discuss the issues in detail and watch as positions become fully formed. One of the problems with politics, especially around campaign season, is that everything is so well crafted that you can never hear the doubts and alternatives that had to be considered in crafting the final message. This was brought home to me after 9/11 when, for a period of about three months, it seemed like the curtains had been lifted and politicians were all thinking out loud.

The next question in my mind is how this sort of medium can be used once a candidate is elected. Dean has commented that he might have a White House blog if he’s elected, and of course the White House already publishes Press Secretary briefings on the Net. Perhaps the White House blog could become the 21st century’s fireside chat?

Art history, optics and scientific debate

Our Chief Scientist, David Stork, has been doing some side research in art history for the past few years. In particular, he’s been assessing a theory that artist David Hockney presents in his book “Secret Knowledge”: that artists as early as 1430 secretly used optical devices such as mirrors and lenses to help them create their almost photo-realistic paintings.

The theory is fascinating. Art historians know that some masters used optical devices in the 1600s, but Hockney and his collaborator, physicist Charles Falco, claim that as early as 1430 the masters of the day used concave mirrors to project the image of a subject onto their canvas. The artist would then trace the inverted image. This alone, Hockney and his supporters claim, can account for the perfect perspective and “opticality” of paintings that suddenly appear in this time period.

If the theory itself is fascinating, I find Stork’s refutation even more interesting. Stork’s argument rests on several points. First, he argues, there is no textual evidence that artists ever used such devices. Hockney and his supporters counter that the information was of course kept as a closely guarded trade secret, which is why no description of it survives. It isn’t clear how these masters also kept the powerful patrons whose portraits they were painting from discussing the secret. Stork’s second argument is that, quite simply, the perspective in the paintings isn’t all that perfect after all. They look quite good, obviously, but if you actually do the geometry on the paintings Hockney presents as perfect, you see that supposedly parallel lines don’t meet at a vanishing point as they would in a photograph. And third, Stork points out that the method Hockney suggests would require huge mirrors to get the focal lengths seen in the suspect paintings: mirrors far larger than the technology of the time could produce.
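
To give a flavor of the geometry involved, here is the standard concave-mirror relation that ties focal length to the subject and canvas distances. The numbers are my own made-up illustration, not figures from Stork’s or Hockney’s analyses:

```python
# Concave-mirror projection: 1/f = 1/d_o + 1/d_i, with magnification |m| = d_i / d_o.
# The subject distance and magnification below are illustrative assumptions only.

def required_focal_length(subject_dist_m: float, magnification: float) -> float:
    """Focal length (m) of a concave mirror projecting a real, inverted image of a
    subject at subject_dist_m meters with the given linear magnification."""
    d_o = subject_dist_m
    d_i = magnification * d_o          # distance from mirror to the canvas
    return (d_o * d_i) / (d_o + d_i)

# A sitter 2 m from the mirror, painted at one-third life size:
f = required_focal_length(2.0, 1.0 / 3.0)
print(f"focal length ~ {f:.2f} m, canvas ~ {2.0 / 3.0:.2f} m from the mirror")
```

Plug in the working distances implied by the suspect paintings and, Stork argues, the mirror you would need is far beyond what fifteenth-century technology could produce.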

My analysis is a little unfair to Hockney, as I’ve only seen Stork’s presentation, but I must say I’m impressed with his argument. Hockney’s theory is quite media-pathic: a mystery story that wraps history, secrecy, geniuses, modern science and great visuals all in one — no wonder it’s captured people’s attention! Unfortunately, I expect Stork is right about the theory’s least fun aspect: it’s probably dead wrong.

For those interested, a CBS documentary on Hockney’s theory will be rebroadcast this Sunday, August 3rd, on 60 Minutes.

NPUC 2003 Trip Report

A couple weeks ago I attended the New Paradigms in Using Computers workshop at IBM Almaden. It’s always a small, friendly one-day gathering of Human-Computer Interaction researchers and practitioners, with invited talks from both academia and industry. This year’s focus was on the state of knowledge in our field: what we know about users, how we know it and how we learn it.

The CHI community has a good camaraderie, especially among the industry researchers. I suspect that’s because we’re all used to being the one designer, artist or sociologist surrounded by a company of computer scientists and engineers. Nothing brings together a professional community like commiseration, especially when it’s mixed with techniques for how to convince your management that what you do really is valuable to the company.

One of the interesting questions of the workshop was how to share knowledge within the interface-design community. Certainly we all benefit by sharing knowledge, standards and techniques, but for the industry researchers much of that information is a potential competitive advantage and therefore kept confidential. In practice, though, especially here in Silicon Valley, that kind of institutional knowledge still makes its way out into the community as a whole through employment churn, as researchers change labs throughout their careers.

Here are my notes from several of the talks. Standard disclaimers apply: these are just my notes of the event, subject to my own filters and memory lapses. If you want the real story, get it from the respective horses’ mouths.

Electronic Voting Gets Burned

Electronic voting is getting slammed this week. First, Dan Gillmor’s Sunday column took election officials to task for not insisting on physical paper trails that can be followed should the results of an election be in doubt. Then on Wednesday several computer security experts at Johns Hopkins University and Rice University published a scathing analysis of the design of the Diebold AccuVote-TS, one of the more commonly used electronic voting systems, based on source code that the company accidentally leaked to the Internet back in January. Exploits include the ability to make home-grown smart cards that allow multiple voting, the ability to tamper with ballot texts, denial-of-service attacks, the potential to connect an individual voter to how he voted, and potentially the ability to modify votes after they have been cast. The New York Times and Gillmor’s own blog have since picked up the report. Diebold has since responded to the analysis, but so far it hasn’t addressed the most damning criticisms.

There are several lessons to be learned from all this:

US to add RF-ID to passports by October 2004

Frank Moss, US deputy assistant secretary for Passport Services, announced at the recent Smart Card Alliance meeting that production of new smart-card-enabled passports will begin by October 26, 2004. Current plans call for a contactless smart chip based on the ISO 14443 standard, which was originally designed for the payments industry. The 14443 standard supports a data exchange rate of about 106 kilobits per second, much higher than that of the widely deployed Speedpass system.
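
As background (my own note, not part of Moss’s announcement), the familiar 106 kbit/s figure falls straight out of the 13.56 MHz carrier that ISO 14443 cards use: at the base rate, one bit lasts 128 carrier cycles.

```python
# ISO 14443 base data rate: one bit lasts 128 cycles of the 13.56 MHz carrier.
carrier_hz = 13.56e6
bit_rate_bps = carrier_hz / 128
print(f"base rate ~ {bit_rate_bps / 1000:.1f} kbit/s")   # ~105.9 kbit/s, rounded to "106"
```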

IEEE deciding short-range wireless standard this week

Nearly six years to the day after the process was started, it looks like the IEEE is homing in on a single standard for fast (around 100 Mbit/s), short-range (< 10 m), low-power, low-cost wireless communication. The standard, which will be IEEE 802.15.3a, comes out of the IEEE Wireless Personal Area Network (WPAN) working group. Unlike cellular or Wi-Fi networks, the point of a personal area network is to communicate with other devices that are right there in the room with you. For example, a high-speed WPAN would let your PDA stream video directly to a large-screen TV. Alternatively, your core CPU could wirelessly communicate with medical sensors, control buttons, displays and earpieces distributed around the body. The standard fills much the same niche as Bluetooth (the first standard adopted by the working group, also known as 802.15.1), but the new technology is significantly faster (up to 100 times faster, according to its champions).
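
For a rough sense of what those numbers mean in practice, here is a quick comparison. The ~1 Mbit/s Bluetooth-class figure and the 700 MB file size are my own assumptions for illustration:

```python
# How long to move a CD-sized video file over a WPAN at the quoted rates?
# Assumed: a 700 MB file, ~1 Mbit/s for Bluetooth-class links (illustrative only).

def transfer_seconds(megabytes: float, mbit_per_s: float) -> float:
    return megabytes * 8.0 / mbit_per_s

video_mb = 700.0
print(f"802.15.3a @ 100 Mbit/s: {transfer_seconds(video_mb, 100.0):6.0f} s")
print(f"Bluetooth @ ~1 Mbit/s:  {transfer_seconds(video_mb, 1.0):6.0f} s")
```

That works out to roughly a minute versus an hour and a half, which is the difference between streaming video to the TV across the room and not bothering.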

Trade-news columnists who know more about this than I do are picking Texas Instruments’ proposal for OFDM UWB (that’s Orthogonal Frequency Division Multiplexing Ultra Wide Band, thank you for asking) as the likely winner. Assuming it is chosen, TI’s UWB business development manager says we can expect to see the first UWB products hit the marketplace in 2005.

Update: The standard did not receive enough votes to pass, and will be voted on again in mid-September.
