Media Technology

Guided Voting

Eugene Volokh has an interesting post about guided voting over at the Volokh Conspiracy (also discussed at Edward Felten’s Freedom to Tinker).

Guided voting already exists in basic form. I’m knowledgeable about a few political issues, but when it comes to local candidates or ballot initiatives outside my area of expertise I rely on party affiliation or endorsements from friends or organizations I trust to “tell” me how to vote.

Prof. Volokh’s point is that, like it or not, Internet voting will lead to a much greater role for guided voting. A candidate’s party affiliation is already printed on today’s ballots, but if I want to know how, say, the National Organization for Women feels about a candidate, I need to do my homework in advance and bring a cheat sheet. Volokh paints a future where I could go to a trusted third-party site, say suggestedvote.com, and check off the organizations I would like to guide my vote. The website would then produce a suggested ballot that aggregates all the recommendations of the organizations I picked, possibly weighting organizations differently when they conflict on a particular issue. Then with a single keystroke my suggested ballot could be filed. The advantage of such a system, so the argument goes, is that the influence currently held by our two main political parties would be diluted and the political process would become more diverse.
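To make the aggregation step concrete, here is a toy sketch of how such a site might combine endorsements, using weights to break conflicts. Everything in it, from the organization names to the scoring rule, is my own invention for illustration; Prof. Volokh doesn’t specify a mechanism.

```python
# Toy sketch of weighted endorsement aggregation. The organizations, weights,
# and scoring rule are all invented for illustration; Prof. Volokh's post
# doesn't specify a mechanism.

weights = {"org_a": 2.0, "org_b": 1.0}             # how much I trust each group
endorsements = {                                    # each group's published picks
    "Measure 1": {"org_a": "yes", "org_b": "no"},
    "Measure 2": {"org_b": "yes"},
}

def suggest(measure):
    """Return a weighted-majority suggestion for one ballot item."""
    score = 0.0
    for org, position in endorsements[measure].items():
        score += weights[org] if position == "yes" else -weights[org]
    if score > 0:
        return "yes"
    if score < 0:
        return "no"
    return "no suggestion"

for measure in endorsements:
    print(measure, "->", suggest(measure))
# Measure 1 -> yes   (org_a outweighs org_b)
# Measure 2 -> yes
```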

While I like the idea in principle, I think there are two improvements that could be made to Prof. Volokh’s scenario:

First, there is no reason to have a third-party gatekeeper such as suggestedvote.com. More general and egalitarian would be for election boards to publish a standard XML ballot and then any interested party could publish their own itemized recommendations. I would be able to subscribe to recommendations from now.org, aclu.org, or even volokh.com just like I currently subscribe to RSS feeds to read several blogs at once. Of course, a site like suggestedvote.com could still offer to host RSS or similar recommendation feeds for anyone who doesn’t have their own website.
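As a very rough sketch of what that could look like: the election board publishes the ballot in a standard format, each organization publishes a small feed of per-item endorsements, and the voter’s software merges the two. The XML schemas, element names, and organization in this sketch are all hypothetical; no such standard exists today.

```python
# Hypothetical sketch: merge a published ballot with per-organization
# recommendation feeds. The XML formats and feed contents are invented;
# no such standard actually exists.
import xml.etree.ElementTree as ET

ballot_xml = """
<ballot election="2003-11-04">
  <item id="measure-a">City Parks Bond</item>
  <item id="council-3">City Council, Seat 3</item>
</ballot>
"""

feeds = {  # one feed per organization the voter subscribes to
    "example-org": """
      <recommendations>
        <item ref="measure-a" position="yes"/>
        <item ref="council-3" position="Jane Doe"/>
      </recommendations>
    """,
}

ballot = {item.get("id"): item.text for item in ET.fromstring(ballot_xml)}
for org, feed_xml in feeds.items():
    for rec in ET.fromstring(feed_xml):
        print(f'{ballot[rec.get("ref")]}: {org} recommends {rec.get("position")}')
```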

Second, I am quite frightened by the concept of one-click voting. Behavioral psychologists have repeatedly shown that people will tend to do what an interface makes easy to do (see The Adaptive Decision Maker for a nice analysis). This is why there are heated debates about things like motor-voter registration and whether voting booths should allow a single lever to cast all votes for a single party, policies that would be no-brainers if changing the convenience of voting didn’t also change who votes and for what. Given that any change we make will affect how people act, I want the system to encourage thoughtful individual contributions to our democracy, not a constituency of sheep.

This is not to say there should be no voting guides at all, but rather that people should still be forced to actually see and touch every ballot measure, even if it is only to find and check their favorite party’s nominee. Each ballot measure and candidate would be accompanied by labels representing endorsements by each guide the voter has chosen, possibly with links from the endorsement to a short argument explaining the group’s reasoning. Rather than follow an automatically aggregated recommendation, voters would judge for themselves who to follow on each individual issue. Voters might even choose guides from organizations with whom they explicitly disagree, either to vote against their measures or to see opposing viewpoints. This system would not be much more inconvenient than the one-click voting Prof. Volokh suggests, but it would ensure individual voter involvement while still offering the main advantages of voting guides.

Howard Dean, Blogs, and the Fireside Chat

Mark Glaser at Online Journalism Review has an interesting look at Howard Dean’s Blog For America campaign blog. Glaser’s main point: Dean’s blog is building support and a sense of connection to his campaign, even though almost all the entries are from his campaign staff rather than Dean himself. As Dan Gillmor puts it, the official Dean blog is a campaign document, not a candidate document.

The article raises the question of how blogs (and by extension, the Web) are best used in a political campaign. For Dean, blogforamerica.com is a tool for organizing grassroots support. It lets supporters know what they can do to help, and more importantly it keeps them informed about the bigger picture of how the campaign is moving. Dick Morris even goes so far as to declare grassroots Internet organization the new replacement for television ads. But as Glaser points out, you don’t get the feeling of being in Dean’s head like you would if he were writing his own daily entries. In fact, you get a better sense of Dean’s thought process from the posts he made as a guest blogger at Lawrence Lessig’s site than from his own blog.

Certainly there’s nothing wrong with how Dean is using his blog, and his success so far has shown (yet again) just how powerful the Net can be for grassroots organization. But I can also see why people would wish for more personal contact through his blog. Like email, blogs are an informal and even intimate medium, better suited to throwing out ideas that are from the heart, or at least from the hip, than to well-rehearsed campaign speeches. A personal blog gives everyday voters a seat on the campaign bus, where they can discuss the issues in detail and watch as positions become fully formed. One of the problems with politics, especially around campaign season, is that everything is so well crafted that you never hear the doubts and alternatives that had to be considered in crafting the final message. This was brought home to me after 9/11 when, for a period of about three months, it seemed like the curtains had been lifted and politicians were all thinking out loud.

The next question in my mind is how this sort of medium can be used once a candidate is elected. Dean has commented that he might have a White House blog if he’s elected, and of course the White House already publishes Press Secretary briefings on the Net. Perhaps the White House blog could become the 21st century’s fireside chat?

Art history, optics and scientific debate

Our Chief Scientist, David Stork, has been doing some side research in art history for the past few years. In particular he’s been assessing a theory that artist David Hockney presents in his book “Secret Knowledge”: that artists as early as 1430 secretly used optical devices such as mirrors and lenses to help them create their almost photo-realistic paintings.

The theory is fascinating. Art historians know that some masters used optical devices in the 1600s, but Hockney and his collaborator, physicist Charles Falco, claim that as early as 1430 the masters of the day used concave mirrors to project the image of a subject onto their canvas. The artist would then trace the inverted image. This alone, Hockney and his supporters claim, can account for the perfect perspective and “opticality” of paintings that suddenly appear in this time period.

If the theory itself is fascinating, I find Stork’s refutation even more interesting. Stork’s argument rests on several points. First, he argues, there is no textual evidence that artists ever used such devices. Hockney and his supporters counter that the information was of course kept as a closely guarded trade secret, and that is why no description of it survives; it isn’t clear how these masters also kept the powerful patrons whose portraits they were painting from discussing the secret. Stork’s second argument is that, quite simply, the paintings’ perspective isn’t all that perfect after all. They look quite good, obviously, but if you actually do the geometry on the paintings Hockney presents as perfect, you find that supposedly parallel lines don’t converge to a single vanishing point as they would in a photograph. And third, Stork points out that the method Hockney suggests would require huge mirrors to get the focal lengths seen in the suspected paintings: mirrors far larger than the technology of the time could produce.
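To give a feel for the geometry behind that last point, here is a back-of-the-envelope calculation using the standard concave-mirror equation. The subject and canvas distances are numbers I made up for illustration; they are not Stork’s figures.

```python
# Back-of-the-envelope concave-mirror geometry (illustrative numbers only,
# not Stork's actual analysis). Mirror equation: 1/f = 1/d_o + 1/d_i,
# magnification m = -d_i / d_o (negative means the image is inverted).

d_o = 3.0   # assumed subject-to-mirror distance, meters
d_i = 1.5   # assumed mirror-to-canvas distance, meters

f = 1.0 / (1.0 / d_o + 1.0 / d_i)   # focal length the mirror would need
m = -d_i / d_o                       # image size relative to the subject

print(f"required focal length: {f:.2f} m")   # 1.00 m
print(f"magnification: {m:.2f}")             # -0.50 (half size, inverted)
```

The required focal length scales directly with the working distances, which is where Stork’s size objection comes from.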

My analysis is a little unfair to Hockney, since I’ve only seen Stork’s presentation, but I must say I’m impressed with Stork’s argument. Hockney’s theory is quite media-pathic. It’s a mystery story that wraps history, secrecy, geniuses, modern science and great visuals all in one; no wonder it’s captured people’s attention! Unfortunately, I expect Stork is right about one of the theory’s less fun aspects: it’s probably dead wrong.

For those interested, a CBS documentary on Hockney’s theory will be rebroadcast this Sunday, August 3rd, on 60 Minutes.

NPUC 2003 Trip Report

A couple weeks ago I attended the New Paradigms in Using Computers workshop at IBM Almaden. It’s always a small, friendly one-day gathering of Human-Computer Interaction researchers and practitioners, with invited talks from both academia and industry. This year’s focus was on the state of knowledge in our field: what we know about users, how we know it and how we learn it.

The CHI community has a good camaraderie, especially among the industry researchers. I suspect that’s because we’re all used to being the one designer, artist or sociologist surrounded by a company of computer scientists and engineers. Nothing brings together a professional community like commiseration, especially when it’s mixed with techniques for how to convince your management that what you do really is valuable to the company.

One of the interesting questions of the workshop was how to share knowledge within the interface-design community. Certainly we all benefit by sharing knowledge, standards and techniques, but for the industry researchers much of that information is a potential competitive advantage and therefore kept confidential. Even so, especially here in Silicon Valley, that kind of institutional knowledge gets out into the community as a whole through employment churn, as researchers change labs throughout their careers.

Here are my notes from several of the talks. Standard disclaimers apply: these are just my notes of the event, subject to my own filters and memory lapses. If you want the real story, get it from the respective horses’ mouths.

Electronic Voting Gets Burned

Electronic voting is getting slammed this week. First, Dan Gillmor’s Sunday column took election officials to task for not insisting on physical paper trails that can be audited should the results of an election come into doubt. Then on Wednesday several computer security experts at Johns Hopkins University and Rice University published a scathing analysis of the design of the Diebold AccuVote-TS, one of the more commonly used electronic voting systems, based on source code that the company accidentally leaked to the Internet back in January. The exploits they describe include the ability to make home-grown smart cards that allow multiple voting, the ability to tamper with ballot texts, denial-of-service attacks, the potential to connect an individual voter to how he voted, and potentially the ability to modify votes after they have been cast. The New York Times and Gillmor’s own blog have since picked up the report. Diebold has responded to the analysis, but so far they haven’t addressed the most damning criticisms.
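To give a flavor of the smart-card problem, here is a generic toy illustration (emphatically not Diebold’s actual code) of the kind of flaw that lets someone mint their own voter cards: if the terminal simply believes whatever the card says about itself, anyone with a card programmer can vote as often as they like.

```python
# Generic toy illustration of the class of flaw behind home-grown voter cards.
# This is NOT Diebold's code; it just shows why a terminal has to authenticate
# cards cryptographically rather than trust a self-reported card type.

class SmartCard:
    def __init__(self, card_type):
        self.card_type = card_type          # e.g. "VOTER" or "ADMIN"

def terminal_accepts(card):
    # Vulnerable pattern: believe whatever the card claims to be.
    return card.card_type == "VOTER"

# An attacker with a blank programmable card can mint unlimited "voter" cards:
forged = SmartCard("VOTER")
print(terminal_accepts(forged))   # True -- the terminal would issue a ballot

# A sound design would verify something the attacker cannot forge, e.g. a
# challenge-response signature made with a key held by the election authority.
```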

There are several lessons to be learned from all this:

US to add RF-ID to passports by October 2004

Frank Moss, US deputy assistant secretary for Passport Services, announced at the recent Smart Card Alliance meeting that production of new smart-card-enabled passports will begin by October 26, 2004. Current plans call for a contactless smart chip based on the ISO 14443 standard, which was originally designed for the payments industry. The 14443 standard supports a data exchange rate of about 106 kilobits per second, much higher than that of the widely deployed Speedpass system.
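For a rough sense of what that data rate means in practice, here is a quick back-of-the-envelope estimate. The 15 KB payload is my own guess at the size of a compressed facial image, not a figure from the announcement, and protocol overhead is ignored.

```python
# Rough read-time estimate at the ISO 14443 base rate. The payload size is an
# assumption (a small compressed facial image); protocol overhead is ignored.

rate_bits_per_sec = 106_000      # ISO 14443 base data rate, ~106 kbit/s
payload_bytes = 15_000           # assumed biometric payload

seconds = payload_bytes * 8 / rate_bits_per_sec
print(f"~{seconds:.1f} s to read the payload")   # about 1.1 s
```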

IEEE deciding short-range wireless standard this week

Nearly six years to the day after the process was started, it looks like the IEEE is homing in on a single standard for fast (around 100 Mbit/s), short-range (< 10 m), low-power, low-cost wireless communication. The standard, which will be IEEE 802.15.3a, comes out of the IEEE Wireless Personal Area Network (WPAN) working group. Unlike cellular or Wi-Fi networks, the point of a personal area network is to communicate with other devices that are in the room with you. For example, a high-speed WPAN would allow your PDA to stream video directly to a large-screen TV. Alternatively, your core CPU could wirelessly communicate with medical sensors, control buttons, displays and ear-pieces, all distributed around the body. The standard fills much the same niche as Bluetooth (the first standard adopted by the working group, also known as 802.15.1), but the new technology is significantly faster than Bluetooth (up to 100 times faster, according to champions of the technology).
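To put the claimed speed difference in perspective, here is a quick comparison. The 700 MB file size and the 1 Mbit/s figure for first-generation Bluetooth are round numbers of my own choosing, purely for illustration.

```python
# Rough transfer-time comparison with round, illustrative numbers:
# ~100 Mbit/s for 802.15.3a as proposed, ~1 Mbit/s for early Bluetooth.

file_megabytes = 700                         # roughly an hour of compressed video
file_bits = file_megabytes * 1_000_000 * 8

for name, rate_bits_per_sec in [("802.15.3a", 100e6), ("Bluetooth 1.x", 1e6)]:
    minutes = file_bits / rate_bits_per_sec / 60
    print(f"{name}: ~{minutes:.0f} min")
# 802.15.3a: ~1 min; Bluetooth 1.x: ~93 min
```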

Trade news columnists who know more than I do about this are picking Texas Instruments’ proposal for OFDM UWB (that’s Orthogonal Frequency Division Multiplexing Ultra Wide Band, thank you for asking) as the likely winner. Assuming it is chosen, TI’s UWB business development manager says we can expect to see the first UWB products hitting the marketplace in 2005.

Update: The standard did not receive enough votes to pass, and will be voted on again in mid-September.
