Media Technology

Misperceptions, the Media and the Iraq War

The Program on International Policy Attitudes at the University of Maryland and Knowledge Networks have just released a report that sheds a lot of light on the much-reported polls that show Americans have serious misconceptions about the facts surrounding the Iraq War. (PIPA’s press release and questionnaire are also available).

At the heart of the PIPA study are three questions:

  • Is it your impression that the US has or has not found clear evidence in Iraq that Saddam Hussein was working closely with the al Qaeda terrorist organization?
  • Since the war with Iraq ended, is it your impression that the US has or has not found Iraqi weapons of mass destruction?
  • Thinking about how all the people in the world feel about the US having gone to war with Iraq, do you think the majority of people favor the US having gone to war?

The answers, by the way, are “no clear evidence has been found,” “no weapons of mass destruction have been found,” and “the majority of people in the world do not favor the US having gone to war.” If you got at least one wrong, don’t feel too bad: only 30% of the people surveyed across three polls (June, July, and August-September) got all three correct.

The report is well worth reading, but here’s a brief summary of their findings:


NeoMedia coming out with portable price-checker

NeoMedia has just announced a service where you can take a picture of an ISBN barcode (the barcode printed on every book jacket) with a cellphone camera and be automatically taken to the Amazon.com page for that book. From their press release:

“Now, shoppers can take out their Nokia(R) 3650 camera phone at Barnes & Noble, Border’s, or just about any other book store, and just take a picture of the ISBN on the book to comparison shop at Amazon.com right on the screen of their wireless Web browser,” Jensen said. “It’s kind of a high-tech version of the Santa Claus at Macy’s(R) sending Christmas shoppers to Gimbels in the classic movie, ‘Miracle on 34th Street’,” he mused.
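To make the lookup concrete, here is a minimal sketch of the back end such a service might use once the barcode has been decoded into an ISBN. The validation step and the URL pattern are my own assumptions for illustration; NeoMedia hasn’t published how its service actually works.

    # Hypothetical sketch: validate a decoded ISBN-10 and build an Amazon lookup URL.
    # The URL pattern is an assumption; for most books the ASIN happens to equal the ISBN-10.

    def isbn10_is_valid(isbn: str) -> bool:
        """Check the ISBN-10 checksum: the weighted digit sum must be divisible by 11."""
        digits = [10 if c in "Xx" else int(c) for c in isbn.replace("-", "")]
        if len(digits) != 10:
            return False
        return sum(d * w for d, w in zip(digits, range(10, 0, -1))) % 11 == 0

    def amazon_url(isbn: str) -> str:
        return "http://www.amazon.com/dp/" + isbn.replace("-", "")

    scanned = "0-06-097625-X"          # example ISBN as decoded from a barcode photo
    if isbn10_is_valid(scanned):
        print(amazon_url(scanned))     # the page the phone's browser would open

The interesting engineering is of course in decoding the barcode from a noisy camera-phone image in the first place, which this sketch conveniently skips.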

Gizmodo suggests this is Barnes & Noble’s worst nightmare, but I expect it won’t hurt the large chains, as their volume keeps prices fairly close to Amazon’s as it is. It’ll be harder on independent bookstores, but even then there’s a premium people are willing to pay for a book that’s already in their hot little hands. And because the customer is already standing in the store, book in hand, I expect that premium will be even larger than the usual bricks-and-mortar convenience markup.

The biggest question for me is whether “now is the time.” I first saw this kind of technology about six years ago, both in a class project at MIT and in Andersen Consulting’s (now Accenture’s) Shopper’s Eye project, and even briefly looked at doing a startup in this area just before the crash. It never quite felt like the time was right for this to go mainstream because the technology wasn’t in the hands of enough consumers. Clearly NeoMedia thinks we’re getting close.


CIA and ICT developing anti-terrorism training “game”

The CIA’s Counter-Terrorism Center (CTC) is working to develop training simulations with the help of the Institute for Creative Technologies, a center within the University of Southern California that specializes in combining artificial intelligence, virtual reality and techniques from the videogame and movie industries to create interactive simulations. The institute recently received accolades for its “Full Spectrum Warrior” project, which was designed as a training aid for the US Army but has also led to a commercial videogame for the Xbox. The Army project uses material developed with the Army Infantry School at Fort Benning and a rich AI engine to run trainees through both military and peacekeeping scenarios. For example, in one scenario the trainee plays an officer in charge of a unit that has just been involved in a traffic accident between a tank and a non-English-speaking civilian. If approved, the CIA’s simulation would allow analyst trainees to play themselves or the part of terrorist cell leaders, cell members, money-movers and facilitators.

The Washington Times, which broke the story, is highly critical of the project, comparing it to Vice Adm. John Poindexter’s ill-received terrorism futures market project and quoting unnamed military officials and other critics who call it “a ridiculous and absurd scheme that makes Poindexter’s project look good in comparison” and suggest that “the key issue here is the CTC misspending funds on silly, low-priority projects, exactly the kind of thing that forced Admiral Poindexter to resign.” A follow-up article, also in the Washington Times, quotes former congressman Bob Barr (R-GA) as saying “Perhaps this is the reason we were surprised by September 11. If it weren’t so serious, it would be comical… What we ought to be doing is focusing our money and attention in identifying terrorists and their associates so we can be on the watch for these characters, not playing video games.” The Sydney Morning Herald was slightly less critical, but also linked the project with Poindexter’s work.

It’s entirely possible that this project is too expensive (the CIA has not revealed the price tag) or that the simulation is in some way teaching the wrong lessons. However, the main criticism seems to be of the form “the CIA is wasting time playing video games,” which is patently absurd. Simulation role-playing has been an effective training tool in both the military and business for decades, and in fact much of the technology now seen in video games was originally developed for training U.S. Army officers. To suggest that the CIA should be out catching terrorists instead of playing video games is like suggesting the U.S. Army should be out fighting wars instead of wasting their time doing training exercises consisting of “running around with toy guns playing capture the flag.”

It’s pretty clear that there’s a thicket of political wrangling going on behind the scenes, and the Times story is a salvo fired by people who want this CIA project canceled. I’ve no idea whether this is a case of fighting over scarce funding, vengeance against the CTC, or an honest attempt to scuttle a project that won’t provide good training, and I won’t even begin to speculate. Hopefully someone with a better understanding of the ins and outs of intelligence and military politics (like Phil Carter at Intel Dump) will weigh in on this before long.


Breaking the brick

Intel’s Personal Server project, led by Ubiquitous Computing long-timer Roy Want, got some press this past week after it was shown at the Intel Developer Forum. The prototype is a 400MHz computer with Bluetooth, battery and storage, all about the size of a deck of cards. No screen and no keyboard — I/O is handled by whatever devices happen to be around, be they the display and keyboard on your desk, the large-screen projector in the conference room or your portable touch-screen. This concept isn’t new; it’s something that researchers in Ubiquitous Computing and Wearable Computing (including Roy) have been talking about for over a decade. But it is the right concept, and Moore’s Law is finally bringing it almost within reach.

There are three main reasons why this is the Right Thing(tm):

  • Your hands aren’t getting smaller. Handheld computers are now small enough that the limiting factor is screen and button size. Since our hands aren’t getting any smaller, we’re pretty much at the limit for everything-in-a-single-brick handhelds, at least for current applications. One of the ways out of that box is the wearable computing approach, where interfaces are spread around the body like clothing or jewelry. Displays are shrunk by embedding them directly into eyeglasses, tiny microphones are used for speech recognition, micro cameras and accelerometers are used for gesture and context recognition, and specialty input devices such as medical monitors get used instead of more generic input devices. One of the big difficulties with wearables is all the wires leading from the CPU/Disk/Battery unit to the I/O devices, and in fact this problem was a big motivating force behind the IEEE 802.15 short-range wireless standards, which include Bluetooth. Wireless isn’t a complete solution (you still have to worry about powering your I/O devices) but it’s a start.

    The other way to break the hand-size limit is the UbiComp approach: use whatever interfaces are in your surrounding area. When I’m at my desk I want to use my nice flat-panel display and ergonomic keyboard, not my black-and-white cellphone LCD. When I give a presentation I want to use the conference hall’s projector; I don’t need a full keyboard at all, just enough input to launch my Keynote presentation and change slides. Roy naturally leans towards this second approach, but as I’ve argued before, the Ubicomp and Wearables approaches work well together; there’s no need to choose.

  • Always the right tool for the job. Another advantage to breaking the CPU from the I/O is that it gets around an inherent conflict in interface design. On the one hand, designers will tell you that you always want the interface to fit the task. Use a hammer to drive nails and a screwdriver to turn screws, and all that. But in the mobile world you don’t want to carry around your cellphone, PDA, MP3 player, two-way pager, camera and laptop everywhere you go. When it comes to mobility, most people choose to carry a Swiss Army knife instead of a full toolchest, even though the one-size-fits-all interface won’t ever be quite right for the task. (That’s why I still carry my Danger Hiptop, which is great for text messaging but feels like I’m holding a bar of soap to my ear when I use it for voice.) When you break the brick, as it were, you can use one CPU, main battery, network connection and storage for all your devices. Then just bring whatever interfaces you need for the tasks you expect that day, and use interfaces in your environment when they’re available.

  • Thin clients don’t grow with Moore’s Law. An alternative to having your personal CPU with you at all times is to run a thin client that has just enough smarts to talk to a server over wireless. The server then does all the heavy lifting. The trouble with this approach is that thin clients rely mainly on two resources: wireless bandwidth and the rather significant battery power needed to reach the nearest cell tower. Unfortunately, these are the two resources that are growing most slowly. Since 1990, the RAM in mobile computers has improved a hundred-fold, CPUs 400-fold, and disk space a whopping 1200-fold. In that same time, long-haul wireless speed has improved only 20-fold and battery efficiency only three-fold. (Thanks to Thad Starner for those numbers.) And, of course, thin clients don’t work when you’re in a wireless deadzone.

It’s not clear when Intel (or Apple or Sony, for that matter) will finally come out with a successful Personal Server-style product. The hardware is just one piece of the puzzle; resource discovery, communication standards, good interface design and of course the all-important “killer app” still have to come together. But in spite of the hurdles yet to come, this is the right approach. I’m glad to see Intel is giving it the support it deserves.
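As an illustration of the resource-discovery piece, here is a minimal sketch of how a personal server might find the I/O devices around it. Everything here (the use of a UDP broadcast, the port number, the message format) is invented for illustration; the real project would presumably use Bluetooth service discovery or something similar.

    # A sketch, not Intel's implementation: the personal server shouts "hello" on
    # the local network and collects replies from nearby displays and keyboards.

    import json
    import socket

    DISCOVERY_PORT = 9999  # arbitrary port chosen for this sketch

    def discover_io_devices(server_name: str) -> list:
        """Broadcast an announcement and gather replies from nearby I/O devices."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.settimeout(2.0)
        hello = json.dumps({"personal_server": server_name}).encode()
        sock.sendto(hello, ("255.255.255.255", DISCOVERY_PORT))
        devices = []
        try:
            while True:
                reply, addr = sock.recvfrom(1024)
                # e.g. {"device": "projector", "resolution": "1024x768"}
                devices.append((addr[0], json.loads(reply)))
        except socket.timeout:
            pass
        return devices

    for address, description in discover_io_devices("my-pocket-server"):
        print(address, description)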


ISWC registration deadline this Friday

As a reminder for those who are interested in wearable computers, the early registration deadline for the 7th IEEE International Symposium on Wearable Computers is this Friday, September 26th. You can check out the advance program here.

I’ll be co-teaching the Introduction to Wearable Computers tutorial with Thad Starner, and am also tutorials chair and on the program committee.


Privacy and soft walls

I’ve been reading up on IBM’s recently announced WebFountain project. The system, which has been dubbed Google on steroids, spiders the Net and other databases and applies various data-mining, natural-language processing and pattern recognition techniques to the data. The current system uses 500 parallel-processing Linux boxes, all accessing about half a terabyte of storage in the basement of the IBM Almaden Research Center. IBM’s infrastructure allows clients to customize their searches and standing queries using a library that will “tokenize the data to identify people and companies, and discover patterns, trends and relationships in the data.” The technology is being offered as a service, and is already being sold through a partnership with Factiva. It is being marketed mainly for trend identification and for “reputation management,” where a company watches chat rooms, bulletin boards, newspapers and other sources to see what people are saying about it.
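To give a flavor of what a “reputation management” standing query does, here is a deliberately crude sketch. WebFountain’s actual pipeline involves large-scale entity extraction and natural-language processing; the word lists, company name and scoring below are all invented for illustration.

    # Toy sketch of a reputation-management query: find sentences that mention a
    # company and score them with a crude sentiment word list. Purely illustrative.

    import re

    POSITIVE = {"great", "reliable", "love", "recommend"}
    NEGATIVE = {"broken", "scam", "terrible", "avoid"}

    def reputation_hits(text: str, company: str):
        """Yield (sentence, score) for every sentence mentioning the company."""
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            if company.lower() in sentence.lower():
                words = set(re.findall(r"[a-z']+", sentence.lower()))
                yield sentence.strip(), len(words & POSITIVE) - len(words & NEGATIVE)

    posting = ("I love my Acme widget. "
               "My friend says Acme support is terrible, though.")
    for sentence, score in reputation_hits(posting, "Acme"):
        print(f"{score:+d}  {sentence}")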

I’m quite interested in the technology, and even have a friend from grad school who has been working on it (Hi Dan!). But the thing that got me thinking was a comment about privacy by Robert Morris, the director of IBM Almaden. As reported in the San Jose Mercury News:

The technology could potentially raise privacy concerns if companies turned its power on analyzing individuals. But Hart and Morris said both companies would protect user privacy.

“Anything we mine is public data on the Web,” Morris said.

But it isn’t yet clear how the company would restrict users trying to use the tool to invade someone’s privacy.

The quote is in line with the comment by The Economist: “No doubt some people will say it sounds a little intrusive. But all WebFountain does is reveal information that is hidden in plain sight.”

Unfortunately, the idea that anything findable on the Net is “public” is a dodge — “public data” is a simplification of what is a much more complex set of social rules. Counter-intuitive as it may sound, privacy rules are not primarily about keeping particular people from seeing information. Their primary purpose is to keep people from using the information in ways that would harm the person keeping it secret. This is why companies wink at an employee sharing trade secrets with a spouse but are adamant about not revealing them to potential competitors unless those competitors have first signed a non-disclosure agreement: the NDA explicitly restricts harmful uses of the information, making the privacy rules unnecessary.

The idea that privacy is a restriction on power was brought home to me a few years ago by an old fraternity brother of mine. Back when he was still finishing his PhD at MIT he got a call from an MIT campus policeman, who somewhat sheepishly explained that he was calling on behalf of an irate member of the Massachusetts Maritime Police Department. Apparently this maritime policeman had been surfing the Web and had come across a picture from my friend’s undergraduate fraternity days, showing him firing water balloons from a giant funnelator. The campus policeman said he was calling to inform my friend that slingshots are illegal in Massachusetts, and that he wanted to make sure that the device had been destroyed.

So here was a picture that was clearly “public” in that it had been published for anyone to see. The intended audience was anyone who was interested in our fraternity’s annual Water War, plus anyone else who might get a chuckle out of it. You could even say the intended audience was everyone in the world except for particularly humor-impaired members of the Massachusetts Maritime Police Department. If webservers had provided such vaguely-defined access rules, we certainly would have used them.

A more realistic idea of public vs. private spaces is one of intended use, with restrictions on access as a proxy for limiting that use. When I write an article for an academic journal, or even a blog entry, I expect to be called upon to defend my position. When I write a LiveJournal post I expect much less criticism, and I expect that the people who read it will be the sort who generally agree with me and will be accepting of whatever personal thoughts I write. Both are published on the Web, both are “public,” but different social rules are implied by the relative ease of access, ease of discovery, and the different communities that are most likely to come across my posts. Difficult access provides a kind of “soft wall” that restricts readership to certain communities, and the social rules of those communities provide a soft wall that limits how my information will be used. I expect most LiveJournal users would feel violated if information from their posts wound up being used in targeted marketing literature, even though most posts aren’t password-protected.

I don’t intend to slam WebFountain with this argument — WebFountain is just the latest technology that is moving soft walls around by changing the ground rules. It was also only a matter of time before such a service was offered. As a coworker of mine has pointed out, it is almost a certainty that the NSA has already developed such technology. (The argument goes: (a) the NSA would have to be really incompetent not to have done this, and (b) the NSA is not incompetent.) Given that this is likely, it seems better for society that such technology be out in the open so people can adjust their expectations about how soft those soft walls really are.


Move over Zuccarini

For those who don’t know, John Zuccarini is the most notorious of the so-called “typo-squatters,” people who register domain names that are common typos of popular websites and then flood the poor fat-fingering visitor with advertisements. Zuccarini had at least 5,500 copycat Web addresses, and the FTC estimated he was earning between $800,000 and $1 million annually from the mostly porn-based banner ads he displayed, in spite of numerous lawsuits against him for trademark violations. Zuccarini was arrested last week under the new Truth in Domain Names provision in the PROTECT Act of 2003, which makes it illegal to use misleading domain names to lure children to sexually explicit material.
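For a sense of how mechanical this business is, here is a small sketch of how one might enumerate typo candidates for a popular name (dropped letters, doubled letters, swapped adjacent letters). It is purely illustrative of the practice being described, not code from any actual squatter.

    # Illustrative only: enumerate single-character typos of a popular domain name.

    def typo_candidates(name: str, tld: str = ".com") -> list:
        base = name.lower()
        typos = set()
        for i in range(len(base)):
            typos.add(base[:i] + base[i + 1:])           # dropped letter
            typos.add(base[:i] + base[i] + base[i:])     # doubled letter
            if i + 1 < len(base):
                typos.add(base[:i] + base[i + 1] + base[i] + base[i + 2:])  # swapped pair
        typos.discard(base)
        return sorted(t + tld for t in typos)

    print(typo_candidates("example")[:8])

Register a few thousand of these, point them at ad-heavy pages, and you have Zuccarini’s business model.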

But to add insult to injury, no sooner has Zuccarini been arrested than he has been toppled as the typo-squatting king by a new upstart: the domain-name registry VeriSign. Trumping Zuccarini’s 5,500 copycat domain names, VeriSign has used their position as keeper of the keys to redirect ALL unregistered typos to their site. Try going to http://whattheheckisverisignsmoking.com/ and see for yourself. VeriSign has posted a white paper on their new move, which adds a top-level “wildcard” record that matches any domain-name lookup in the .com or .net domains that would otherwise fail. The change redirects any entry without DNS service to VeriSign’s own SiteFinder search engine, including reserved domain names such as a.com and domain names that are registered to other people but don’t have an active name server.

The main problem is that VeriSign is abusing their position as gatekeeper of the .com and .net domains, which are a public trust and not VeriSign’s commercial property. Network types have also been quick to point out other ways this move breaks things on the Net. Most important to everyday users, Web browsers are no longer able to gracefully handle bad links or mistyped URLs. Most browsers pop up a small dialog box for a bad URL, leaving the user on the old page; with the new changes, browsers can no longer provide this functionality. (Of course, for people who use versions of IE that redirect to Microsoft’s search page, the only difference will be a change of masters.) Furthermore, debugging scripts often use domain-not-found errors to check for routing problems; these are no longer returned. And finally, anti-spam software often uses domain-not-found errors to detect mail from invalid email addresses. (There was also concern that email sent to a typoed domain name would not bounce properly, but it seems this was either not the case or has been fixed.)
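Here is a minimal sketch of the kind of check that breaks. Any script or mail filter that treated “this name doesn’t resolve” as meaningful now has to special-case VeriSign’s SiteFinder address instead. The address shown is the one being reported for sitefinder.verisign.com; the domain in the example is, I hope, a typo nobody has registered.

    # Before the wildcard: an unregistered .com name simply failed to resolve.
    # After: it resolves to SiteFinder, so the old "does this domain exist?" test
    # has to be patched to ignore VeriSign's address.

    import socket

    SITEFINDER_IP = "64.94.110.11"   # address reported for sitefinder.verisign.com

    def domain_exists(domain: str) -> bool:
        try:
            ip = socket.gethostbyname(domain)
        except socket.gaierror:
            return False              # the pre-wildcard signal: name not found
        return ip != SITEFINDER_IP    # post-wildcard workaround

    print(domain_exists("surely-nobody-registered-this-typo.com"))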

As one might expect, the flameage has been fast and furious on this one. Of particular note is the discussion on the North American Network Operators Group mailing list, where members have already contributed several patches to DNS server software that would essentially ignore VeriSign’s wildcard lookup, restoring the Internet (or at least the portions that apply the patch) to the old way things operated. Many are also simply dropping the IP address for sitefinder.verisign.com (64.94.110.11) on the floor. If widely adopted such actions would essentially neutralize VeriSign’s change, but I expect the adoption levels will only be enough to be a statement of protest, not an actual revolution. However, Computer Business Review notes that the Internet Corporation for Assigned Names and Numbers (ICANN), which manages aspects of the DNS for the US government, has yet to weigh in on whether VeriSign’s changes are actually valid according to agreed-upon specs.

UPDATE: It seems VeriSign is only half-handling email correctly. What they’ve done is hook up their own special mail handler (which they call the Snubby Mail Rejector Daemon v1.3) that returns a fixed set of responses to SMTP transactions. Currently, VeriSign’s server accepts the envelope sender and recipient and then returns an error code. This means all misaddressed email relies on VeriSign’s server to bounce mail, and should that server not be available, bounces might be delayed by several days. It also means that the addresses on all typoed email are actually sent to VeriSign before being bounced, rather than stopped locally. Of course, I’m sure no VeriSign employee would be so criminal as to actually use this information for industrial espionage, nor would they change the Snubby Mail Daemon to actually collect the contents of said messages.
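For the curious, here is a rough sketch of what a sending mail server now goes through for a typoed address: the bogus domain resolves to VeriSign’s wildcard address, the sender speaks SMTP to Snubby, and the message is refused remotely instead of failing locally with a domain-not-found error. The exact reply codes Snubby returns are not something I have verified, and the domain below is a made-up example.

    # Sketch: probe what happens when mail is sent to a typoed domain under the
    # wildcard. The rejection (if any) now comes from VeriSign's server, not from
    # a local "no such domain" error.

    import smtplib

    def probe(typoed_domain: str, recipient: str) -> None:
        try:
            with smtplib.SMTP(typoed_domain, 25, timeout=10) as smtp:
                smtp.sendmail("sender@example.org", recipient,
                              "Subject: oops\r\n\r\nThis address was a typo.")
        except smtplib.SMTPRecipientsRefused as err:
            print("refused by remote server:", err.recipients)
        except (smtplib.SMTPException, OSError) as err:
            print("could not deliver:", err)

    probe("surely-nobody-registered-this-typo.com",
          "someone@surely-nobody-registered-this-typo.com")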

Friends of mine have also pointed out that ISPs and businesses cache DNS lookups on their local DNS servers. By answering every request as if it were for a legitimate domain, VeriSign is clogging those caches with entries for bad requests.


Flash Voids

Science fiction author Larry Niven once described a world where people would instantly teleport to places where something interesting was happening, causing what he called “Flash Crowds.” Now the LA Times reports that movie makers are seeing the opposite problem: instant communication means that if the audience doesn’t like your movie on opening-night Friday, by Saturday you’ll have yourself a flash void:

“Today, there is just no hope of recovering your marketing costs if the film doesn’t connect with the audience, because the reaction is so quick — you are dead immediately,” said Bob Berney, head of Newmarket Films, which distributed “Whale Rider,” a well-received, low-budget New Zealand picture that grossed $12.8 million and has endured through the summer. “Conversely, if the film is there, then the business is there.”

Two things are going on here. The first is just that word-of-mouth is getting faster, which we already knew. That means that the old strategy of hyping a bad movie so everyone sees it before the reviews come out won’t work much longer. The more important point, though, is that movie companies are seeing their carefully crafted ad campaigns overwhelmed by the buzz created by everyone’s texting, emailing and blogging. The shift in power cuts both ways: audience-pleasers like Bend It Like Beckham thrive almost entirely on buzz, while The Hulk was killed by buzz based partially on pirated pre-release copies, in spite of a huge marketing campaign.

Studios (and producers in general) will learn one of two lessons from this trend. Either they’ll decide they need to manipulate buzz by wooing mavens and carefully controlling how information is released, or, just possibly, they’ll follow the advice of Oren Aviv, Disney’s marketing chief: “Make a good movie and you win. Make a crappy movie and you lose.”


The ESP Game

What do ESP and Artificial Intelligence have in common? The ESP Game, a new game (and AI research project) recently discussed at IJCAI by CMU researcher Luis von Ahn.

Many AI researchers believe that the biggest barrier to creating human-like intelligence is that humans know millions of simple everyday facts. This ordinary knowledge ranges from knowing what a horse looks like to a simple fact like “people buy food in restaurants.” In the past, AI researchers would spend years painstakingly entering such information into huge databases, but now a new crop of researchers are leveraging the millions of Netizens who have nothing better to do than answer stupid questions all day to build these databases quickly and for free. One such site is the OpenMind Initiative (hosted by my own Ricoh Innovations), which is primarily being used by the MIT Media Lab to collect Common Sense Knowledge.
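As a tiny illustration of what “entering such information into huge databases” amounts to, here is a toy sketch of a common-sense fact store. OpenMind’s real schema and relation set are far richer; the relations and facts below are invented for illustration.

    # Toy common-sense knowledge base: volunteer-contributed (subject, relation, object)
    # assertions, with a trivial query function. Illustrative only.

    from collections import defaultdict

    facts = defaultdict(set)            # relation -> {(subject, object), ...}

    def assert_fact(subject: str, relation: str, obj: str) -> None:
        facts[relation].add((subject, obj))

    def query(relation: str, subject: str = None) -> list:
        return [pair for pair in facts[relation] if subject in (None, pair[0])]

    assert_fact("horse", "has-part", "mane")
    assert_fact("horse", "is-a", "animal")
    assert_fact("people", "typically", "buy food in restaurants")

    print(query("typically", "people"))   # [('people', 'buy food in restaurants')]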

The latest foray into this space is the ESP Game. When you log into the game you are paired randomly with another player on the Net. Both you and your partner are shown the same 15 random images from the Web, one at a time. Your job is to type in as many words as you can to describe the image, with the goal of matching a word your partner has entered. When you agree on a word, you both get points and move on to the next image. Usually I don’t care for Web-based games, but I have to admit this one is compelling.
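The agreement check at the heart of the game is simple enough to sketch. This is my reconstruction of the mechanic as described, not von Ahn’s code; in the real game the guesses arrive asynchronously over the network.

    # Sketch of one round: both players stream guesses for the same image, and the
    # round ends as soon as any word appears in both players' guess sets.

    def play_round(guesses_a, guesses_b):
        """Return the first word both players agree on, or None if they never match."""
        seen_a, seen_b = set(), set()
        for a, b in zip(guesses_a, guesses_b):
            seen_a.add(a.lower())
            seen_b.add(b.lower())
            common = seen_a & seen_b
            if common:
                return common.pop()     # becomes a label for the image
        return None

    print(play_round(["horse", "brown", "field"], ["animal", "pony", "horse"]))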

The real goal of the system is to generate a huge database of human-quality keywords for all the images on the Net. The task is huge: Google’s Image Search has already indexed over 425 million images by using the text that surrounds each image’s hyperlink. But the numbers are on von Ahn’s side: if only 5,000 people were to play the game throughout the day, all 425 million images would receive at least one label in a single month. Given that many game sites get over 10,000 players in a day, a few months is probably all von Ahn needs to fill out the whole database.
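A quick back-of-the-envelope check of that claim, using my own assumed labeling rate (the 15 seconds per agreed label is a guess on my part, not a number from von Ahn):

    # Rough arithmetic behind the "one label per image in about a month" claim.
    players = 5_000
    pairs = players // 2
    seconds_per_label = 15              # assumption: each pair agrees every 15 seconds
    labels_per_second = pairs / seconds_per_label
    images = 425_000_000
    days_needed = images / labels_per_second / 86_400
    print(round(days_needed))           # ~30 days of round-the-clock play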


Micropayments finally here?

I’m probably the last on the block to have heard about this, but Scott McCloud, the author of Understanding Comics, has finally come out with an online comic available for a micropayment of 25 cents. Or rather, he came out with it over a month ago, but I just found out about it today. As you might expect from Scott, he’s put the new medium (Macromedia Flash in this case) to good use without losing the fundamental comic-book feel. It was a quarter well-spent, especially since I could download the content to my computer and feel like I actually got something I can call “my copy.”

Payments are made through BitPass, a new startup out of Stanford that allows you to open an account with as little as three dollars and a credit card or PayPal account. The whole process was quick and painless, as is the payment process itself. There’s not too much content you can purchase through BitPass yet, but it looks like they’re building up a solid content base as they go through their beta-testing. Content providers seem to still be figuring out how the market will play out for different kinds of media: models range from the donation cups that are already common with PayPal, to purchase-and-download, to a “30 reads in 90 days” pay-per-view kind of model.
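Out of curiosity about how that last model might work mechanically, here is a sketch of how a content provider could track a “30 reads in 90 days” purchase. The field names and policy are mine; BitPass hasn’t published its data model, and the real accounting presumably lives on their servers anyway.

    # Sketch of a pay-per-view entitlement: 30 reads that expire 90 days after purchase.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class Entitlement:
        reads_left: int
        expires: datetime

        def consume(self) -> bool:
            """Spend one read if the purchase is still valid; return True on success."""
            if self.reads_left <= 0 or datetime.utcnow() > self.expires:
                return False
            self.reads_left -= 1
            return True

    def purchase_pay_per_view() -> Entitlement:
        return Entitlement(reads_left=30, expires=datetime.utcnow() + timedelta(days=90))

    ticket = purchase_pay_per_view()
    print(ticket.consume(), ticket.reads_left)   # True 29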

And now just in case I wasn’t quite the last person on the Internet to have heard about this, you know too.
