Big Brother

Privacy Backlash

Jane Black’s Privacy Matters column in BusinessWeek this week takes a look at the privacy backlash against the MATRIX multistate database and similar programs:

BIRTH OF BIG BROTHER. There’s no doubt that MATRIX raises privacy red flags, though after an extensive briefing by the Florida Law Enforcement Dept., which is spearheading the project, I believe that it’s little more than an efficient way to query multiple databases.

The real furor over MATRIX demonstrates something much more important — and surprising: Privacy advocates have gained a lot of ground in the two years since September 11. And the pendulum is swinging back in their favor.

“The MATRIX is not whirring away at night to create a list of suspects that is placed on my desk every morning,” says Zadra [chief of investigations at the Florida Department of Law Enforcement]. “All it does is dynamically combine commercially available public data with state-owned data [such as driver’s license information, sexual-predator records, and Corrections Dept. information] when queried. I can’t imagine any citizen getting angry that we’re using the best tools available to efficiently and effectively solve crimes.”

Nobody has a problem with law enforcement using the best tools to solve crimes. Everybody has a problem with law enforcement using those tools to harass innocent citizens and suppress free speech. It’s because of this potential for abuse that we have things like the Fourth Amendment and laws preventing the CIA from spying on US citizens. The trouble with all these combined public/commercial database plans like MATRIX, CAPPS II and TIA is that commercial databases have no such protections — companies can and will do just about anything to gather information about us, and it’s all perfectly legal. Why should I care whether it’s the CIA or MasterCard that is telling the government what breakfast cereal I eat?

Trusted Computing

I’ve finally gotten around to reading up on Trusted Computing (a process that, ironically enough, was interrupted by my being rootkitted a couple of weeks ago). I’d heard some pretty unsettling things about trusted computing, but now that I’ve done some digging… well, it’s still pretty disturbing.

Trusted Computing (TC) is one of several names for a set of changes to server, PC, PDA and mobile phone operating systems, software and hardware that will make these computers “more trustworthy.” Microsoft has one version, known as Palladium or Next Generation Secure Computing Base (NGSCB), and an alliance of Intel, Microsoft, IBM, HP and AMD known as the Trusted Computing Group has a slightly different one called either trusted computing, trustworthy computing, or “safer computing.” Some parts of Trusted Computing are already in Windows XP, Windows Server 2003, and in the hardware of the IBM ThinkPad, and many more will be in Microsoft’s new Longhorn version of Windows, scheduled for 2006.

The EFF has a nice introduction to trusted computing systems, written by Seth Schoen, and Ross Anderson has a more detailed and critical analysis. A brief summary of the summary is that a trusted computer includes tamper-resistant hardware that can cryptographically verify the identity and integrity of the programs you run, verify that identity to online “policy servers,” encrypt keyboard and screen communications, and keep an unauthorized program from reading another program’s memory or saved data. The center of this is the so-called “Fritz” chip, named after Senator Fritz Hollings of South Carolina, who tried to make digital rights management a mandatory part of all consumer electronics. (He failed and is retiring in 2004, but I’ve no doubt there will be attempts to pass similar laws in the future.)
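
To make the integrity-checking part of this concrete, here’s a minimal sketch of the “extend” operation at the heart of the TCG design (my own illustration, with made-up component names): each piece of software is hashed into a register inside the tamper-resistant chip before it runs, so the final register value commits to the entire boot chain and can be reported to a remote policy server.

    import hashlib

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        # TCG-style "extend": fold a new measurement into the register.
        # Registers can only be extended, never set directly, so the final
        # value depends on every component measured since boot, in order.
        return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

    pcr = bytes(20)  # registers start zeroed at power-on
    for component in [b"BIOS", b"bootloader", b"os-kernel", b"media-player"]:
        pcr = extend(pcr, component)

    # A policy server that knows the approved versions can recompute this
    # value; a modified component (say, a media player with its DRM checks
    # patched out) produces a completely different register.
    print(pcr.hex())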

When most people think about computer security they think about virus detectors, firewalls and encrypted network traffic — the computer analogs to burglar alarms, padlocks and opaque envelopes. The Fritz chip is a different kind of security, more like the “political officer” that the Soviet Union would put on every submarine to make sure the captain stayed loyal. The whole purpose of the Fritz chip is to make sure that you, the computer user, can’t do anything that goes against the policies set by the people who wrote your software and/or provide you with web services.

There are many people who would like such a feature. Content providers such as Disney could verify that your version of Windows Media Player hasn’t had digital rights management disabled before sending you a decryption key for a movie. Your employer could prevent email from being printed or read on non-company machines, and could automatically delete it from your inbox after six months. Governments could prevent leaks by doing the same with sensitive documents. Microsoft and AOL could prevent third-party instant-message software from working with the MSN or AIM networks, or lock in customers by making it difficult to switch to other products without losing access to years’ worth of saved documents. Game designers could keep you from cheating in networked games. Distributed-computing and mobile-agent programs could be sure their code isn’t being subverted or leaked when running on third-party systems. Software designers could verify that a program is registered and only running on a single computer (as Windows XP does already), and could even prevent all legitimate trusted computers from reading files encrypted by pirated software. Trusted computing is all about their trust, and the person they don’t trust is you.

End users do get a little bit of “trust” out of trusted computing, but not as much as you might think. TC won’t stop hackers from gaining access to a system, but it could be used to detect rootkits that have been installed. TC also won’t prevent viruses, worms or Trojans, but it can prevent them from accessing data or keys owned by other applications. That means a program you download from the Internet won’t be able to email itself to everyone in your (encrypted) address book. However, TC won’t stop worms that exploit security holes in MS Outlook’s scripting language from accessing your address book, because Outlook already has that permission. In spite of what the Trusted Computing Group’s backgrounder and Microsoft’s Palladium overview imply, TC won’t help with identity theft or computer thieves physically accessing your data any more than current public key cryptography and encrypted file systems do.
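
Here’s a toy model (again mine, purely illustrative, with the hardware’s key handling reduced to a plain hash) of the “sealed storage” mechanism behind that protection: the chip only derives the decryption key for a piece of data when the requesting program’s measured hash matches the one the data was sealed to.

    import hashlib

    def derive_sealed_key(chip_secret: bytes, app_measurement: bytes) -> bytes:
        # Real hardware does this inside the tamper-resistant chip; a plain
        # hash here just illustrates binding the key to a program identity.
        return hashlib.sha256(chip_secret + app_measurement).digest()

    chip_secret = b"burned in at manufacture, never leaves the chip"

    # The address book was sealed to the measured hash of the mail client.
    mail_client = hashlib.sha1(b"mail-client-v1.0").digest()
    addressbook_key = derive_sealed_key(chip_secret, mail_client)

    # A worm-infected copy of the client measures differently, so the chip
    # hands it a different (useless) key.
    infected = hashlib.sha1(b"mail-client-v1.0+worm").digest()
    assert derive_sealed_key(chip_secret, infected) != addressbook_key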

As long as you agree with the goals of the people who write your software and provide your web services, TC isn’t a bad deal. After all, most people don’t want others to cheat at online games and can see the value of company email deletion policies. The same can be said of the political officer on Soviet submarines — they were great as long as you believed in what the Communist Party stood for. And unlike Soviet submarine commanders, you won’t get shot for refusing to use TC on your computer. Your programs will still run as always; you just won’t be able to read encrypted email from your customers, watch downloaded movies, or purchase items through your TC-enabled cellphone. Some have claimed that this is how it should be, and that the market will try out all sorts of agreements and those that are acceptable to both consumers and service providers will survive. That sounds nice in theory, but doesn’t work when the market is dominated by a few players (e.g. Microsoft for software, wireless providers for mobile services, and the content cartel for music and movies) or when there are network externalities that make it easy to lock in a customer base (e.g. email, web, web services and electronic commerce). What choice will you have in word processors if the only way you can read memos from your boss is by using MS Word? What choice will you have in stereo systems when the five big record companies announce that new recordings will only be released in a secure-media format?

Of course, even monopolies respond to strong enough consumer push-back, but as Ross Anderson points out there are subtle tricks software and service providers can pull to lock in unwary consumers. For example, a law firm might discover that migrating years of encrypted documents from Microsoft to OpenOffice requires sign-off for the change by every client that has ever sent an encrypted email attachment. That’s a nasty barrel to be over, and the firm would probably grudgingly pay Microsoft large continuing license fees to avoid that pain. These kinds of barriers to change can be subtle, and you can bet they won’t be a part of the original sales pitch from Microsoft. But then what do you expect when you invite a political officer into your computer?

TSA still pushing on CAPPS II

It seems the Transportation Security Administration is still determined to go forward with its test of the Computer Assisted Passenger Prescreening System (CAPPS II) with live data, even if it means forcing airlines to cooperate. Airlines are understandably hesitant: Delta Air Lines withdrew support after facing a passenger boycott, and JetBlue is now facing potential legal action for handing over passengers’ data to a defense contractor without their knowledge or consent.

For those who haven’t heard about CAPPS II, the idea is to replace the current airline security system, where passengers’ names are checked against a no-fly list and people with “suspicious” itineraries like one-way flights are flagged for extra search. The TSA has released a disclosure under the Privacy Act of 1974, and Salon published a nice overview of the whole debate a few weeks ago. The ACLU also has a detailed analysis. Extremely briefly, the new system would work like this:

  1. Airlines ask for your name, address, phone number, and date of birth.
  2. That info plus your itinerary goes to the CAPPS II system, which
  3. sends it to commercial data services (e.g. the people who determine your credit rating), who
  4. send back a rating “indicating a confidence level in that passenger’s identity.”
  5. CAPPS II sends all the info to the Black Ops Jedi-Mind-Reader computer that was provided by aliens back in 1947.
  6. The Black Ops computer comes back with a rating of whether you are or are not a terrorist, ax murderer, or likely to vote against the President.
  7. Based on both identity and threat ratings, the security guard either gives you a once-over or a strip search, or shoots you on sight (actually, just arrests you on sight).

Number 6 is the part that really scares people, because the TSA refuses to say anything about how the (classified) black-box computer system will identify terrorists. It could be based on racial profiling, political ideology, or the I Ching, and no one would ever know.

There’s a lot of speculation that the whole “airline security” story is just an excuse to collect travel information from everyday citizens for use in something akin to the Total Information Awareness project that was just killed (or at least mostly just killed) by Congress last week. I’m of two minds on that theory. On the one hand, I can’t believe the people at the TSA would really be so stupid as to think something like CAPPS II would work for the stated purpose, so they must have ulterior motives. On the other hand, maybe I’m being too generous and they really are that stupid, or at least have been deceived by people a little too high on their own technology hype. Of course, there might be a bit of both going on here.

Too many details are left out of the TSA’s description of CAPPS II to do a full evaluation, but even with what it has disclosed there are some huge technological issues:

  • The commercial database step (#4) is to verify that you are who you say you are. The classified black-box step (#6) is to verify that the person you say you are is not a terrorist. This means a terrorist only has to thwart one of the two checks: he either steals the identity of a mild-mannered war hero who is above suspicion, or he gives his real identity and makes sure he doesn’t raise any red flags himself. Since no biometric info (photo, fingerprints, or the like) is used, it would be trivial to steal someone else’s name, address, phone number and birth date and forge a driver’s license for the new identity.
  • Like all automatic classifiers, CAPPS II needs to be tuned to trade off the number of false positives (innocent people arrested) vs. false negatives (terrorists let through with just a cursory search). Make it too sensitive and every third person will trigger a request for a full search (or worse, arrest), slowing down the security lines. Make it too lax and terrorists will get through without giving up their nail files. The trouble is that airports screen over a billion people a year, and yet even with our supposed heightened risk these past two years far fewer than one in a billion is a terrorist who plans to hijack a plane. Given those numbers, even if our CAPPS II system correctly identified an innocent person 99.99999% of the time, we would still arrest about 100 people per year due to false information (the arithmetic is sketched in code after this list). And with a 99.99999% accuracy requirement on false positives, the odds are good that even Jedi-mindreading alien technology won’t have a great false-negative rate. This isn’t to say risk assessment has no effect — it may still give better odds than the system we use currently — but most of the benefit from our security screening comes from the added random risk of being caught that a terrorist faces. And that brings us to the third technical problem: intelligent opponents.
  • Standard classification is a pattern recognition problem. A computer is given large amounts of data and expert knowledge, and tries to predict what class a sample (in this case, a passenger) falls into. Classification of intelligent adversaries is different though — it leaves the realm of normal pattern recognition and enters into game theory. Once this happens it’s a constant arms (and intelligence) race: the 9/11 terrorists bought one-way tickets, so we double-search people with one-way tickets. So all but the stupidest of terrorists now buy round-trip tickets, thus giving them even better than random chance to get through with just a once-over. Of course, we know that’s what they would do, so we should switch to letting one-way tickets through and double-search round-trip tickets, at least until the terrorists catch on and change their plans. (Surely I cannot choose the wine in front of me.) There is a solution to all this madness: completely random selection of passengers for extra screening cannot be gamed in this way. Anything else and it becomes a question of who can figure out the other side’s profile faster, and given an intelligent foe who can probe the system to his heart’s content, I know who I’d bet on in that race.
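
The base-rate arithmetic from the second point, as a quick Python sketch (the numbers are the illustrative ones from above, not real TSA figures):

    screenings_per_year = 1_000_000_000   # roughly a billion passengers screened
    false_positive_rate = 1 - 0.9999999   # i.e. 99.99999% accuracy on innocents
    terrorists_per_year = 1               # generously assume one actual hijacker

    false_alarms = screenings_per_year * false_positive_rate
    print(f"innocent people flagged per year: {false_alarms:.0f}")  # ~100

    # Even if every actual terrorist is caught, the chance that any given
    # alarm is a real terrorist stays tiny:
    precision = terrorists_per_year / (terrorists_per_year + false_alarms)
    print(f"chance a flagged person is a terrorist: {precision:.1%}")  # ~1.0%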

Given that Congress has just moved to delay CAPPS II until the General Accounting Office makes an assessment, I can only hope they’ll have similar questions and concerns. This system is either lunacy or a boondoggle to keep a database on the travel habits of every single American — neither is a comforting option.

Homefront Confidential

Since March 2002, the Reporters Committee for Freedom of the Press has released a semiannual report on how the War on Terrorism is affecting “access to information and the public’s right to know.” The fourth edition of this report, Homefront Confidential, has just been released.

The 89-page report ranks threats to a free press using the same color code as the Department of Homeland Security:

  • Red (severe): Access to terrorism & immigration proceedings; restrictions on Freedom of Information Act requests
  • Orange (high): Covering the war; military tribunals
  • Yellow (elevated): The USA PATRIOT Act and beyond; the reporter’s privilege; the rollback in state openness
  • Blue (guarded): Domestic coverage

Homefront Confidential is a stark contrast to the kind of “information wants to be free” rhetoric I so often hear (and, I’ll admit, often speak) here in Silicon Valley. In my techno-optimistic world, information naturally flows straight from bloggers in the field to a public eager for news, with no gatekeepers between us. There is some truth to this notion, and blogs have been credited with breaking the Monica Lewinsky story and keeping Trent Lott’s racist remarks about Strom Thurmond in the public eye, as well as many other successes.

But while blogs and other Internet reporting can both accelerate a story’s propagation and occasionally magnify the voice of an eyewitness or whistleblower, most important news starts in the hands of a few key decision-makers. Without cooperation from the Justice Department, information about closed terrorism and immigration proceedings (including the detainees’ names) is simply not available. Without access to battlefields and military officers, details about our progress in the war are not available. The government also has extensive powers to keep information bottled up, from criminal prosecution of whistleblowers under the Homeland Security Act, to legal restrictions on commercial satellite imaging companies, to use of subpoenas to force reporters to reveal their sources. These are all effective restrictions on the flow of information that aren’t deterred by the blogger’s nimble RSS feed.

Information wants to be free in this networked age, but the information that is most important for keeping our government in check is still behind several gatekeepers. In deciding the laws and policies of our land it’s important to remember the converse of this techie creed: Yes, information wants to be free, but freedom also requires information.

Slate’s Guide to the Patriot Act

With tomorrow’s anniversary of 9/11, John Ashcroft wrapping up his national tour promoting the USA Patriot Act, and President Bush asking for more authority under what is being called the first of several Patriot-II laws, I highly recommend people go read Dahlia Lithwick and Julia Turner’s four-part series, A Guide to the Patriot Act, published in Slate. Lithwick and Turner manage to cut through the spin-doctoring on both sides of the debate, presenting the more controversial parts of the Act without shilling for one side or the other, but while still presenting their own analysis and thoughtful interpretation. It’s a breath of fresh air, cutting a path between punditry and objective-to-a-fault reporting-without-analysis:

How bad is Patriot, really? Hard to tell. The ACLU, in a new fact sheet challenging the DOJ Web site, wants you to believe that the act threatens our most basic civil liberties. Ashcroft and his roadies call the changes in law “modest and incremental.” Since almost nobody has read the legislation, much of what we think we know about it comes third-hand and spun. Both advocates and opponents are guilty of fear-mongering and distortion in some instances.

The truth of the matter seems to be that while some portions of the Patriot Act are truly radical, others are benign. Parts of the act formalize and regulate government conduct that was unregulated — and potentially even more terrifying — before. Other parts clearly expand government powers and allow it to spy on ordinary citizens in new ways. But what is most frightening about the act is exacerbated by the lack of government candor in describing its implementation. FOIA requests have been half-answered, queries from the judiciary committee are blown off or classified. In the absence of any knowledge about how the act has been used, one isn’t wrong to fear it in the abstract — to worry about its potential, since that is all we can know.

Ashcroft and his supporters on the stump cite a July 31 Fox News/Opinion Dynamics Poll showing that 91 percent of registered voters say the act had not affected their civil liberties. One follow-up question for them: How could they know?

If you haven’t read all 300-plus pages of the legislation by now, you should.

Since I haven’t read all 300-plus pages of the legislation myself, I won’t tell you to do so. But I will tell you to go and read Lithwick and Turner’s guide.

Webmaster to start one-year sentence

Sherman Austin headed to jail on Wednesday to start a one-year prison sentence for hosting plans for the manufacture of explosives on his anarchist website, RaiseTheFist.com. The plans were not written by Austin, but Austin provided free hosting for anarchists and political protesters. In January of 2002, the FBI raided the home where Austin lived with his parents and confiscated all his computers and backup disks, including the server for RaiseTheFist. Agents also found components to make a Molotov cocktail. Austin was 18 years old at the time. (Austin details the entire story in an interview with CounterPunch.)

A few days later Austin went to the World Economic Forum protest in New York, where he was arrested and held without bail. He was eventually charged with possession of an unregistered firearm (the Molotov cocktail components), and with violating the controversial 1997 federal law that makes it illegal to distribute information about the manufacture of explosives “with the intent that the… information be used for, or in furtherance of, an activity that constitutes a Federal crime of violence.” The law, championed by Sen. Dianne Feinstein (D-Calif.), raised serious First Amendment issues when it was proposed. According to a CNET interview with Austin shortly before he went to prison, he is the first person to be convicted under the law.

In a statement on his web site, Austin said he originally planned to contest the charges. He decided to plead guilty to the information dissemination crime in return for the dropping of the firearms charge, because “after my lawyer consulted the USPO working on the case, she found out that a ‘terrorism enhancement’ is applicable to my charge, which could get me an additional 20 years.” According to the LA Times, Austin was offered a plea bargain of four months in prison followed by four months in a halfway house, but U.S. District Judge Stephen V. Wilson rejected the plea and sentenced Austin to a full year in prison. After completing his term, he will be placed on three years’ probation and will be barred from associating with any groups that espouse violence to achieve political, economic or social change. He will also need permission from the probation office to operate a computer. The EFF has protested that the sentence is too severe for the alleged crime.

Several things bother me about this case.

First are the obvious First Amendment issues with the anti-information law under which he was convicted. Two things are necessary for this law to apply. The first is the distribution of information about explosives, which is clearly pure speech that is protected under the First Amendment. The second is the intent that the information be used for a violent crime, which is inherently difficult to prove or to disprove. It seems quite reasonable that Austin was all bluster and no action, an angry 18-year-old boy who liked to play political terrorist on his website and in his back yard but was not violent in real life. It is telling that the only previous charges brought against Austin were for refusal to disperse, conspiracy to commit a refusal to disperse, unlawful assembly, and disorderly conduct for blocking pedestrian traffic. In other words, for committing peaceful civil disobedience.

It’s not surprising that the FBI thought they were dealing with a dangerous terrorist psychopath when they went to RaiseTheFist.com and saw pictures of George W. Bush with a gun sight on his head, or read posts saying “Yeah, motherfucker, I’m a terrorist to the United States Government. I’m a terrorist to capitalism.” and “We don’t gather weapons, plan extreme operation, and risk our lives for nothing. This is real.” But that’s just speech, not action. It’s like the old Saturday Night Live running gag where someone says “Well, it’s not like I said I was going to kill the president…” and gets jumped by Secret Service agents who come out of nowhere. It’s also not clear to me whether Austin was the author of any of these more violent postings, or whether he merely hosted them.

The second bothersome point is that this case smacks of selective enforcement. Information on how to make bombs is everywhere, from libraries to web sites to bookstores. This includes the infamous Anarchist Cookbook, published in 1971, whose author admits that the “central idea to the book was that violence is an acceptable means to bring about political change.” And yet, the FBI has yet to raid Amazon.com to stop them from distributing this information. Of course, Amazon was not the author of the book, and it would be unfair to assume that Amazon intends violence just because they sell a violent book. Likewise, Austin did not write the explosives guide, and it is unfair to assume he intends any violence just because he offers web hosting for a violent page. Clearly, the crackdown was at least in part due to RaiseTheFist’s message, and the fact that this message was in alignment with the growing anti-globalization movement.

The final point is most troubling: Austin was never able to argue his case. Plea bargains are meant to be an incentive to surrender when guilt is obvious. In cases like Austin’s, where the plea is for a four-month sentence and the risk is 20+ years, there is huge incentive for a suspect to plead guilty even when he knows he is innocent. Sadly, this is often the rule rather than the exception, especially for the poor. It is only because this case involves mediapathic issues such as First-Amendment rights, the Internet, and terrorism that we have heard about it at all, unlike the hundreds of cases every day where innocent men and women cop a plea to go free based on time served rather than risk further jail time to clear their names.

Austin’s lawyer describes his client as “a very peaceful person” who got carried away “in a very heated political environment.” A clinical psychologist who specializes in threat assessments wrote for the defense that Austin “does not appear to have seriously considered the ramifications” of his actions “and would have been horrified had someone been injured.” Let us hope that his year in prison, and his apparent abuse by the system, does not turn this peaceful-but-angry young man into the very terror the FBI fears.

Face Recognition gets the boot in Tampa

Tampa Police have decided to scrap their much-criticized face-recognition system, admitting that during a two-year trial the system did not correctly identify a single suspect. Similar face-recognition systems are still in use in Pinellas County, Florida, and Virginia Beach, Virginia, though neither of those systems has ever resulted in an arrest either.

Face-recognition technology evokes images of automatic cameras scanning bustling crowds, automatically picking out terrorists from the millions of faces that pass by. One day the technology may be able to deliver on this, but currently it is still necessary for a human controller to zoom in on individual faces using a joystick. A 2001 St. Petersburg Times article describes a Tampa police officer scanning the weekend crowd in Ybor City, checking 457 faces out of some 125,000 tourists and revelers in an evening.

Let’s do some quick math. The police are only scanning 457 out of 125,000 people on a given night, or about 0.4%. That means even if ten known bad guys from the watch-list are in the crowd, there’s still only a 4% chance any one of them will be looked at by the system. That number drops to 0.4% if there’s only one bad guy in the crowd that night.
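
Spelled out as a quick sketch (the counts are the ones from the Times article):

    faces_scanned = 457
    crowd_size = 125_000

    p_scan = faces_scanned / crowd_size
    print(f"chance any one person gets scanned: {p_scan:.1%}")  # ~0.4%

    # With ten watch-listed people in the crowd, the chance that at least
    # one of them is among the faces checked:
    p_at_least_one = 1 - (1 - p_scan) ** 10
    print(f"chance at least one of ten is scanned: {p_at_least_one:.0%}")  # ~4%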

Then there’s the chance that the face recognition system doesn’t sound an alarm. A recently published evaluation of the Identix system used in Tampa gives a base hit rate of 77% (that is, 77% of people on a watch-list were correctly identified). However, that was with a watch-list of only 25 faces. The hit rate goes down as watch-list size goes up, down to 56% with a watch-list of 3000 faces. According to the Associated Press, the Tampa database had over 24,000 mug shots on its watch-list. Then there’s the problem that mug shots were taken indoors and the surveillance cameras were outdoors. According to the evaluation, mixing indoors and outdoors can reduce hit rates by around 40%. (The 40% reduction was seen on identity verification tasks; the watch-list task is actually more difficult.) Finally, these results all assume a 1% false-positive rate, which would result in five false alarms per night. Given all these (well-known) problems, it’s amazing anyone ever thought this was a good idea.
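
Chain those factors together and almost nothing is left. A rough sketch using the figures above; note that applying the 40% indoor/outdoor penalty to the watch-list hit rate is my extrapolation, since the evaluation measured that penalty only on verification tasks:

    p_scanned = 457 / 125_000   # chance the operator zooms in on you at all
    hit_rate = 0.56             # hit rate with a 3,000-face watch-list
    outdoor_penalty = 0.40      # indoor/outdoor mismatch (verification task)

    p_caught = p_scanned * hit_rate * (1 - outdoor_penalty)
    print(f"chance a watch-listed person is caught per night: {p_caught:.2%}")

    # Meanwhile the assumed 1% false-positive rate means the system cries
    # wolf about five times a night:
    print(f"expected false alarms per night: {457 * 0.01:.0f}")

And remember that Tampa’s watch-list held 24,000 faces, not 3,000, so the real hit rate was presumably lower still.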

There are several reasons I hope this failure dissuades similar attempts by other law-enforcement communities. First, as a 2001 ACLU report on the Tampa system points out, our resources could be better spent, and face recognition can give us a false sense of security. Second, a face-recognition system in a public space gives the impression that everyone is a suspect, regardless of whether the system actually works. And finally, face-recognition technology continues to improve. It won’t happen in the next few years, but at some point the technology is going to reach the point where recognition is completely automated, highly accurate, and robust. When that happens, it will be possible to track large numbers of people as they go about their daily lives, and even track people retroactively from recorded video. Hopefully by then our society will be so inoculated against such privacy violations that such uses will be inconceivable.

I got the horse right here…

The story sounds like something out of The Onion, or maybe a dystopian science fiction short story. As reported widely in the news yesterday, the Pentagon has been planning an electronic futures market for analysis of foreign affairs. The idea is to create a market where people can anonymously bet on things like whether the US will reduce troop deployment in Iraq by year’s end, or whether Arafat will be assassinated. The current odds on a bet, so the argument goes, best reflect the actual probability given everything the collected thinkers know. Policy-makers could then use the probability to know where to focus their attention.
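
For the curious, one standard mechanism for running such a market is a logarithmic market scoring rule, in which the instantaneous price of each contract doubles as the market’s probability estimate. Whether the Pentagon’s market would have used exactly this rule is my assumption; the sketch below just shows how trades move the estimate.

    import math

    def lmsr_prices(quantities, b=100.0):
        # Prices for a logarithmic market scoring rule market-maker.
        # quantities[i] is the net number of shares sold of outcome i; b
        # controls liquidity (how fast prices move). The prices sum to 1,
        # so they can be read directly as probabilities.
        exps = [math.exp(q / b) for q in quantities]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical two-outcome contract: "US cuts Iraq troop levels by Dec 31."
    shares = [0.0, 0.0]         # [yes, no] shares outstanding
    print(lmsr_prices(shares))  # opens at [0.5, 0.5]

    shares[0] += 150            # traders buy 150 "yes" shares
    print(lmsr_prices(shares))  # "yes" price rises to about 0.82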

By today the firestorm had swept Washington, and the Pentagon announced the project had been canceled. Apparently congressmen were not completely aware of what had been planned, despite the general plan having been up on DARPA’s web site for many months and a mention of the project in a March New Yorker article.

I can’t help but feel sympathy for Robin Hanson, the George Mason University economics professor who has been spearheading the project. Critics were quick to describe the project as a marketplace where terrorists and mercenaries could make money by betting some horrific event would happen and then causing it. But as Hanson describes in interviews and on his Web site, the idea is more that professors, armchair analysts, and frequent travelers from all walks of life would combine their on-the-ground expertise to come to conclusions even the most expert intelligence worker in Washington wouldn’t be able to reach. But interested as I am in the concept, I just can’t see it working for a number of reasons:

  • First off, critics are right in thinking there’s something morally repugnant about the whole plan. The US government should not be hosting a web site dedicated to graveyard gambling, regardless of whether it would actually encourage terrorists to make money from their exploits. (Personally I don’t believe there’s any chance a halfway-competent terrorist would bet on his own success on a web site run by DARPA, regardless of its assurances that bettors will remain anonymous.) In fact, the whole plan bears a striking resemblance to the Assassination Politics plan devised by the Cypherpunk-anarchist Jim Bell. The plan describes how communities of individuals could put a price on a government official’s head simply by donating a prize to whoever can predict the exact date of that person’s death. (Bell is currently facing a 10-year prison term for harassment of a federal officer.)
  • There has been a lot of talk about how the attack on the World Trade Center could have been avoided if all the information that was distributed around the country could have been brought together in one place. That may be true, but an ideas futures market wouldn’t have helped. What we needed was more analysis and communication; the marketplace is too abstract and mediated to allow anyone to put the pieces together. An ideas market won’t bring together the CIA agent studying Al Qaeda and the Florida flight-school instructor, because neither would have enough pieces of the puzzle to realize what they were looking at. Marketplaces are additive; intelligence requires synthesis.
  • Even if the market were a reasonable risk-estimation system, it’s not clear what the government could do with that information. As Bloomberg.com points out, the market would be quite noisy, similar to the stock market. As we’ve seen from the constant rainbow of alerts we’ve gone through over the past two years, unspecified and uncorroborated threats aren’t all that useful when you’re trying to set up a defense.

Update: According to futures trading on Tradesports.com, John Poindexter’s chances of keeping his job after this uproar are around 70%.
