September 2003

TSA still pushing on CAPPS II

It seems the Transportation Security Administration is still determined to go forward with its test of the Computer Assisted Passenger Prescreening System (CAPPS II) using live data, even if it means forcing airlines to cooperate. Airlines are understandably hesitant: Delta Airlines withdrew its support after facing a passenger boycott, and JetBlue is now facing potential legal action for handing over passengers’ data to a defense contractor without their knowledge or consent.

For those who haven’t heard about CAPPS-II, the idea is to replace the current airline security system, where passengers’ names are checked against a no-fly list and people with “suspicious” itineraries like one-way flights are flagged for extra search. The TSA has released a disclosure under the Privacy Act of 1974, and Salon published a nice overview of the whole debate a few weeks ago. The ACLU also has a detailed analysis. Extremely briefly, the new system would work like this:

  1. Airlines ask for your Name, Address, Phone Number and Date of Birth.
  2. That info plus your itinerary goes to the CAPPS-II system, which
  3. sends it to commercial data services (e.g. the people who determine your credit rating) who
  4. send back a rating “indicating a confidence level in that passenger’s identity.”
  5. CAPPS-II sends all the info to the Black Ops Jedi-Mind-Reader computer that was provided by aliens back in 1947.
  6. The Black Ops computer comes back with a rating of whether you are or are not a terrorist, ax murderer, or likely to vote against the President.
  7. Based on both identity and threat ratings, the security guard either gives you a once-over, gives you a strip-search, or shoots you on sight (actually, just arrests you on sight).

Number 6 is the part that really scares people, because the TSA refuses to say anything about how the (classified) black-box computer system will identify terrorists. It could be based on racial profiling, political ideology, or the I Ching, and no one would ever know.

There’s a lot of speculation that the whole “airline security” story is just an excuse to collect travel information from everyday citizens for use in something akin to the Total Information Awareness project that was just killed (or at least mostly just killed) by Congress last week. I’m of two minds on that theory. On the one hand, I can’t believe the people at the TSA would really be so stupid as to think something like CAPPS-II would work for the stated purpose, so they must have ulterior motives. On the other hand, maybe I’m being too generous and they really are that stupid, or at least have been deceived by people a little too high on their own technology hype. Of course, there might be a bit of both going on here.

Too many details are left out of the TSA’s description of CAPPS-II to do a full evaluation, but even with what they’ve disclosed there are some huge technological issues:

  • The commercial database step (#4) is to verify that you are who you say you are. The classified black-box step (#6) is to verify that the person you say you are is not a terrorist. This means a terrorist only has to thwart one of the two checks: he either steals the identity of a mild-mannered war hero who is above suspicion, or he gives his real identity and makes sure he doesn’t raise any red flags himself. Since no biometric info (photo, fingerprints, or the like) is used, it would be trivial to steal someone else’s name, address, phone number and birth date and forge a driver’s license for the new identity.
  • Like all automatic classifiers, CAPPS-II needs to be tuned to trade off the number of false positives (innocent people arrested) against false negatives (terrorists let through with just a cursory search). Make it too sensitive and every third person will trigger a request for a full search (or worse, arrest), slowing down the security lines. Make it too lax and terrorists will get through without giving up their nail files. The trouble is that airports screen over a billion people a year, and yet even with our supposedly heightened risk these past two years far fewer than one in a billion of them is a terrorist planning to hijack a plane. Given those numbers, even if CAPPS-II correctly cleared an innocent person 99.99999% of the time, we would still arrest on the order of 100 people per year on false alarms alone (see the back-of-the-envelope sketch just after this list). And with a 99.99999% accuracy requirement on false positives, the odds are good that even Jedi-mindreading alien technology won’t have a great false-negative rate. This isn’t to say risk assessment has no effect — it may still give better odds than the system we use currently — but most of the benefit of our security screening comes from the added random risk of being caught that a terrorist faces. And that brings us to the third technical problem: intelligent opponents.
  • Standard classification is a pattern-recognition problem: a computer is given large amounts of data and expert knowledge and tries to predict what class a sample (in this case, a passenger) falls into. Classifying intelligent adversaries is different, though — it leaves the realm of normal pattern recognition and enters game theory. Once that happens it’s a constant arms (and intelligence) race: terrorists commit 9/11 with one-way tickets, so we double-search people with one-way tickets. So all but the stupidest terrorists now buy round-trip tickets, giving them an even better than random chance of getting through with just a once-over. Of course, we know that’s what they would do, so we should switch to letting one-way tickets through and double-searching round-trip tickets, at least until the terrorists catch on and change their plans. (Surely I cannot choose the wine in front of me.) There is a solution to all this madness: completely random selection of passengers for extra screening cannot be gamed in this way. Anything else and it becomes a question of who can figure out the other side’s profile faster, and given an intelligent foe who can probe the system to his heart’s content, I know who I’d bet on in that race.
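
To make the base-rate arithmetic concrete, here is a quick back-of-the-envelope sketch in Python. The passenger volume and accuracy figure are the ones quoted above; the one-hijacker-per-year count and the 99% detection rate are my own illustrative assumptions, not anything the TSA has published.

```python
# Base-rate arithmetic for a CAPPS-II-style screen.
# Passenger volume and specificity are the figures quoted above;
# the hijacker count and detection rate are illustrative assumptions.

passengers_per_year = 1_000_000_000   # "over a billion people a year"
actual_hijackers = 1                  # generously assume one per year
specificity = 0.9999999               # 99.99999%: chance an innocent passenger is cleared
sensitivity = 0.99                    # assume the black box catches 99% of real threats

false_positives = (passengers_per_year - actual_hijackers) * (1 - specificity)
true_positives = actual_hijackers * sensitivity

print(f"Innocent passengers flagged per year: {false_positives:,.0f}")
print(f"Actual hijackers flagged per year:    {true_positives:.2f}")
# Even at this absurdly optimistic accuracy, roughly 100 innocent people
# get flagged for every real threat the system could possibly catch.
```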

Given that Congress has just moved to delay CAPPS II until the General Accounting Office makes an assessment, I can only hope they’ll have similar questions and concerns. This system is either lunacy or a boondoggle to keep a database on the travel habits of every single American — neither is a comforting option.


Breaking the brick

Intel’s Personal Server project, led by Ubiquitous Computing long-timer Roy Want, got some press this past week after it was shown at the Intel Developer Forum. The prototype is a 400MHz computer with Bluetooth, battery and storage, all about the size of a deck of cards. No screen and no keyboard — I/O is handled by whatever devices happen to be around, be they the display and keyboard on your desk, the large-screen projector in the conference room or your portable touch-screen. The concept isn’t new; it’s something researchers in Ubiquitous Computing and Wearable Computing (including Roy) have been talking about for over a decade. But it is the right concept, and Moore’s Law is finally bringing it almost within reach.

There are three main reasons why this is the Right Thing(tm):

  • Your hands aren’t getting smaller. Handheld computers are now small enough that the limiting factor is screen and button size. Since our hands aren’t getting any smaller, we’re pretty much at the limit for everything-in-a-single-brick handhelds, at least for current applications. One way out of that box is the wearable computing approach, where interfaces are spread around the body like clothing or jewelry. Displays are shrunk by embedding them directly into the glasses, tiny microphones are used for speech recognition, micro cameras and accelerometers are used for gesture and context recognition, and specialty input devices such as medical monitors are used instead of more generic ones. One of the big difficulties with wearables is all the wires leading from the CPU/Disk/Battery unit to the I/O devices, and in fact this problem was a big motivating force behind the IEEE 802.15 short-range wireless standards, which include Bluetooth. Wireless isn’t a complete solution (you still have to worry about powering your I/O devices), but it’s a start.

    The other way to break the hand-size limit is the UbiComp approach: use whatever interfaces are in your surrounding area. When I’m at my desk I want to use my nice flat-panel display and ergonomic keyboard, not my black-and-white cellphone LCD. When I give a presentation I want to use the conference hall’s projector. I don’t need a keyboard at all, just enough input to launch my Keynote presentation and change slides. Roy naturally leans toward this second approach, but as I’ve argued before, the UbiComp and Wearables approaches work well together; there’s no need to choose.

  • Always the right tool for the job. Another advantage of breaking the CPU away from the I/O is that it gets around an inherent conflict in interface design. On the one hand, designers will tell you that you always want the interface to fit the task. Use a hammer to drive nails and a screwdriver to turn screws, and all that. But in the mobile world you don’t want to carry around your cellphone, PDA, MP3 player, two-way pager, camera and laptop everywhere you go. When it comes to mobility, most people choose to carry a Swiss Army knife instead of a full toolchest, even though the one-size-fits-all interface won’t ever be quite right for the task. (That’s why I still carry my Danger Hiptop, which is great for text messaging but feels like I’m holding a bar of soap to my ear when I use it for voice.) When you break the brick, as it were, you can use one CPU, main battery, network connection and storage for all your devices. Then just bring whatever interfaces you need for the tasks you expect that day, and use interfaces in your environment when they’re available.

  • Thin clients don’t grow with Moore’s Law. An alternative to carrying your personal CPU with you at all times is to run a thin client that has just enough smarts to talk to a server over wireless; the server then does all the heavy lifting. The trouble with this approach is that thin clients rely mainly on two resources, wireless bandwidth and the rather significant battery power needed to reach the nearest cell tower, and those are exactly the two resources that are growing most slowly. Since 1990, the RAM in mobile computers has improved a hundred-fold, CPUs 400-fold, and disk space a whopping 1200-fold. In that same time, long-haul wireless speed has improved only 20-fold and battery efficiency only three-fold. (Thanks to Thad Starner for those numbers; a rough annualized breakdown follows this list.) And, of course, thin clients don’t work at all when you’re in a wireless dead zone.
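
As a rough sanity check on those factors, here is a small sketch that converts them into compound annual growth rates. The improvement factors are the ones quoted above (via Thad Starner); the 13-year span from 1990 to 2003 is my assumption.

```python
# Turn the 1990-2003 improvement factors quoted above into compound
# annual growth rates. The 13-year span is an assumption.

factors = {
    "RAM": 100,
    "CPU": 400,
    "Disk": 1200,
    "Long-haul wireless": 20,
    "Battery": 3,
}
years = 13  # 1990 -> 2003

for resource, factor in factors.items():
    cagr = factor ** (1 / years) - 1
    print(f"{resource:>18}: {factor:5d}x total, ~{cagr:.0%} per year")

# Disk and CPU compound at roughly 60-70% a year and RAM at ~40%,
# while wireless (~25%) and batteries (~9%) barely move -- which is
# why a thin client is betting on the wrong curves.
```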

It’s not clear when Intel (or Apple or Sony, for that matter) will finally come out with a successful Personal Server-style product. The hardware is just one piece of the puzzle; resource discovery, communication standards, good interface design and of course the all-important “killer app” still have to come together. But in spite of the hurdles yet to come, this is the right approach, and I’m glad to see Intel is giving it the support it deserves.


ISWC registration deadline this Friday

As a reminder for those who are interested in wearable computers, the early registration deadline for the 7th IEEE International Symposium on Wearable Computers is this Friday, September 26th. You can check out the advance program here.

I’ll be co-teaching the Introduction to Wearable Computers tutorial with Thad Starner, and am also tutorials chair and on the program committee.


Privacy and soft walls

I’ve been reading up on IBM’s recently announced WebFountain project. The system, which has been dubbed Google on steroids, spiders the Net and other databases and applies various data-mining, natural-language processing and pattern recognition techniques to the data. The current system uses 500 parallel-processing Linux boxes, all accessing about half a terabyte of storage in the basement of the IBM Almaden Research Center. IBM’s infrastructure allows clients to customize their searches and standing queries using a library that will “tokenize the data to identify people and companies, and discover patterns, trends and relationships in the data.” The technology is being offered as a service, and is already being sold through a partnership with Factiva. It is being marketed mainly for trend identification and for “reputation management,” where a company watches chat rooms, bulletin boards, newspapers and other sources to see what people are saying about it.
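
IBM hasn’t published the WebFountain interface itself, so purely as an illustration of the kind of standing “reputation management” query being described, here is a toy sketch: scan a pile of scraped text for mentions of a company and tally crude positive and negative signals. Every name and word list in it is a made-up stand-in, not anything from WebFountain or Factiva.

```python
# Toy illustration of a "reputation management" standing query -- NOT the
# WebFountain or Factiva API, just the general idea: scan documents for
# mentions of a company and tally crude positive/negative signals.

from collections import Counter

POSITIVE = {"great", "reliable", "love", "recommend", "innovative"}
NEGATIVE = {"broken", "scam", "lawsuit", "terrible", "avoid"}

def reputation_report(company, documents):
    """Count positive/negative words in documents that mention the company."""
    tally = Counter()
    for doc in documents:
        words = [w.strip(".,!?").lower() for w in doc.split()]
        if company.lower() not in words:
            continue
        tally["mentions"] += 1
        tally["positive"] += sum(w in POSITIVE for w in words)
        tally["negative"] += sum(w in NEGATIVE for w in words)
    return tally

# Example run over a few fake forum posts:
posts = [
    "I love my new Acme widget, very reliable.",
    "Acme support is terrible, avoid them.",
    "Unrelated post about the weather.",
]
print(reputation_report("Acme", posts))
# e.g. Counter({'mentions': 2, 'positive': 2, 'negative': 2})
```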

I’m quite interested in the technology, and even have a friend from grad school who has been working on it (Hi Dan!). But the thing that got me thinking was a comment about privacy by Robert Morris, the director of IBM Almaden. As reported in the San Jose Mercury News:

The technology could potentially raise privacy concerns if companies turned its power on analyzing individuals. But Hart and Morris said both companies would protect user privacy.

“Anything we mine is public data on the Web,” Morris said.

But it isn’t yet clear how the company would restrict users trying to use the tool to invade someone’s privacy.

The quote is in line with the comment by The Economist: “No doubt some people will say it sounds a little intrusive. But all WebFountain does is reveal information that is hidden in plain sight.”

Unfortunately, the idea that anything findable on the Net is “public” is a dodge — “public data” is a simplification of what is a much more complex set of social rules. Counter-intuitive as it may sound, privacy rules are not primarily about restricting information access to particular people. The primary purpose of privacy rules is to keep people from using the information in ways that would harm the person who is keeping it secret. This is why companies wink at sharing trade secrets with your wife or husband but are adamant about not revealing them to potential competitors, unless they’ve first signed a non-disclosure agreement. The NDA explicitly restricts harmful uses of the data, making the privacy rules unnecessary.

The idea that privacy is a restriction on power was brought home to me a few years ago by an old fraternity brother of mine. Back when he was still finishing his PhD at MIT he got a call from an MIT campus policeman, who somewhat sheepishly explained that he was calling on behalf of an irate member of the Massachusetts Maritime Police Department. Apparently this maritime policeman had been surfing the Web and had come across a picture from my friend’s undergraduate fraternity days, showing him firing water balloons from a giant funnelator. The campus policeman said he was calling to inform my friend that slingshots are illegal in Massachusetts, and that he wanted to make sure that the device had been destroyed.

So here was a picture that was clearly “public” in that it had been published for anyone to see. The intended audience was anyone who was interested in our fraternity’s annual Water War, plus anyone else who might get a chuckle out of it. You could even say the intended audience was everyone in the world except for particularly humor-impaired members of the Massachusetts Maritime Police Department. If webservers had provided such vaguely-defined access rules, we certainly would have used them.

A more realistic idea of public vs. private spaces is one of intended use, with restrictions on access acting as a proxy for limiting that use. When I write an article for an academic journal or even a blog entry, I expect to be called upon to defend my position. When I write a LiveJournal post I expect much less criticism, and I expect that the people who read my postings will be the sort who generally agree with me and will be accepting of whatever personal thoughts I write. Both are published on the Web, both are “public,” but different social rules are implied by the relative ease of access, ease of discovery, and the different communities that are most likely to come across my posts. Difficult access provides a kind of “soft wall” that restricts access to certain communities, and the social rules of those communities provide a soft wall that limits how my information will be used. I expect most LiveJournal users would feel violated if information from their posts wound up being used in targeted marketing literature, even though most posts aren’t password-protected.

I don’t intend to slam WebFountain with this argument — WebFountain is just the latest technology that is moving soft walls around by changing the ground rules, and it was only a matter of time before such a service was offered. As a coworker of mine has pointed out, it is almost a certainty that the NSA has already developed similar technology. (The argument goes: (a) the NSA would have to be really incompetent not to have done this, and (b) the NSA is not incompetent.) Given that likelihood, it seems better for society that such technology be out in the open, so people can adjust their expectations about how soft those soft walls really are.


Different copynorms for HTTP and P2P?

Ernest Miller has an interesting post over at LawMeme about why there is moral outcry about shutting down music filesharing on peer-to-peer systems, but not about sharing via the Web. (Props to Freedom to Tinker for the link).

Yet there hasn’t been much outcry over the fact that the RIAA has and continues to shut down hundreds of noncommercial websites offering copyrighted MP3s for download without authorization. The RIAA has even threatened lawsuits and gotten college students expelled over their refusal to remove MP3s from college websites. There has been concern (often expressed on LawMeme) about abuse of the DMCA’s notice and takedown procedures, but not much outcry when direct copyright infringement has been shown. Why is there no outraged defense of http filesharing?

I venture that there seems to be a different set of copynorms for the practice of filesharing via P2P and http. Certainly some defend filesharing via both P2P and http, but others strongly defend P2P with nary a word in favor of http filesharing. Although I have no proof, I suspect that the public’s attitude toward filesharing would differ based on the protocol at issue. Would 12-year old Brianna Lahara think it was okay for her to put all her music on a website for the world to copy? Why don’t we see people uploading files to their websites more often? Why aren’t they more upset when told they can’t upload to their website then when they make files available via a filesharing program?

I believe that the difference is that filesharing by http is seen clearly as a public act, while P2P seems more like a private act [Can’t stay away from that Public/Private distinction, huh? – Ed.]. If I were to stand on a street corner handing out CD-Rs to strangers (even were I doing so with no possibility of remuneration of any sort), most people would not consider that proper. If the RIAA were to sue me for such an act, would there be such an outcry over the injustice of it all? Yet, if I handed a CD-R to a friend, most would defend it. The difference is that one is private and the other public.

I think we’re seeing three effects here:

  • I agree that P2P feels more private than the Web, and so people feel the law should butt out. I would argue that the main reason for this is that P2P software is easier to set up than a webserver, so “normal” people think of P2P as private and HTTP as “that place where I pay someone else to host content for me,” if indeed they have a personal webserver at all.

    If everyone had their own website, I expect you would see similar copynorms for both P2P and HTTP. As completely anecdotal evidence, my more techie friends who have their own websites also share music via password-protected HTTP. It would be interesting to see if this distinction between the copynorms for the two protocols exists on college campuses where every student is given his or her own personal webspace.

  • P2P is newer, and so copynorms for P2P were set at a time when the Internet was used by normal, everyday people. Remember that when Mosaic and Netscape were first introduced, people were still panicking about evil teen-aged hackers who could kill people from their bedroom computers and the horrible discovery that (gasp!) there was pornography on the Net. By the time mainstream America was online, all the music-sharing websites had already been shut down and sharing had moved to Napster, so “normal” people haven’t experienced that kind of web-based sharing first-hand.
  • There’s the idea of reciprocation with P2P, where you are generally sharing with people who share with you. I think it’s this aspect that most counteracts the feeling that you’re handing out CD-Rs to strangers and brings it closer to the idea that you’re sharing mix-tapes with your 5,000 closest new friends.


Volokh-Solum debate on IP

I meant to blog this earlier, but Ed Felten beat me to it. Eugene Volokh (The Volokh Conspiracy blog) and Lawrence Solum (Legal Theory Blog) are having an interesting debate on the theory behind treating intellectual property like tangible property, hinging mostly on the level of property rights necessary to offer incentives to produce intellectual and tangible goods. The postings so far:


Move over Zuccarini

For those who don’t know, John Zuccarini is the most notorious of the so-called “typo-squatters,” people who register domain names that are common typos of popular websites and then flood the poor fat-fingering visitor with advertisements. Zuccarini had at least 5,500 copycat Web addresses, and the FTC estimated he was earning between $800,000 and $1 million annually from the mostly porn-based banner ads he displayed, in spite of numerous lawsuits against him for trademark violations. Zuccarini was arrested last week under the new Truth in Domain Names provision in the PROTECT Act of 2003, which makes it illegal to use misleading domain names to lure children to sexually explicit material.

But to add insult to injury, no sooner has Zuccarini been arrested than he has been toppled as the typo-squatting king by a new upstart: the domain-name registry VeriSign. Trumping Zuccarini’s 5,500 copycat domain names, VeriSign has used their position as keeper of the keys to redirect ALL unregistered typos to their own site. Try going to http://whattheheckisverisignsmoking.com/ and see for yourself. VeriSign has posted a white paper on the move, which adds a top-level “wildcard” entry that catches every request for a nonexistent name in the .com and .net domains. The change redirects any entry without DNS service to VeriSign’s own SiteFinder search engine, including reserved domain names such as a.com and domain names that are registered to other people but don’t have an active name server.

The main problem is that VeriSign is abusing their position as gatekeeper of the .com and .net domains, which are a public trust and not VeriSign’s commercial property. Network types have also been quick to point out other ways this move breaks things on the Net. Most important to everyday users, Web browsers are no longer able to handle bad links or mistyped URLs gracefully. Most browsers pop up a small dialog box for a bad URL, leaving the user on the old page; with the new changes, browsers can no longer provide this functionality. (Of course, for people who use versions of IE that redirect to Microsoft’s search page, the only difference will be a change of masters.) Furthermore, debugging scripts often use domain-not-found errors to check for routing problems; these errors are no longer returned. And finally, anti-spam software often uses domain-not-found errors to detect mail from invalid email addresses. (There was also concern that email sent to a typoed domain name would not bounce properly, but it seems this was either not the case or has been fixed.)

As one might expect, the flameage has been fast and furious on this one. Of particular note is the discussion on the North American Network Operators Group mailing list, where members have already contributed several patches to DNS resolver software that essentially ignore VeriSign’s wildcard, restoring the Internet (or at least the portions that apply the patch) to the old way things operated. Many are also simply dropping the IP address for sitefinder.verisign.com (64.94.110.11) on the floor. If widely adopted, such actions would essentially neutralize VeriSign’s change, but I expect adoption will only be enough to make a statement of protest, not to mount an actual revolution. However, Computer Business Review notes that the Internet Corporation for Assigned Names and Numbers (ICANN), which manages aspects of the DNS for the US government, has yet to weigh in on whether VeriSign’s changes are actually valid according to the agreed-upon specs.
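
For the curious, here is a rough sketch of the kind of check people were running to see whether their resolver was being caught by the wildcard: look up a random, almost certainly unregistered .com name and compare the answer with the SiteFinder address mentioned above. The random-label trick is just my illustration, not one of the NANOG patches.

```python
# Check whether .com lookups are being swallowed by VeriSign's wildcard:
# a random, unregistered name should simply fail to resolve; if it comes
# back as 64.94.110.11, the query was redirected to SiteFinder.
# Illustrative sketch only, not one of the NANOG patches.

import random
import socket
import string

SITEFINDER_IP = "64.94.110.11"

def wildcard_in_effect():
    junk = "".join(random.choices(string.ascii_lowercase, k=20)) + ".com"
    try:
        address = socket.gethostbyname(junk)
    except socket.gaierror:
        return False  # normal behavior: the name simply does not exist
    return address == SITEFINDER_IP

if __name__ == "__main__":
    if wildcard_in_effect():
        print("Lookups for bogus .com names are landing on SiteFinder")
    else:
        print("Nonexistent .com names fail normally")
```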

UPDATE: It seems VeriSign is only half-handling email correctly. What they’ve done is hook up their own special mail handler (which they call the Snubby Mail Rejector Daemon v1.3) that returns a fixed set of responses to SMTP transactions. Currently, VeriSign reads the envelope sender and recipient and then returns an error code. This means all misaddressed email relies on VeriSign’s server to bounce mail, and should that server not be available, bounces might be delayed by several days. It also means that the sender and recipient addresses of typoed email are actually sent to VeriSign before being bounced, rather than being stopped locally. Of course, I’m sure no VeriSign employee would be so criminal as to actually use this information for industrial espionage, nor would he change the Snubby Mail Daemon to actually collect the contents of said messages.
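
Again purely as illustration, and assuming the 2003-era server at 64.94.110.11 is reachable and still behaves as described, here is the sort of probe people were using to watch Snubby in action: open an SMTP session, hand it a made-up envelope, and see which canned response codes come back. The example addresses are placeholders.

```python
# Poke the SiteFinder mail host (the "Snubby Mail Rejector") to see how it
# handles a misaddressed message: it takes the envelope sender/recipient,
# then answers with a canned error rather than accepting mail.
# Illustrative sketch; assumes the 2003-era server at 64.94.110.11.

import smtplib

def probe_snubby(host="64.94.110.11"):
    server = smtplib.SMTP(timeout=15)
    code, banner = server.connect(host, 25)
    print("Banner:   ", code, banner.decode(errors="replace"))
    print("HELO:     ", server.helo("probe.example.org"))
    print("MAIL FROM:", server.mail("someone@example.org"))       # placeholder sender
    print("RCPT TO:  ", server.rcpt("user@sometypoeddomain.com"))  # placeholder typo
    try:
        server.quit()
    except smtplib.SMTPServerDisconnected:
        pass  # the server may have already closed the connection

if __name__ == "__main__":
    probe_snubby()
```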

Friends of mine have also pointed out that ISPs and businesses cache DNS lookups on their local DNS servers. By answering every request as though the domain existed, VeriSign is clogging those caches with entries for bogus names.



Homefront Confidential

Since March 2002, the Reporters Committee for Freedom of the Press has released a semiannual report on how the War on Terrorism is affecting “access to information and the public’s right to know.” The fourth edition of this report, Homefront Confidential, has just been released.

The 89-page report ranks threats to a free press using the same color codes as the Department of Homeland Security’s alert system:

  • Red (severe): Access to terrorism & immigration proceedings; restrictions on Freedom of Information Act requests
  • Orange (high): Covering the war; military tribunals
  • Yellow (elevated): The USA PATRIOT Act and beyond; the reporter’s privilege; the rollback in state openness
  • Blue (guarded): Domestic coverage

Homefront Confidential is a stark contrast to the kind of “information wants to be free” rhetoric I usually find (and, I’ll admit, often speak) here in Silicon Valley. In my techno-optimistic world, information naturally flows straight from bloggers in the field to a public eager for news, with no gatekeepers in between. There is some truth to this notion, and blogs have been credited with breaking the Monica Lewinsky story and keeping Trent Lott’s racist remarks about Strom Thurmond in the public eye, among many other successes.

But while blogs and other Internet reporting can both accelerate a story’s propagation and occasionally magnify the voice of an eyewitness or whistleblower, most important news starts in the hands of a few key decision-makers. Without cooperation from the Justice Department, information about closed terrorism and immigration proceedings (including the detainees’ names) is simply not available. Without access to battlefields and military officers, details about our progress in the war are not available. The government also has extensive powers to keep information bottled up, from criminal prosecution of whistleblowers under the Homeland Security Act, to legal restrictions on commercial satellite-imaging companies, to the use of subpoenas to force reporters to reveal their sources. These are all effective restrictions on the flow of information, and none of them is deterred by the blogger’s nimble RSS feed.

Information wants to be free in this networked age, but the information that is most important for keeping our government in check is still behind several gatekeepers. In deciding the laws and policies of our land it’s important to remember the converse of this techie creed: Yes, information wants to be free, but freedom also requires information.



More RIAA Blowback

The blowback from the RIAA’s lawsuits continues. First, recording artists like the Grateful Dead’s Bob Weir, Chuck D of Public Enemy, DJ Moby, Steve Miller and Huey Lewis are all speaking out against the lawsuits, and more importantly against the myth that the RIAA is out to protect the artists. The plight of artists is the only source of sympathy the RIAA has, so this kind of talk hurts a lot. Then, in a turnabout-is-fair-play move, a California man has filed a lawsuit against the RIAA, alleging that its Clean Slate program is fraudulent because it offers an amnesty the RIAA has no right to grant. Finally, the EFF has started a petition to Congress that protests the RIAA’s lawsuits, calls for “the development of a legal alternative that preserves file-sharing technology while ensuring that artists are fairly compensated,” and asks that the EFF be included in upcoming hearings on the subject. The petition has already received over 12,000 signatures in its first two days.

Meanwhile, RIAA president Cary Sherman is invoking that old standby devil, child pornography, in Congress. A pedophile could send “an instant message to the unwitting young person who downloads an Olsen twins or Pokemon file from the pedophile’s share folder on Kazaa,” Sherman said.

What strikes me is how differently this battle is playing out in the press than the CyberPorn and Kevin Mitnick battles did back in 1995. Remember back then, when the word “hacker” was spoken with the same frightened reverence with which we speak the word “terrorist” now? For better or worse, our society has realized in this last decade that there are worse crimes than porn on the Net, worse violations of our civil liberties than export restrictions on our cryptography, and more dangerous people than our own children. We’re wiser now, and that’s good, but I find I still long for the days when I wore my Cypherpunk Criminal t-shirt as a political protest, not out of nostalgia.



Slate’s Guide to the Patriot Act

With tomorrow’s anniversary of 9/11, John Ashcroft wrapping up his national tour promoting the USA Patriot Act, and President Bush asking for more authority under what is being called the first of several Patriot-II laws, I highly recommend people go read Dahlia Lithwick and Julia Turner’s four-part series, A Guide to the Patriot Act, published in Slate. Lithwick and Turner manage to cut through the spin-doctoring on both sides of the debate, presenting the more controversial parts of the Act without shilling for either side, while still offering their own analysis and thoughtful interpretation. It’s a breath of fresh air, cutting a path between punditry and objective-to-a-fault reporting-without-analysis:

How bad is Patriot, really? Hard to tell. The ACLU, in a new fact sheet challenging the DOJ Web site, wants you to believe that the act threatens our most basic civil liberties. Ashcroft and his roadies call the changes in law “modest and incremental.” Since almost nobody has read the legislation, much of what we think we know about it comes third-hand and spun. Both advocates and opponents are guilty of fear-mongering and distortion in some instances.

The truth of the matter seems to be that while some portions of the Patriot Act are truly radical, others are benign. Parts of the act formalize and regulate government conduct that was unregulated — and potentially even more terrifying — before. Other parts clearly expand government powers and allow it to spy on ordinary citizens in new ways. But what is most frightening about the act is exacerbated by the lack of government candor in describing its implementation. FOIA requests have been half-answered, queries from the judiciary committee are blown off or classified. In the absence of any knowledge about how the act has been used, one isn’t wrong to fear it in the abstract — to worry about its potential, since that is all we can know.

Ashcroft and his supporters on the stump cite a July 31 Fox News/Opinion Dynamics Poll showing that 91 percent of registered voters say the act had not affected their civil liberties. One follow-up question for them: How could they know?

If you haven’t read all 300-plus pages of the legislation by now, you should.

Since I haven’t read all 300-plus pages of the legislation myself, I won’t tell you to do so. But I will tell you to go and read Lithwick and Turner’s guide.

