Secret (software) Agent Man

Saturday’s Washington Post article on the NSA’s domestic eavesdropping program has a short aside I find rather chilling (emphasis mine):

Even with 38,000 employees, the NSA is incapable of translating, transcribing and analyzing more than a fraction of the conversations it intercepts. For years, including in public testimony by Hayden, the agency has acknowledged use of automated equipment to analyze the contents and guide analysts to the most important ones.

According to one knowledgeable source, the warrantless program also uses those methods. That is significant to the public debate because this kind of filtering intrudes into content, and machines “listen” to more Americans than humans do. NSA rules since the late 1970s, when machine filtering was far less capable, have said “acquisition” of content does not take place until a conversation is intercepted and processed “into an intelligible form intended for human inspection.”

When I was in the Software Agents Group at MIT in the late ’90s, we had lots of discussion about whether people would be legally responsible for the actions of automated software programs (agents) they use. If I tell eBay’s software to bid up to a given price, can I be held to that agreement even though the “agent” did the bidding and not me? If I knowingly write and unleash an intelligent virus, am I responsible for the damage it causes? The answer to these questions has to be yes if responsibility is to mean anything in our increasingly automated society, and the questions would be completely ludicrous were it not for the complexity of what software can now do without our direct intervention. Imagine the murder defense “I didn’t kill those people, my gun did!” And yet, this is the logic being used by the NSA when they claim eavesdropping only counts if the interception is shown to a human. “I didn’t spy on innocent Americans, my software did it!”
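
To make the eBay example concrete, here is a minimal sketch of the kind of proxy-bidding agent I mean (the names and numbers are hypothetical, not eBay’s actual API): the human authorizes a price ceiling exactly once, and the software places every subsequent bid on its own.

from typing import Optional

class ProxyBidAgent:
    """Hypothetical proxy-bidding agent: the human authorizes a ceiling once,
    and the agent places every subsequent bid without further human input."""

    def __init__(self, max_price: float, increment: float = 1.00):
        self.max_price = max_price  # the only decision the human actually makes
        self.increment = increment  # minimum step needed to outbid a rival

    def respond_to(self, current_high_bid: float) -> Optional[float]:
        """Return the agent's counter-bid, or None once the ceiling is reached."""
        next_bid = current_high_bid + self.increment
        return next_bid if next_bid <= self.max_price else None


# The human authorizes once...
agent = ProxyBidAgent(max_price=50.00)
# ...and from then on the software acts alone. Who is bound by these bids?
print(agent.respond_to(42.00))   # 43.0
print(agent.respond_to(49.50))   # None -- ceiling reached, the agent stops

The point of the sketch is that every bid after the first line is placed without any human in the loop, yet no one doubts the human who set the ceiling is the one bound by the result.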

There are times when being watched by electronic eyes is preferable to being watched by humans. For example, I trust Google’s automated system to use my email for nothing but generating relevant advertisements more than I would trust the company if it had humans reading and tagging every email by hand. In the NSA’s case, however, the software is doing exactly what the agency itself is prohibited from doing by both statute and the Fourth Amendment: looking for illegal activity by trawling through mountains of private domestic communications without probable cause. Even if the software only produced a human-readable summary or a ranked list of suspicious people, that output would be tainted just as surely as if an NSA analyst had produced it.
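
As a rough illustration of why the ranked list is itself the problem, here is a sketch of machine filtering in principle (the keywords, weights, and function names are entirely invented, not anything the NSA is known to use): to surface the top handful of “suspicious” conversations to an analyst, the software necessarily reads the content of every single one.

# Entirely hypothetical keyword filter -- the terms and weights are invented.
# The structural point: producing even a short ranked list for a human
# analyst requires the machine to inspect the content of every message.
SUSPICIOUS_TERMS = {"term_a": 3, "term_b": 5}

def score(message: str) -> int:
    """Crude content score: sum the weights of any flagged terms present."""
    words = set(message.lower().split())
    return sum(weight for term, weight in SUSPICIOUS_TERMS.items() if term in words)

def rank_for_analyst(messages: list[str], top_n: int = 10) -> list[str]:
    """Every message is scored (i.e. read) by the machine; only the
    highest-scoring few are ever surfaced to a human."""
    return sorted(messages, key=score, reverse=True)[:top_n]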

(Thanks to Nelson both for the link and the reminder to donate to the ACLU.)