Why are secret URLs “security through obscurity”?

Yesterday’s InformationWeek had an article about how cellphone pictures sent via MMS (Multimedia Messaging Service) by customers of U.K. mobile network operator O2 are winding up available via Google search pages. The article, titled Picture Leak: O2’s Security Through Obscurity Can’t Stop Google, explains that O2 provides a fallback for customers who try to send photos from their cellphone to cellphones that don’t support MMS: it posts the photos online and then sends the recipient a URL to the picture via email. For security, each URL includes a 16-hex-digit (64-bit) message ID. The “problem”, as the article breathlessly explains it, is that some of these URLs are getting indexed by Google, and can be discovered by performing a search with the inurl: search operator.

The whole thing is much ado about nothing — further investigation shows that the reason a handful of these “secret” URLs wound up in Google is that people were using MMS to post photos directly to their public photoblogs. While it’s not the case here, I do have to wonder at the charge that secret URLs are somehow just security through obscurity, a phrase that usually refers to a system that is secure only as long as its design or implementation details remain secret. That’s not the case here — even a modest 16-hex-digit ID is about as difficult to guess as a random ten-character password containing numbers and upper- and lowercase letters. What can be a risk is that people and programs are used to URLs being public knowledge, and so sometimes they aren’t safeguarded as well as one might safeguard, say, a bankcard PIN. On the plus side, unguessable URLs can easily be made public when it’s appropriate, for example when posting to your photo blog from your O2 cellphone. Now if only we could selectively prevent clueless reporters trying to write scare stories from finding them…
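To put numbers on that comparison (my own back-of-the-envelope arithmetic, not from the article), you can compare the entropy of the two identifiers directly:

```python
import math

# Entropy of a random 16-hex-digit message ID vs. a random 10-character
# password drawn from upper/lowercase letters and digits (62 symbols).
url_id_bits = 16 * math.log2(16)    # 16 hex digits -> 64 bits
password_bits = 10 * math.log2(62)  # ~59.5 bits

print(f"URL ID: {url_id_bits:.1f} bits, password: {password_bits:.1f} bits")
```

By this measure the 64-bit message ID is actually a few bits stronger than the ten-character password.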


Microsoft builds tool to steal data off computers

From the “what could possibly go wrong” department, Microsoft just announced that they’ve developed a simple one-button tool to break into a computer and suck down an entire hard drive’s contents onto a thumb drive:

COFEE, a preconfigured, automated tool, fits on a USB thumb drive. Prior to COFEE the equivalent work would require a computer forensics expert to enter 150 complex commands manually through a process that could take three to four hours. With COFEE, you simply plug into a running computer to extract the data with the click of one button — completing the work in about 20 minutes.

It basically bundles a whole bunch of existing password guessers and other cracking software into a single one-touch device — and since it works on the live computer it can bypass encrypted disks like Vista’s BitLocker so long as the user is still logged in.

Apparently Microsoft isn’t concerned that they’re building tools that can turn any two-bit felon into a highly-skilled data thief, or that they’re providing products that exploit their very own security holes. After all, they’re only supplying these devices to law enforcement — so what could possibly go wrong?


Calculating the Birthday Paradox

For the last couple years I’ve been working on a program that generates a large number of essentially random ID strings (it’s actually a replicated document storage system that uses the hash of a file’s content as its ID, but the details don’t matter). Since IDs are independently generated there will always be some chance that two different files will just happen to have the same ID assigned — so how long do I need to make my ID string before that probability is small enough that I can sleep at night?

This is essentially the Birthday Paradox, just with bigger numbers and in a different form. For those who haven’t heard of it, the canonical form of the Birthday Paradox asks what the probability is that, out of a random group of 23 people, at least two of them share the same birthday. (The “paradoxical” part is that the answer is just over 50%, much higher than most people’s intuition would suggest.) My question just turns that around and asks “how many random N-bit IDs have to be generated before there is a one-in-a-million chance of any two of them being identical?”

Rejiggering the formulas given in Wikipedia, here’s the approximation:

n ≅ (−2 · S · ln(1 − P))^(1/2)


  • n is the number of entities required to reach the given probability
  • P is the probability desired
  • S is the size of the set of all possible entities

For example, the number of people you would need for a 50% chance that at least two of them have the same birthday is (−2 · 365 · ln(1/2))^(1/2), or between 22 and 23 people. As a more practical example, you would only need to generate 77,163 PGP keys before having a 50% chance of a collision between their 8-character short-form fingerprints.

As for my one-in-a-million threshold, it takes roughly 2^((N − 19)/2) randomly generated N-bit strings to reach a one-in-a-million chance of a collision, which means I would need to generate around 2^70 of my 160-bit ID strings before there would be a one-in-a-million chance of a collision. I think I can sleep at night.
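The approximation is easy to check numerically. This sketch (my own, just coding up the formula above) reproduces all three examples:

```python
import math

def collision_count(space_size: float, prob: float) -> float:
    """Approximate number of uniformly random draws from a space of
    `space_size` values needed before the chance of at least one
    collision reaches `prob` (the birthday-bound approximation)."""
    return math.sqrt(-2 * space_size * math.log(1 - prob))

print(collision_count(365, 0.5))                 # between 22 and 23 people
print(collision_count(2**32, 0.5))               # ~77,163 short fingerprints
print(math.log2(collision_count(2**160, 1e-6)))  # ~70.5, i.e. about 2^70 IDs
```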


Russian hacker-zine analysis of Skype anti-reverse-engineering measures

Russian hacker magazine Xakep Online has posted an interesting analysis of all the measures Skype takes to avoid reverse-engineering of its protocol and code. If you can’t read the original Russian you can get the gist (as I did) from the Google translation. A few highlighted techniques:

  • The binary is fully encrypted, and is decrypted only as it’s dynamically loaded into memory.
  • Almost all static function calls are eliminated; critical procedures are called via dynamically-obtained pointers computed by obfuscated code, which makes figuring out what’s going on in a debugger difficult.
  • It recognizes the Windows kernel-mode debugger SoftICE and refuses to run when it sees it.
  • It measures how long it takes to execute certain sections of code to try to detect whether it’s being run in emulation. (I’m not sure how this would work, given the range of CPUs it has to run on…)
  • It checksums the resulting decrypted code.
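The timing trick is simple enough to sketch. This toy version (my own illustration in Python; Skype’s real checks are in native code and presumably far more subtle) just times a known workload and flags anything suspiciously slow:

```python
import time

def looks_instrumented(workload, threshold_s: float = 0.5) -> bool:
    """Time a workload that normally finishes almost instantly.
    Single-stepping in a debugger or running under slow emulation
    inflates wall-clock time by orders of magnitude; the threshold
    here is a guess and would need calibration per machine."""
    start = time.perf_counter()
    workload()
    return (time.perf_counter() - start) > threshold_s

def small_workload() -> int:
    total = 0
    for i in range(100_000):  # microseconds of work on any modern CPU
        total += i
    return total

print(looks_instrumented(small_workload))  # False under normal execution
```

This also illustrates the objection above: a threshold generous enough for the slowest supported CPU may never trip on a fast one.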

The article also goes into all the ways Skype routes around firewalls by looking for open ports, and suggests that along with encrypted traffic and peer-to-peer distribution it’s the perfect tool to deliver a worm, trojan or virus payload under the radar of virus checkers and firewalls… if only you can find a way to get the target client to run your code. Essentially you’re left with just one level of protection, namely Skype itself. I’m not convinced this is any more problematic than the Swiss-cheese that is Windows security already, but it’s something to think about as we go forward.

(Thanks to Sergey for the link and summary of the Russian!)


Look hard enough, and you’ll always find two identical fingerprints

Today’s LATimes reports that Brandon Mayfield just won his $2 million lawsuit against the FBI for his wrongful detention in 2004. Brandon is the Oregon lawyer who the FBI pinched in connection to the 2004 Madrid train bombings because a partial fingerprint found in Madrid was a “close enough” match to his own. One quote from the article:

Michael Cherry, president of Cherry Biometrics, an identification-technology company, said misidentification problems could grow worse as the U.S. and other governments add more fingerprints to their databases.

The problem is emphasized in the March report from the Office of the Inspector General on the case, which reads much like a Risks Digest post and has a lot of take-home lessons. The initial problem was that the FBI threw an extremely wide net by running the fingerprints found in Madrid through the Integrated Automated Fingerprint Identification System (IAFIS), a database that contains the fingerprints of more than 47 million people who have either been arrested or submitted fingerprints for background checks. With so many people in the database the system always spits out a number of (innocent) near-matches, so the FBI then goes over the results. The trouble is that in this case (a) Mayfield’s fingerprints were especially close, and (b) the FBI examiner got stuck in a pattern of circular reasoning, where once he found many points of similarity between the prints he began to “find” additional features that weren’t really in the lifted print but were suggested by features in Mayfield’s own prints.

People tend to forget that even extremely rare events are almost guaranteed to happen if you check often enough. For example, even if there were only a one-in-a-billion chance of a given innocent person being an extremely close match for a given fingerprint, checking against all 47 million people in IAFIS leaves about a 5% chance of getting such a false positive for each fingerprint run through the system. If we were to double the size of the database, that would rise to almost 10%. This kind of problem is inevitable when looking for extremely rare events, and applies even more broadly to fuzzy-matching systems like the TSA’s no-fly list and Total Information Awareness (in all its newly renamed forms), which try to identify terrorists from their credit card purchases, where they’ve traveled or how they spell their name.
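The arithmetic is worth running for yourself (the one-in-a-billion match rate is the hypothetical from above, not a measured figure):

```python
p_match = 1e-9        # hypothetical chance a given innocent person matches
db_size = 47_000_000  # people with prints in IAFIS

# Probability of at least one false positive per fingerprint searched:
p_hit = 1 - (1 - p_match) ** db_size
p_hit_doubled = 1 - (1 - p_match) ** (2 * db_size)

print(f"{p_hit:.1%}")          # about 4.6%
print(f"{p_hit_doubled:.1%}")  # about 9% with a doubled database
```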


Snooping search terms from the browser cache with JavaScript

SPI Dynamics has an interesting proof-of-concept page that can snoop your browser’s cache of visited URLs and figure out whether you’ve searched for specific terms on Google. Or rather, I assume it can on some people’s computers… for some reason it always returns “yup, you searched for that” on both Firefox and Safari on my Mac.

Regardless, it’s an interesting attack. It’s based on the fact that your browser changes the color of links you’ve already visited, and a site can determine which style the browser has applied to a link using JavaScript and CSS, thus determining whether a particular URL has been visited or not. The basic concept was demonstrated by Jeremiah Grossman’s history extractor at Black Hat this year. SPI Dynamics takes it one step further by probing for the URLs corresponding to a set of query terms on the popular search sites. They can’t just get a list of all your searches, but they could in theory trawl for a list of interesting search terms, be they names of competing products, porn sites or common illnesses, and then modify the page being displayed based on that information. (Via Google Blogoscoped.)


Why don’t we only search terrorists?

Bruce Schneier answers the question “why do we bother making people with security clearances go through airport security?” with the obvious answer “how would an airport screener know if you have a security clearance?”

Heck, as long as we’re living in fantasy land, why don’t they let non-terrorists bypass security and just focus on The Terrorists? After all, it must not be too hard to tell who’s a Terrorist and who isn’t, since we already single them out for torture, rendition to Syria and indefinite detention without review. Compared to that, what’s a little extra time in line at the airport?


The danger of forwarding

Kevin Drum has posted an email exchange between convicted lobbyist Jack Abramoff and Karl Rove’s assistant, Susan Ralston, part of a larger set released in a bipartisan report by the House Government Reform Committee. Apparently Abramoff sent an email asking for favors to Ralston’s personal(?) pager, and that email was forwarded to the Deputy Assistant to the President and then on to a White House aide. That aide in turn warned a colleague of Abramoff’s that “it is better not to put this stuff in writing in their email system because it might actually limit what they can do to help us, especially since there could be lawsuits, etc.” Abramoff’s response to his colleague’s warning: “Dammit. It was sent to Susan on her mc pager and was not supposed to go into the WH system.”

Political scandal aside, this teaches a fundamental security lesson about email. I have no idea whether Ralston’s pager was set to automatically forward email while she was on vacation or whether (more likely) she forwarded it on to the Deputy Assistant herself as a way to keep him in the loop. Regardless, it’s clear that Abramoff recognized that having such emails in the official White House system would be a liability, but he had no control over whether its recipients (either Ralston or possibly her automatic forwarder) would be as prudent.

People who want to speak “off the record” usually think about whether a communication channel is likely to be archived, is subject to subpoena, is secure and so forth. But as it becomes easier to transfer between channels that becomes harder to predict. You might not expect me to archive my voicemail, but if I automatically forward my messages to my email as audio attachments then it probably will be. Similarly, you might expect email sent within a company to stay protected inside the firewall, but if just one recipient forwards his email to his GMail account then that security is blown wide open. The folks involved in the Abramoff scandal deserve to be outed, but the next person to be tripped up by this kind of error might not be so deserving.
