
Monday, February 22, 2016

Apple vs the FBI

What's up with Apple and the FBI?

Several of my friends and family have asked me about this case, which has been in the news a lot recently. A whole lot of news stories have been written trying more-or-less successfully to explain what's going on here, often with ill-fitting analogies to locks and keys, and it seems like a lot of people (including some of our presidential candidates) are just as confused about what's going on now as they were when the whole thing started. The Wired article above is really very good, but it's long, fairly-technical, and doesn't cover the non-technical side of things particularly well.

So, since y'all asked, here are some of my thoughts on the case. I'm going to be kind of all over the map here, because I've gotten questions about the moral side of things as well as the technical. I'm going to mostly skip over the legal side of things (because I'm unqualified to comment), except for a couple of specific points.

On the off-chance that someone stumbles across this who doesn't already know who I am, I'm a computer programmer, and I have worked on encryption and digital security software for a number of different companies, including 3 of the 5 largest PC manufacturers.

I'm going to try to stay away from using any analogies, and just explain the actual technology involved as simply as I can, since I know you can handle a bit of jargon, and the analogy-slinging I see on Facebook isn't making things any clearer for people, as far as I can tell. There will be links to Wikipedia articles in here. You don't need to read them, but they are there in case you want to read more about those subjects.

First, a very quick run-down of what this is all about:
  • The FBI has an iPhone that was used by Syed Rizwan Farook, one of the shooters in the San Bernardino shootings last December.
  • The phone is locked (of course), and the FBI wants Apple to help them unlock it, and in fact has a court order requiring Apple to do so.
  • Apple is refusing to do what the FBI wants, for some fairly-complicated reasons.
  • A whole lot of people, including information security experts, law experts, and politicians, have weighed in on how they think this should go.

So, what's my take on all this?

Encryption does not work the way you might think it does, from watching movies or TV.


In the movies, you always see "hackers" running some piece of software that puts up a progress bar, and the software makes gradual progress over the course of seconds or minutes, until the encryption is "broken", and the spy gets access to the data they need. In the real world, unless the encryption implementation is fundamentally-broken by design, the only way to break in is by trying every possible key (we call this a "brute force attack"), and there are an enormous number of possible keys. You could get in with the very first key you try, or you might end up checking every possible key before you find the right one. Nothing about this process gives you any information about whether you're "close" to getting the right key, or whether you've still got billions of keys to try.
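
To make the "try every key" idea concrete, here's a toy sketch in Python. It uses a deliberately feeble single-byte XOR cipher (my own invention for illustration, nothing to do with the iPhone) so that the whole key space is only 256 keys and the loop actually finishes:

```python
# Toy brute-force attack, for illustration only. The "cipher" here is a weak
# single-byte XOR, so the entire key space is just 256 keys. A real AES-256
# key space is 2**256, which is why the same approach goes nowhere there.

def xor_crypt(data: bytes, key: int) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts
    return bytes(b ^ key for b in data)

ciphertext = xor_crypt(b"meet at the usual place", 0x5A)

for candidate in range(256):  # try every possible key, one at a time
    plaintext = xor_crypt(ciphertext, candidate)
    if all(32 <= b < 127 for b in plaintext):  # looks like printable text?
        # Nothing along the way told us we were "close"; a key either works
        # or it doesn't, and we may still have to recognize the real message
        # among any false positives.
        print(f"key {candidate:#04x} -> {plaintext.decode('ascii')}")
```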

The data on the iPhone is encrypted with a key long enough that trying to decrypt it through brute force is essentially impossible.

The data on the iPhone is encrypted using AES, the Advanced Encryption Standard, which was standardized by the US government for companies like Apple to use to secure data for their customers. As far as anybody knows, brute force is the only way to attack AES, and with a 256-bit key (as is used on the iPhone), it'd take literally billions of years to try every possible key, even if you used all of the computing power in the world.
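
If you want to see where "billions of years" comes from, here's the back-of-the-envelope arithmetic. The guess rate below is a made-up, wildly generous assumption on my part, not a real-world figure:

```python
# How long would it take to try every possible 256-bit AES key?
key_space = 2 ** 256                  # number of possible keys
guesses_per_second = 10 ** 18         # assumed rate: a billion billion guesses/sec
seconds_per_year = 60 * 60 * 24 * 365

years = key_space / (guesses_per_second * seconds_per_year)
print(f"about {years:.1e} years")     # roughly 4e51 years, even at that absurd rate
```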

Apple doesn't have that key to hand over to the FBI

The key used to encrypt data on the iPhone is derived from a combination of a device-specific key, and the pass-code which the user has set on the phone. There's no way to extract the device-specific key from the phone, and there's no record of which phone uses which device-specific key. This is done on purpose, because if you could get that data, it'd make it much easier for anyone to extract your personal data from your phone.
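
Here's a minimal sketch of the "combine a device secret with the pass-code" idea, using PBKDF2 from Python's standard library. To be clear, this is not Apple's actual key-derivation scheme (theirs is entangled with a hardware key that never leaves the chip); it's just meant to show that you need both pieces to reproduce the key:

```python
# Sketch: derive an encryption key from a device-unique secret plus the user's
# pass-code. NOT Apple's real scheme -- just the general idea that neither
# piece alone is enough to reconstruct the key.

import hashlib
import os

device_unique_key = os.urandom(32)   # stand-in for the secret fused into the chip
passcode = "1234"                    # hypothetical user pass-code

derived_key = hashlib.pbkdf2_hmac(
    "sha256",
    passcode.encode(),               # what the user knows
    device_unique_key,               # what only this one device contains
    iterations=100_000,              # deliberately slow, to hamper guessing
)
print(derived_key.hex())
```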

Since you can't get the device-specific key, even if all of the data were extracted from the phone, you'd be faced with performing a brute-force attack on the encryption (which is impossible, see above).

You don't need the device-specific key if you can guess the pass-code to the phone

Obviously, if the phone has a 4-digit pass-code, you only need to try at most 10,000 different codes (0000-9999) in order to unlock it. You could sit an FBI intern down in a cubicle with the phone, and a day or so later, it'd be unlocked. That'd be a really boring shift for them, but you could still do it. If the phone has a 6-digit lock code, that becomes substantially less-convenient, and you're into the range of a full-time job for a year or more.

But you might not be able to do that either, depending on the phone's settings. One of the security settings you can set on the iPhone is for it to erase the data on the phone after 10 incorrect pass-code attempts. The FBI seems to think that this option is enabled for Farook's iPhone.

Here's what the FBI says that they want Apple to do

The FBI wants Apple to produce a custom version of iOS (the iPhone software), and load it onto Farook's iPhone, to enable them to quickly try all of the possible pass-codes.

This custom software would:

  1. Disable the "erase after 10 incorrect codes are entered" feature (of course)
  2. Allow the FBI to feed possible pass-codes to the iPhone from a connected computer, rather than requiring some poor intern to enter each one by hand.
  3. Reduce the amount of time required between entering each code, so they can check them faster. That wouldn't matter much for a 4-digit code, so maybe the FBI suspects Farook used a longer one (see the rough timing sketch below).
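
Some rough arithmetic on why items 2 and 3 matter. The per-attempt times here are assumptions I've picked for illustration, not Apple's or the FBI's actual numbers:

```python
# Rough arithmetic: how long to try every pass-code, by hand vs. automated.
# The per-attempt times are illustrative assumptions, nothing more.

def time_to_try_all(code_length: int, seconds_per_attempt: float) -> float:
    """Hours needed to try every possible code of the given length."""
    return 10 ** code_length * seconds_per_attempt / 3600

for length in (4, 6):
    by_hand = time_to_try_all(length, seconds_per_attempt=10)     # an intern typing
    automated = time_to_try_all(length, seconds_per_attempt=0.1)  # fed from a PC
    print(f"{length}-digit code: ~{by_hand:,.0f} hours by hand, "
          f"~{automated:,.1f} hours automated")
# 4 digits: ~28 hours by hand (a boring day or so), under an hour automated.
# 6 digits: ~2,778 hours by hand (a year-plus of full-time work), ~28 hours automated.
```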


Can Apple do it?

Apparently so, or at least Apple CEO Tim Cook hasn't made the claim that they can't comply with the court order, just that they shouldn't be required to. It probably would not be that much work, actually. Items 1 and 3 up there should be trivially-easy to change, and #2 is probably not a huge amount of work for someone who's familiar with interfacing the iPhone to a PC. Somewhere between "one guy working over the weekend" and "two guys working on it for a week" is probably a reasonable guess.

Here's why Apple says that they shouldn't be forced to do this


It's a violation of their customers' privacy

Tim Cook says in his open letter that the FBI's request amounts to:
The government is asking Apple to hack our own users and undermine decades of security advancements that protect our customers — including tens of millions of American citizens — from sophisticated hackers and cybercriminals. 
It was much simpler for Apple to bypass the pass-code on earlier models of the iPhone, and they've expended substantial effort over the last few revisions to make it much harder for people to break into iPhones (and newer ones are even more-secure than the phone in this case). This is valuable protection for individual customers' data, and has contributed in large part to reducing the number of phones stolen, since they can be locked in such a way that they can't be easily re-sold. This same cryptographic technology is also what keeps trade secret information that's stored on businesspeople's phones from being copied as a matter of course every time they travel to a foreign country.

This is not a normal subpoena, it's a special court order

Normally, law enforcement agencies will get a court order to compel a company or individual to turn over information or physical evidence that is relevant to a particular investigation. Apple has cooperated in previous investigations (and even in this specific case) with those sorts of orders. This is something else entirely.

Using the All Writs Act, an obscure 18th-century law, the FBI is trying to force Apple to engage in an activity that they wouldn't otherwise do (and which will have a negative impact on their business and customers). The All Writs Act has some significant restrictions in terms of when it can be invoked, but there's remarkably-little restriction on what a court can use it to order.

Once the FBI successfully uses the All Writs Act to force Apple to produce a custom version of iOS, they will have established a precedent where they can use it to compel Apple (or any other technology company) to take whatever actions they think might be helpful to aid any investigation they might launch. Lest you think I'm veering into conspiracy-theory territory here, consider the following:

Several statements that the FBI has made to the court and in the news are either extremely naive or deliberately misleading.

The FBI has made statements both in their court filings and in the press which are simply untrue. If it weren't for the fact that the people making these claims are actual forensics experts (or work with such experts), I'd be inclined to say that they just don't know what they're talking about. Given that they do work for the FBI, I think it's reasonable to hold them to a much higher standard of clueful-ness.

It's just for this one phone for this one investigation

I can't believe that anybody would think they could get this argument past a judge. Of course, if this tool exists, the FBI (and every other police/security agency in the US and every other country in the world) will require that this custom firmware version be loaded on whatever iPhones they might have picked up in the course of an investigation. And it'd be so much easier if they could just hold on to the firmware themselves, and apply it themselves to iPhones where they have a warrant. This isn't even a "slippery slope" argument, it's just what will obviously happen.

Several news articles have mentioned China, but really any country that has a poor human rights record would obviously misuse this tool, if it was available. In particular, the Gulf states have an atrocious record on human rights, and a track record of forcing technology companies to compromise customer security to make things easier on their state security agencies (see: Saudi Arabia and BlackBerry).

There may be critical information on this phone that leads to other terrorists that Farook was in contact with.

It's very unlikely that there's actually any information on this phone that'd be useful to the FBI investigation. First off, this isn't even Farook's personal phone. It's the work phone that was issued to him by his employer, the County of San Bernardino. I mean, you should never overestimate the intelligence of criminals, but what kind of idiot would plan their attack on a county facility using their county-supplied phone?

In any case, Farook destroyed his own personal phone, as well as a laptop and several other devices, before carrying out the attack. If he went to all that trouble to destroy evidence, it seems unlikely that he just plain forgot to destroy his work phone. It's much more-likely that there was never anything remotely-incriminating on it to begin with.

Secondly, the FBI already has access to backups of that phone up to about a month before the attack. So at most, they'd be getting information that was added to the phone in the weeks between that last backup and the attack.

And finally, almost all of the relevant data you might get from that phone is already in the FBI's hands through other channels. They've already got access to the call records, emails, and other communications from that phone and Farook's other devices.

Apple can make this hack so that it only works on this one iPhone, eliminating any risk to Apple's other customers.

Well, sure, in a trivial sense. In a much more-significant sense, this is a content-free statement. In the trivial sense, Apple can of course add extra code to this custom version of iOS so that it only works on Farook's phone. But strictly speaking, they can't even do that - they have to test it first, of course, so that means it has to be installable on at least two phones. And it'd obviously be trivial to change which phones it works on later, which brings us back to the original "it's only for this one phone" nonsense above.

Additionally, this runs into conflict with the requirements of the "All Writs Act", which is the justification for this order. They're not allowed to create an "undue burden" on Apple, and having Apple set up a whole new internal process for creating thousands of custom versions of iOS for every investigation in which it might be useful is not a trivial thing.

Right now, Apple needs to be very careful about which OS updates it digitally "signs", which is the process that's needed to allow the software to be installed on customers' phones. There are hundreds or maybe thousands of Apple employees who have access to the tools and the source code to make changes in iOS. But that final step of signing an update is necessarily restricted, because the key for that process allows you to say to the world "this software is approved by Apple". They're presumably quite careful with that key. You can make the argument (and people have) that digitally-signing a file is essentially the same as a physical signature, and you shouldn't be able to compel someone to sign something digitally any more than you can legally compel them to sign a physical piece of paper.

I don't know about Apple, but at one of my former employers, we kept our code-signing key, and the laptop with the software for signing our updates, physically locked up in a safe. The key was actually split into multiple parts, which were only accessible to certain people. Because if you can sign software, you can make it do anything you want. You can remove the DRM which is used to secure purchased content, steal all of a customer's personal data, anything.
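
To make the signing/verification relationship concrete, here's a minimal sketch using Ed25519 signatures from the third-party cryptography package for Python. Apple's actual signing process and algorithms are different; the point is just that the public key can only verify, and everything hinges on keeping the private key locked down:

```python
# Minimal code-signing sketch: anyone holding the public key can verify, but
# only the holder of the private key can produce a valid signature. Uses the
# third-party "cryptography" package; Apple's real process differs.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # kept locked up, e.g. in a safe
verify_key = signing_key.public_key()        # baked into every phone

firmware = b"pretend this is an iOS update image"
signature = signing_key.sign(firmware)       # "this software is approved by us"

try:
    verify_key.verify(signature, firmware)   # phone checks before installing
    print("signature valid: install allowed")
except InvalidSignature:
    print("signature invalid: refuse to install")

try:
    verify_key.verify(signature, firmware + b"!")  # flip anything and it fails
except InvalidSignature:
    print("tampered image rejected")
```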

There's a larger issue at stake here - the very idea of privacy is under attack

Ever since the ratification of the Bill of Rights, there has been a back-and-forth argument in this country over the right balance between the citizen's right to privacy and the state's need for security. Since the development of effective commercial cryptography in the late 20th century, the debate has gotten significantly more-heated.

Privacy is a basic right, guaranteed by the Bill of Rights here in the US

The 4th Amendment to the US Constitution says:
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

This controls the sorts of searches that the police (FBI, etc) can perform. In particular, they need probable cause, and a court-issued warrant. Over the last couple of centuries, that's been dialed back a bit, and US citizens are routinely searched without a warrant, and without probable cause. But there are still limits, and if you, your home, or your stuff is unreasonably-searched, you can contest that in court (and you might even win).

When the constitution was written, the founding fathers could not have imagined the sort of surveillance technology we have today.

In 1789, if you wanted to have a private conversation with a friend or family member, you could take them aside into a room, or go for a walk in the woods, and if you couldn't see anybody else, chances are nobody would overhear what you had to say. With a wax seal on your mail, you could see whether it had been tampered with (or read by someone else) in transit.

Modern communication systems (email, telephone, chat) are much easier to listen in on, and when new technology comes along, it has typically taken a while for the Supreme Court to come to the conclusion that whatever new-fangled communication system you use, it's essentially the same as a letter, for legal purposes. Tapping phone lines without a warrant used to be totally legal. Same with intercepting email and other electronic communications.

The question of whether or not you can be compelled to unlock your own phone, even if it contains potentially-incriminating evidence, is possibly still open, despite the fact that that seems like an obvious violation of the 5th Amendment.

Strong encryption flips the balance of privacy back to the way things were in the 18th century

When you have access to strong encryption, you have privacy by default. This is as it should be. Until the early 1990s, most encryption that was available commercially was just terrible. Since the development of the World Wide Web, the level of sophistication of the cryptography available to individuals and commercial users has vastly improved.

The US government has fought the availability of effective encryption for decades

After World War II, a war which the Allies won in part due to their ability to decrypt German secret messages, the US government set up the NSA to ensure that they had a lead in cryptographic technology. And until the growth of academic cryptographic research in the 1980s and 1990s, their expertise was unmatched. The NSA has a weird double mission. On the one hand, they're supposed to protect US military and civilian communications from foreign spies. On the other hand, they're supposed to develop ways to break encryption used by other organizations, to support US intelligence-gathering. When it comes to commercial encryption, these goals are directly in conflict.

When the first truly effective encryption systems began to become commercially available, the NSA tried to keep their ability to listen in on communications by restricting the length of keys that could be used in software that was being exported. Eventually, it became obvious that that was only going to disadvantage US software makers, and the restriction was lifted.

During the Clinton administration, the NSA proposed Clipper, a cryptography system that would make it easy for law enforcement to listen in on communications (with a warrant, at least in principle), but would be very difficult for foreign governments, hackers, and others to break. It turned out to have a number of fundamental flaws, and was pretty quickly killed.

More recently, the NSA appears to have been caught inserting a flaw into a security standard that they helped develop.

Law enforcement and security agencies now have much greater ability to collect data that's not specifically protected with encryption

Despite better security of communications overall, the security apparatus has continued to press the boundaries of what information they can gather without a proper warrant. Here are a few recent(ish) examples.

The FISA court

In order to allow Federal law enforcement and intelligence agencies to obtain search warrants without having to publicly disclose what they're searching for, and whom they're searching, Congress created a parallel court system, the Foreign Intelligence Surveillance Court (the FISA court). This court provides search warrants, and has been involved in issuing court orders to compel commercial companies to cooperate with the NSA in collecting data, including information on US citizens, which the NSA is explicitly barred from collecting.

Telephone metadata collection

The NSA has been, for many years, collecting telephone meta-data (who's calling whom) for essentially every telephone call placed inside the United States (and several other countries). This only came to light because of Edward Snowden's whistle-blowing, because of course they got the authority for that from the secret FISA court.

StingRay

The StingRay system is basically a "fake" cell tower that broadcasts a signal that causes every mobile phone within range to report its location. They can be used to track the location of mobile-phone users in bulk, and can also apparently be used to intercept calls. These systems have been provided to local police forces via grants from the Department of Homeland Security, and they're used in a variety of ways that are at best quasi-legal (in that they haven't been specifically declared illegal yet).

Automated number plate readers

These machines are just about everywhere. They're used to automatically collect tolls, the police use them to search for cars that are associated with wanted criminals, repo men use them to look for cars that the owners have stopped making payments on, etc, etc. My local city has them mounted on the parking enforcement golf-carts, so they can just cruise down the street and collect the location and license plate numbers of every single car parked anywhere in the city.

And again, there's no law telling either the police or the private companies what they can or can't do with this information, so it gets used (and mis-used) for all sorts of things. The police have no need and no right to know where my car is at all times, as long as I'm actually following the parking rules.

What happens now?

I think there's a good chance that the court will make the "right" decision here, and side with Apple after considering their response. Either way, you should expect that Apple (and other manufacturers) will make additional efforts to ensure that they themselves cannot circumvent their own security systems. If the court makes the "wrong" decision, then there will be a whole lot more of these court orders issued in the near future, and that's bad news for privacy.


Saturday, January 24, 2009

This week's iPhone SDK sob story

I have ranted about this before, I know, but I'm a little irritated. Every single time I update the iPhone tools, I run into some crazy issue building code that worked just fine on a previous version.

This week, after digging my office out from under all the mess from moving to a new house, I revisited one of my older projects (yes, Pictems is finally getting an update!). And I ran into not one, but two of these issues. That's not counting the usual Code Signing errors, which I don't even pay attention to - I just click randomly on the Code Signing options until they go away.

(For my friends on the XCode team: Yes, I will file bugs on these issues, once I figure out what's going on. This is not a bug report)

Issue #1: During some early experimentation, I had set the "Navigation Bar Hidden" property on one of my Nib files. It didn't seem to do what I wanted, but I didn't bother to change it back. At some point, a change was made such that it now works. Great, but apparently the change was actually made in one of the iPhone tools, so even if I build my old project, with the SDK set to 2.0, I still get the new behavior. Easy to fix, but it's weird to have to change my "archived" version of my source so it builds correctly with the current version of XCode. If I build my old project against the old SDK, I'd expect to get the old behavior.

Issue #2: One of my resource files has a $ character in the name. One of the XCode copy scripts apparently changed such that it's not escaping the filename correctly, so now the resource doesn't get copied. Amusingly, no error message results - the file just ain't there. Yes, it's dubious to name a file with a $ in the name. But, again, it used to work just fine.

Oh, well. In the bigger scheme of things, I still prefer XCode/iPhone to Eclipse/Android...

Tuesday, November 11, 2008

Just In Time compilation vs. the desktop and embedded worlds

Okay, rant mode on. As I was waiting for Eclipse to launch again today, it occurred to me that one of the enduring mysteries of Java (and C#/.NET) for me is the continued dominance of just-in-time compilation as a runtime strategy for these languages, wherever they're found. We've all read the articles that claim that Java is "nearly as fast as C++"; we also all know that that's a bunch of hooey, particularly with regard to startup time. Of course, if Eclipse wasn't periodically crashing on me with out-of-memory errors, then I'd care less about the startup time - but that's another rant. Back to startup time and JIT compilation...

If you're creating a server-based application, the overhead of the JIT compiler is probably pretty nominal - the first time through the code, it's a little pokey, but after that, it's plenty fast, and you're likely throttled by network I/O or database performance, anyway. And in theory, the JIT compiler can make code that's optimal for your particular hardware, though in practice, device-specific optimizations are pretty minimal.

On the other hand, if you're writing a desktop application (or worse yet, a piece of embedded firmware), then startup time, and first-time through performance, matters. In many cases, it matters rather a lot.

There are a number of advantages to writing code in a "managed", garbage-collected language, especially for desktop applications - memory leaks are much reduced, you eliminate buffer overflows, and there is the whole Java standard library full of useful code that you don't have to write for yourself. I'm willing to put up with many of the disadvantages of Java to gain the productivity and safety advantages. But waiting for the Java interpreter to recompile the same application over and over offends me on some basic level.

On a recent project, we used every trick in the book to speed up our startup time, including a "faked" startup splash screen, lazy initialization of everything we could get away with, etc, etc. Despite all that effort (and unnecessary complication in the code base), startup time was still one of the most common complaints from users.

Quite a bit of profiling was done, and in our case, much of the startup time was taken up deep inside the JIT, where there was little we could do about it. Why oh why doesn't anybody make a Java (or .NET) implementation that keeps the safe runtime behavior, and implements a simple all-at-once compilation to high-performance native code? Maybe somebody does, but I haven't heard of them.

For that matter, why don't the reference implementations of these language runtimes just save all that carefully-compiled native code so they can skip all that effort the next time? The .NET framework even has a global cache for shared assemblies. Why those, at least, aren't pre-compiled during installation, I can't even imagine.

Update:
I was helpfully reminded of NGen, which will pre-compile .NET assemblies to native code. I had forgotten about that, since so much of my most recent C# work was hosted on Mono, which does things a bit differently. Mono has an option for AOT (ahead of time) compilation, which works, sort of, but could easily be the subject of another long article.

Sunday, October 19, 2008

Returning to Java, after 10 years away


I'm once again writing Java code professionally, something that I haven't done in nearly 10 years (no, really - I had to stop and think it through because I didn't believe it, either). A couple of thoughts did occur to me, after I'd figured out the time frames involved.

I was a little taken aback by the very idea that Java is more than 10 years old. It just seems weird that a new programming language could go from introduction to being a major part of the world's IT infrastructure and college curriculums, in less time than I've been living here in California.

Java sure has evolved a lot in the last 10 years. There have been major changes to the language, the libraries, and the tools. I'd bet that some of my 10-year old Java code would throw deprecation warnings for nearly every line of code...

On the other hand, my final thought is along the lines of "Oh, my god. So much has changed, but Java is still irritating in nearly all the ways that made me crazy ten years ago! What have these people been up to for the last decade?"

Oh, and I was the first person I knew to "quit" Java, much like I was the first person to "quit" World of Warcraft. Hopefully backsliding on the Java thing doesn't mean I'm about to backslide on the WoW thing - I can't afford the lost time. I've got to learn about how you do things in Java again.

One good thing for my loyal readers (if any exist) is that I have a bunch of stored-up vitriol about Java that I can just uncork and pour out, so I should be updating more frequently.

Tuesday, March 18, 2008

Is it my imagination?


Am I mis-remembering, or did Internet Explorer formerly produce more helpful error messages than this one when failing to connect to a website? Is this another IE7 "feature"? Why would you display the exact same error message for no network connectivity, a DNS failure, and a site that doesn't respond?

Given that it's trivially easy to distinguish each of these three cases from the others, why not at least tell me which one is actually the problem?

Grr.