Yet another infrequently-updated blog, this one about the daily excitement of working in the software industry.
Sunday, August 09, 2020
Good takes and bad takes on Apple Silicon
Tuesday, June 23, 2020
What Apple Announced at WWDC 2020
Mac with Apple Processors, across the line
Rosetta 2, to run existing Intel Mac Applications
Running iPad and iPhone applications without modification
Porting assistance for Open-Source apps and frameworks
So, what’s it all mean?
There is one major downside, for a certain subset of users
What about Games?
Will I buy one?
Saturday, June 13, 2020
ARM Macs are coming, and faster than you think
(note: This is a lightly-edited version of a post originally published on June 13th, 2020)
We all knew this was coming. In fact, some of us have been expecting it for years. Various rumor outlets are saying that Apple will announce at WWDC that they're transitioning the Macintosh line from using Intel's processors to Apple's own processors, which are based on the ARM architecture.
A bunch of people have written extensively on this rumor, but I have a few thoughts that I haven't seen others concentrate on.
One thing you see a lot of disagreement about online is how long it'll take for Apple to convert its whole lineup of Macs to use its own processors, or if it even will. I've seen people say that they think they'll announce a single model of ARM Mac, then over the course of 2-3 years, move all of the product line over to ARM. I've even seen people predict that they'll keep the "Pro" line of computers on x86 for the foreseeable future, and only convert the portable line to ARM.
A case can be made for those positions, but here's what I think: If Apple announces a transition to ARM at WWDC, it'll happen surprisingly quickly. I wouldn't be at all surprised if the first ARM Macs ship before the end of 2020, and the whole line is switched over before the end of 2021. That seems like a pretty far out-there prediction, compared to the "consensus" view, so let's take a look at the previous transitions, and how this one is different.
We've been here before, but then again, this is very different
This will be the third major processor transition for the Macintosh (and the 5th major software transition overall). Originally, the Mac used the Motorola m68k processor family. After 10 years, the m68k family was failing to make regular improvements in performance, and Apple started to look at other options, finally settling on the PowerPC. They moved the Mac products from m68k to PPC over the course of about 18 months. Years later, they transitioned from PowerPC to Intel, over the course of about 12 months. And now, we're apparently on the cusp of another processor transition. How will this one turn out? And most importantly: WHY NOW?
Transition 1: The m68k to PowerPC transition
"Any sufficiently-advanced technology is indistinguishable from magic"
– Arthur C. Clarke
This transition was very difficult, both for Apple and for third parties. At the time that Apple announced the change, they were still using what we now call the Classic MacOS. Large parts of the operating system, and the applications that ran on it, were written in Assembly, with an intimate understanding of the hardware they ran on.
Consequently, Apple developed a processor emulator, which would allow existing m68k code to run on the PowerPC without any changes. You could even have an application load a plugin written for the other architecture. The new PPC version of MacOS maintained a shadow copy of all its internal state in a place where 68k applications could see (and modify) it - that was the level of compatibility required to get anything to work. A heroic effort, and it paid off - most software worked out of the box, and performance was "good enough" with emulated code, because the PPC chips were much faster than the m68k chips they were replacing.
The downside of that sort of transition is that it takes many years to complete. There was relatively little pressure on the third parties to update their applications, because they ran just fine on the new models. Even the MacOS itself wasn't completely translated to native code until several years later.
Transition 2: The MacOS X transition
“If I need to make that many changes, I might as well drop the Mac, and go to Windows”
– Some Mac developer, a Halloween party in 1999
A few years after the PPC transition, Apple announced MacOS X, and software developers were faced with another transition. Originally, OS X was intended to be a clean break with Classic MacOS, with an all-new underlying operating system, and a brand new API, Cocoa (based on the OPENSTEP software which came in with the NeXT acquisition).
Major developers were (understandably) not enthusiastic about the prospect of rewriting the majority of their existing applications. Eventually, Apple caved to the pressure, and provided Carbon, a "modern" API that kept much of the same structure, but removed some of the more egregious aspects of Classic MacOS programming. Apple made it clear that they considered Carbon a transitional technology, and they encouraged developers to use Cocoa. The reaction from the larger developers was pretty much "meh." Quite a few smaller long-time MacOS developers enthusiastically embraced the new APIs though, appreciating the productivity boost they provided.
A footnote to this chapter of the saga is that the "Developer Preview" versions of Rhapsody, the first Mac OS X releases, actually had support for running the OS on Intel-based PC hardware. That didn't survive the re-alignment which gave us Carbon, and MacOS X 10.0 shipped with support for PowerPC Macs only.
Things were pretty quiet on the Macintosh front for a few years. New versions of OS X came out on a regular schedule, and Apple kept coming out with faster and better PowerBooks, PowerMacs, iBooks, and iMacs. And then, suddenly, the PowerPC processor line had a few unexpected hiccups in the delivery pipeline.
Transition 3: The Intel transition
“Wait, you were serious about that?”
– Carbon-using developers, overheard at WWDC 2005
The PowerPC processors were looking less and less competitive with Intel processors as time went by, which was an embarrassment for Apple, who had famously built the PowerPC advertising around how much faster their chips were than Intel's. The "G5" processor, which was much-hyped to close the gap with Intel, ran years late. It did eventually ship, in a form that required liquid cooling to effectively compete with mainstream Intel desktop PCs. The Mac laptop range particularly suffered, because the low-power laptop chips from Motorola just...never actually appeared.
And so, Apple announced that they were transitioning to Intel processors at WWDC 2005. I was working in the Xcode labs that year, helping third-party developers to get their code up and running on Intel systems. I worked a lot of "extra shifts", but it was amazing to see developers go from utterly freaked out, to mostly reassured by the end of the week.
For any individual developer, the amount of “pain” involved in the transition was variable. If they’d “kept up with” Apple’s developer tools strategy in the years since the introduction of Mac OS X, no problem! For the smaller indie developers who had embraced Xcode, Cocoa, and all of Apple's other newer framework technology, it actually was a trivial process (with one exception). They came into the lab, clicked a button in Xcode, fixed a bunch of compiler warnings and errors, and walked away with a working application, often in just an hour or so.
For the developers with large Carbon-based applications built using the Metrowerks compiler, it was a real slog. Because of CodeWarrior-specific compiler extensions they'd used, different project structures, etc, etc, it was hard to even get their programs to build in Xcode.
The exception to the "it just works" result for the up-to-date projects is any kind of external I/O. Code that read or wrote to binary files, or communicated over a network, would often need extensive changes to flip the "endianness" of various memory structures. Endianness is something you generally don’t need to think about as a developer in a high-level language, especially if you're only developing for one platform, which also just happens to use the same endianness as the Internet does. Luckily, these changes tended to be localized.
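The shape of those byte-order fixes is easy to sketch. Here's a minimal Python illustration, with a made-up one-field binary format (a single 32-bit record count) standing in for a real file format:

```python
import struct

# Hypothetical file format: one unsigned 32-bit record count, written
# by a big-endian (PowerPC) program.
data_on_disk = struct.pack(">I", 0x12345678)

# An Intel (little-endian) program that treats those bytes as raw
# memory reads the byte-swapped value:
naive = struct.unpack("<I", data_on_disk)[0]

# The fix is to declare the file's byte order explicitly when reading:
correct = struct.unpack(">I", data_on_disk)[0]

print(hex(naive), hex(correct))  # 0x78563412 0x12345678
```

The change itself is mechanical; the hard part was finding every place in a large codebase that read or wrote raw memory structures.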
Transition 4: The 64-bit transition
“You can't say we didn't tell you this was coming..."
– Some Apple Developer Tools representative (probably Matthew), at WWDC 2018
The first Intel-based Macs used processors that could only run in 32-bit mode. I consider this one of Apple's biggest technology mistakes ever. They should have gone directly to 64-bit Intel from the get-go, even though that would have required waiting for the Core 2 Duo processors from Intel, or using AMD chips, or doing the iMac first and the notebooks last.
Regardless, after the first year, all Macs were built with 64-bit capable processors, and MacOS started supporting 64-bit applications soon after. Technically, the previous versions of Mac OS X supported 64-bit applications on the "G5" processors, but that was only available in the Power Mac G5, and very few applications (other than ports from workstation hardware) bothered to support 64-bit mode.
Unfortunately for the folks hoping to see a glorious 64-bit future, there was again very little incentive for developers to adopt 64-bit code on MacOS. One of the advantages of Intel-based Macs over the PowerPC versions was that you could reuse code libraries that had been written for Windows PCs. But of course, almost all Windows applications were written for 32-bit mode, so any code shared between the Windows and Mac versions of an application needed to be 32-bit. You also can't mix and match 32-bit and 64-bit code in the same process on MacOS. So most MacOS applications remained 32-bit for years after the last 32-bit-processor Mac was sold. Even when OS X 10.7 dropped support for 32-bit processors entirely, most applications stayed 32-bit.
Apple told developers at every WWDC from probably 2006 on that they should really convert to 64-bit. They'd talk about faster performance, lower memory overhead for the system software, and various other supposed advantages. And every year, there just didn't seem to be any great need to do so, so mostly, all Mac software remained 32-bit. A few new APIs were added to MacOS which only worked in 64-bit applications, which just had the unfortunate effect of those features never seeing wide adoption.
Eventually, Apple's tactics on this issue evolved from promises to threats and shaming. Developers were told at WWDC that 32-bit applications would not be supported "without compromises" in High Sierra. Then, when High Sierra shipped, we found that Apple had added a warning message that 32-bit applications were "not optimized" for the new OS. That got the end users to start asking developers about when they were going to "optimize" for the new operating system. For the better part of a year, many developers scrambled to get their apps converted before MacOS Mojave shipped, because they made the reasonable assumption that the warning message was implying that Mojave wouldn't support 32-bit applications. But then Mojave shipped, and 32-bit apps ran the same as they ever had, with the same warning that was displayed in High Sierra. And then, in MacOS Catalina, they finally stopped allowing 32-bit software to run at all.
Converting an existing 32-bit Cocoa application to 64-bit is not particularly difficult, but it is...tedious. You end up having to make lots of small changes all over your code. In one project that I helped convert, there were changes needed in hundreds of source code files. We got there, but nobody thought it was fun, and it seemed so pointless. Why inflict this pain on users and developers, for what seemed like no gain?
You couldn't say that we weren't warned that this was coming, since Apple had been telling developers for literally a decade to convert to 64-bit. But third-party developers were still pretty confused about the timing. Why "suddenly" deprecate 32-bit apps for Catalina? Just to incrementally reduce the amount of maintenance work they needed to do on MacOS? Or to reduce the amount of system overhead by a handful of megabytes on an 8GB Mac? It didn't make sense. And why did they strongly imply it was coming in Mojave, then suddenly give us a reprieve to Catalina?
Transition 5: The ARM Mac
“The good news is, you've already done the hard part”
– Apple, WWDC 2020
With all of this in mind, I think that the sudden hard push for 64-bit in High Sierra and beyond was a stealth effort to get the MacOS third-party developers ready for the coming ARM transition. When High Sierra shipped, almost all MacOS software was 32-bit. Now that Catalina is out, almost all the major applications have already transitioned to 64-bit. Perhaps the reason the “deadline” was moved from Mojave to Catalina was because not enough of the “top ten” applications had been converted, yet?
Prior to finally getting all third-party developers to adopt 64-bit, the transition story for converting to ARM would have been complicated, because the applications were all 32-bit, and the Mac ARM chips would be 64-bit (the iOS platform having had its 64-bit conversion a few years back). Apple would have been telling developers: "First, you need to convert to 64-bit. Then, you can make any changes needed to get your code running on ARM".
Now, it's going to be very simple: "If your application currently builds for Catalina, with the current SDK, you can simply flip a switch in Xcode, and it'll be able to run on the new ARM Macs, as well". That's not going to be literally true for many applications, for various reasons (binary dependencies on some other third-party SDK, Intel-specific intrinsics in performance-critical code, etc).
But this time, there is no endianness issue, no apps that will need to change their toolchain, none of the previous issues will be relevant. I also think it's quite likely that there will be very few arbitrary API deprecations in MacOS 10.16, specifically to make this transition as painless as possible for as many developers as possible.
What’s it all mean, then?
"All of these products are available TODAY"
– Steve Jobs's famous tagline, possibly heard again this year?
So then - on to timing. For the Intel transition, there were about 6 months between announcement and availability of the first Intel Mac, and another 8 months before the transition was complete. This time, it's likely that Apple will shrink the time between announcement and availability, because there's comparatively little work that needs to be done to get most applications up and running on the new hardware.
It's possible that they'll announce ARM Macs, shipping that very day. If Adobe and Microsoft are already ready to go on day one, it might even be plausible. I think they'll want to give developers some time to get ready, though. So, I predict 3 months, meaning ARM Macs in September, 2020. And I think they'll move aggressively to put as many Macs on their own processors as they can, because it's all a win for them - lower costs, better battery life, etc, etc.
"But what about the Mac Pro?", you'll hear from some experts. "Nobody's ever produced a Xeon-equivalent performance ARM chip. It'll take years to make such a design, if it's even possible at all".
The obvious comeback here is: Nobody knows what Apple has running in a lab somewhere, except the people who are working on the project. Maybe they already have an 80 Watt ARM powerhouse chip running in a Mac Pro chassis, right now. But even if they don't, I think it's reasonable to look at this from the "Why Now?" perspective, again.
The previous processor transitions were mainly driven by a need to stay competitive in performance with Intel. That is not the case this time, since the desktop/laptop competition is almost exclusively on the same Intel processors that Apple is using. The success, or rather the lack thereof, of other ARM-architecture computers & operating systems (Windows for ARM, ChromeBooks) doesn't make a compelling case for a switch to “keep up” with Microsoft or Google, either. So there's no hurry.
Given that there's no external pressure to switch, Apple must think that they have a compelling story for why they're switching. And that has to include the entire product line, since the Mac isn't their most-important product, and they surely aren't going to support two different architectures on it, just to keep existing Mac Pro users happy. They either have a prototype of this Mac Pro class processor ready to go, or they're very sure that they can produce it, and they have a believable roadmap to deliver that product. Otherwise, they’d just wait until they did.
Which Macs are switching first?
“I bet you didn’t see that coming, skeptics!”
– Me, maybe?
Everybody is expecting to see a new MacBook, possibly bringing back the 12-inch fanless form factor, and taking maximum advantage of the power savings and cooler operation of Apple's own chips. Some folks are expecting a couple of different laptop models.
What I would really love to see (but don’t much expect) is for Tim Cook to walk out on stage, announce the first ARM-based Mac, and have it not be a super-small, low-power consumer laptop product. I want it to be something high end that decisively outperforms the current Mac Pro, and establishes that this is going to be a transition of the whole line. I think that'd be a bold statement, if they could swing it.
Monday, November 20, 2017
Mergers: The Good
How to help your acquired employees succeed
Out of the 6 acquisitions I've been involved with, two really stand out as positive experiences, both for the acquired and the parent company. Here's what was different about those two mergers, as opposed to the ones that didn't go so well.
Integrate the new team quickly (Apple/NeXT)
In Apple's case, they acquired NeXT both to get new technology to base their next-generation OS on, and to get a fully-functional engineering organization. You can't really understand just how screwed up Apple was in 1996 unless you were there, but in the last quarter of the year, Apple lost over a billion dollars. They had, at that point, had 2 or 3 (depending on how you count) different "next generation" OS projects crash and burn, and the latest one, Copland, was on the verge of disaster – I've seen credible evidence that it wouldn't have shipped for another 5 years, if ever. Into all this swirling chaos came the NeXT team, understandably freaked out to be called on to "save" this huge, dysfunctional company from itself.

But one thing that was hugely encouraging, and helped us all settle in, was how quickly we were integrated into the Apple organization as a whole. Within a month after the acquisition, we were meeting with our counterparts in Cupertino, we had apple.com email addresses, our systems were on the Apple network, and we'd had an army of Apple HR folks over to the NeXT offices to get us transferred over to Apple's payroll and benefits.
It was still a very hard slog, and there was a LOT of anger from folks at Apple that had their friends laid off right after the acquisition, but feeling like we were legitimately part of the team, and not just a bunch of outsiders, helped us to fight the battles we had to fight.
Put the full support of the larger organization behind the newcomers (LG/WebOS)
After the debacle that was HP's acquisition of Palm (see the "Ugly" segment, coming soon), the folks remaining on the WebOS team were pretty nervous when we were told that we were being sold off to LG. "Oh, great, another absentee owner who will tell us we're important, but then never do anything."

And then we had our first meetings with LG's upper management. We were told that we would be building the user interface for all of LG's high-end smart TVs, that we were going to ship in less than a year, and that we were expected to deliver something BETTER than the existing NetCast software, which they had been shipping for a few years. "Oh, crap," I thought, "none of us know anything about smart TVs, or TVs in general." But then they told us: "The CEO has expressed his full support of this project, and you'll have as much support as you need."
I really didn't believe that we were going to get "as much support as you need", but sure enough, within a short time period after the acquisition, truckloads of current-generation TVs and prototype logic boards for the next generation started flooding into the office. And in the months after that, truckloads of engineers from Korea, who knew the hardware and the existing NetCast software intimately. Anything we asked for, we got – score one for top-down, authoritarian management style, I guess.
And we did it - a small group of developers, working their asses off, managed to build something in less than a year which was immensely better than the existing product, which had been shipping for several years. The next-generation smart TVs, with a new version of WebOS, were even better. This was definitely a high point for the "acquire a smaller company to bring innovation to the larger company" strategy. And it succeeded because the project had a powerful advocate within the larger company, and a VERY clear vision of what they wanted to accomplish.
Next week
What not to do to make your new employees feel welcome, and how to tell (as an employee) when things are likely to go sour quickly.

Monday, February 22, 2016
Apple vs the FBI
What's up with Apple and the FBI?
Several of my friends and family have asked me about this case, which has been in the news a lot recently. A whole lot of news stories have been written trying more-or-less successfully to explain what's going on here, often with ill-fitting analogies to locks and keys, and it seems like a lot of people (including some of our presidential candidates) are just as confused about what's going on now as they were when the whole thing started. The Wired article above is really very good, but it's long, fairly technical, and doesn't cover the non-technical side of things particularly well.

So, since y'all asked, here are some of my thoughts on the case. I'm going to be kind of all over the map here, because I've gotten questions about the moral side of things as well as the technical. I'm going to mostly skip over the legal side of things (because I'm unqualified to comment), except for a couple of specific points.
On the off-chance that someone stumbles across this who doesn't already know who I am, I'm a computer programmer, and I have worked on encryption and digital security software for a number of different companies, including 3 of the 5 largest PC manufacturers.
I'm going to try to stay away from using any analogies, and just explain the actual technology involved as simply as I can, since I know you can handle a bit of jargon, and the analogy-slinging I see on Facebook isn't making things any clearer for people, as far as I can see. There will be links to Wikipedia articles in here. You don't need to read them, but they are there in case you want to read more about those subjects.
First, a very quick run-down of what this is all about:
- The FBI has an iPhone that was used by Syed Rizwan Farook, one of the shooters in the San Bernardino shootings last December.
- The phone is locked (of course), and the FBI wants Apple to help them unlock it, and in fact has a court order requiring Apple to do so.
- Apple is refusing to do what the FBI wants, for some fairly-complicated reasons.
- A whole lot of people, including information security experts, law experts, and politicians, have weighed in on how they think this should go.
So, what's my take on all this?
Encryption does not work the way you might think it does, from watching movies or TV.
In the movies, you always see "hackers" running some piece of software that puts up a progress bar, and the software makes gradual progress over the course of seconds or minutes, until the encryption is "broken", and the spy gets access to the data they need. In the real world, unless the encryption implementation is fundamentally-broken by design, the only way to break in is by trying every possible key (we call this a "brute force attack"), and there are an enormous number of possible keys. You could get in with the very first key you try, or you might end up checking every possible key before you find the right one. Nothing about this process gives you any information about whether you're "close" to getting the right key, or whether you've still got billions of keys to try.
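A toy version of a brute-force search makes the point. The "cipher" here is a trivial XOR with a 16-bit key, chosen only so the loop actually finishes; real keys are 128 or 256 bits, but the shape of the attack is the same:

```python
from itertools import product

# Toy "cipher": XOR with a 2-byte key. XOR is its own inverse, so the
# same function encrypts and decrypts.
def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"ATTACK AT DAWN"
ciphertext = xor_crypt(plaintext, bytes([0x5A, 0xC3]))

# Try every key in order. Nothing tells us we're "getting close";
# each wrong key is simply wrong.
attempts = 0
for k in product(range(256), repeat=2):
    attempts += 1
    if xor_crypt(ciphertext, bytes(k)) == plaintext:
        break

print(attempts)  # 23236 -- could have been anywhere from 1 to 65536
```

Scale that up: AES-256 has 2^256 (about 1.2 × 10^77) possible keys. Even a billion machines each testing 10^18 keys per second would need on the order of 10^42 years to sweep the keyspace.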
The data on the iPhone is encrypted with a key long enough that trying to decrypt it through brute force is essentially impossible.
The data on the iPhone is encrypted using AES, the Advanced Encryption Standard, which was developed by the US government for companies like Apple to use to secure data for their customers. As far as anybody knows, brute force is the only way to attack AES, and with a 256-bit key (as is used on the iPhone), it'd take literally billions of years to try every possible key, even if you used all of the computing power in the world.

Apple doesn't have that key to hand over to the FBI
The key used to encrypt data on the iPhone is derived from a combination of a device-specific key, and the pass-code which the user has set on the phone. There's no way to extract the device-specific key from the phone, and there's no record of which phone uses which device-specific key. This is done on purpose, because if you could get that data, it'd make it much easier for anyone to extract your personal data from your phone.

Given that you can't get the device-specific key, then even if all of the data was extracted from the phone, you'd be faced with performing a brute-force attack on the encryption (which is impossible, see above).
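The general shape of that derivation can be sketched like this. To be clear, everything below is illustrative: Apple's real scheme runs inside dedicated hardware, and the names and parameters here are made up, not Apple's.

```python
import hashlib

# Stand-in for the per-device key fused into the chip at manufacture.
# On a real iPhone, this value never leaves the hardware.
DEVICE_UID = bytes(range(32))

def derive_key(passcode: str) -> bytes:
    # Many slow iterations make each pass-code guess expensive, and
    # mixing in the device key means a guess is only testable on this
    # one device -- you can't farm the search out to a datacenter.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(),
                               DEVICE_UID, 100_000)

key = derive_key("1234")
print(len(key), derive_key("1235") == key)  # 32 False
```

The consequence is that even a weak pass-code can only be attacked by asking the phone itself to check guesses, one at a time, at whatever rate the hardware allows.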
You don't need the device-specific key if you can guess the pass-code to the phone
Obviously, if the phone has a 4-digit pass-code, you only need to try at most 10,000 combinations. But you might not be able to do that either, depending on the phone's settings. One of the security settings you can set on the iPhone is for it to erase the data on the phone after 10 incorrect password attempts. The FBI seems to think that this option is enabled on Farook's iPhone.
Here's what the FBI says that they want Apple to do
The FBI wants Apple to produce a custom version of iOS (the iPhone software), and load it onto Farook's iPhone, to enable them to quickly try all of the possible pass-codes.

This custom software would:
- Disable the "erase after 10 incorrect codes are entered" feature (of course)
- Allow the FBI to feed possible pass-codes to the iPhone from a connected computer, rather than requiring some poor intern to enter each one by hand.
- Reduce the amount of time required between entering each code, so they can check them faster. That wouldn't matter if there was a 4-digit code set, so maybe Farook used a longer code.
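The arithmetic behind those last two items is simple. Assuming (purely for illustration, not Apple's actual figure) 80 milliseconds per hardware-enforced attempt:

```python
# Worst-case time to try every numeric pass-code of a given length,
# at an assumed 80 ms per attempt (an illustrative figure).
ATTEMPT_SECONDS = 0.08

for digits in (4, 6, 8):
    combos = 10 ** digits
    hours = combos * ATTEMPT_SECONDS / 3600
    print(f"{digits} digits: {combos:,} codes, about {hours:,.1f} hours")
```

A 4-digit code falls in minutes even at that rate, which is why the per-attempt delay only matters if Farook used something longer.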
Can Apple do it?
Apparently so, or at least Apple CEO Tim Cook hasn't claimed that they can't comply with the court order, just that they shouldn't be required to. It probably wouldn't be that much work, actually. Items 1 and 3 up there should be trivially easy to change, and #2 is probably not a huge amount of work for someone who's familiar with interfacing the iPhone to a PC. Somewhere between "one guy working over the weekend" and "two guys working on it for a week" is probably a reasonable guess.

Here's why Apple says that they shouldn't be forced to do this
It's a violation of their customers' privacy
Tim Cook says in his open letter that the FBI's request amounts to:

"The government is asking Apple to hack our own users and undermine decades of security advancements that protect our customers — including tens of millions of American citizens — from sophisticated hackers and cybercriminals."

It was much simpler for Apple to bypass the pass-code on earlier iPhone models, and they've expended substantial effort over the last few revisions to make it much harder for people to break into iPhones (and newer ones are even more secure than the phone in this case). This is valuable protection for individual customers' data, and has contributed in large part to reducing the number of phones stolen, since they can be locked in such a way that they can't easily be re-sold. This same cryptographic technology is also what keeps trade secrets stored on businesspeople's phones from being copied as a matter of course every time they travel to a foreign country.
This is not a normal subpoena, it's a special court order
Normally, law enforcement agencies will get a court order to compel a company or individual to turn over information or physical evidence that is relevant to a particular investigation. Apple has cooperated in previous investigations (and even in this specific case) with those sorts of orders. This is something else entirely.

Using the All Writs Act, an obscure 18th-century law, the FBI is trying to force Apple to engage in an activity that they wouldn't otherwise do (and which will have a negative impact on their business and customers). The All Writs Act has some significant restrictions on when it can be invoked, but there's remarkably little restriction on what a court can use it to order.
Once the FBI successfully uses the All Writs Act to force Apple to produce a custom version of iOS, they will have established a precedent where they can use it to compel Apple (or any other technology company) to take whatever actions they think might be helpful to aid any investigation they might launch. Lest you think I'm veering into conspiracy-theory territory here, consider the following:
Several statements that the FBI has made to the court and in the news are either extremely naive or deliberately misleading.
The FBI has made statements both in their court filings and in the press which are simply untrue. If it weren't for the fact that the people making these claims are actual forensics experts (or work with such experts), I'd be inclined to say that they just don't know what they're talking about. Given that they do work for the FBI, I think it's reasonable to hold them to a much higher standard of clueful-ness.

It's just for this one phone for this one investigation
I can't believe that anybody would think they could get this argument past a judge. Of course, if this tool exists, the FBI (and every other police/security agency in the US and every other country in the world) will require that this custom firmware be loaded on whatever iPhones they might have picked up in the course of an investigation. And it'd be so much easier if they could just hold on to the firmware themselves, and apply it to iPhones where they have a warrant. This isn't even a "slippery slope" argument; it's just what will obviously happen.

Several news articles have mentioned China, but really, any country that has a poor human rights record would obviously misuse this tool if it were available. In particular, the Gulf states have an atrocious record on human rights, and a track record of forcing technology companies to compromise customer security to make things easier on their state security agencies (see: Saudi Arabia and BlackBerry).
There may be critical information on this phone that leads to other terrorists that Farook was in contact with.
It's very unlikely that there's actually any information on this phone that'd be useful to the FBI investigation. First off, this isn't even Farook's personal phone. It's the work phone that was issued to him by his employer, the County of San Bernardino. I mean, you can never overestimate the intelligence of criminals, but what kind of idiot would plan their attack on a county facility using their county-supplied phone?

In any case, Farook destroyed his own personal phone, as well as a laptop and several other devices, before carrying out the attack. If he went to all that trouble to destroy evidence, it seems unlikely that he just plain forgot to destroy his work phone. It's much more likely that there was never anything remotely incriminating on it to begin with.
Secondly, the FBI already has access to backups of that phone covering everything up until a month before the attack. So they'd only potentially be getting information that was added to the phone in the month before the attack.
And finally, almost all of the relevant data you might get from that phone is already in the FBI's hands through other channels. They've already got access to the call records, emails, and other communications from that phone and Farook's other devices.
Apple can make this hack so that it only works on this one iPhone, eliminating any risk to Apple's other customers.
Well, sure, in a trivial sense. In a much more-significant sense, this is a content-free statement. In the trivial sense, Apple can of course add extra code to this custom version of iOS so that it only works on Farook's phone. But really, they can't even do that - they have to test it first, which means it has to be installable on at least two phones. And it'd obviously be trivial to change which phones it works on later, which brings us back to the original "it's only for this one phone" nonsense above.

Additionally, this runs into conflict with the requirements of the "All Writs Act", which is the justification for this order. The government isn't allowed to create an "undue burden" on Apple, and having Apple set up a whole new internal process for creating thousands of custom versions of iOS, one for every investigation in which it might be useful, is not a trivial thing.
Right now, Apple needs to be very careful about which OS updates it digitally "signs", which is the process that's needed to allow the software to be installed on customers' phones. There are hundreds or maybe thousands of Apple employees who have access to the tools and the source code to make changes in iOS. But that final step of signing an update is necessarily restricted, because the key for that process allows you to say to the world "this software is approved by Apple". They're presumably quite careful with that key. You can make the argument (and people have) that digitally-signing a file is essentially the same as a physical signature, and you shouldn't be able to compel someone to sign something digitally any more than you can legally compel them to sign a physical piece of paper.
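To make the signing step concrete, here's a toy sketch of how a digital signature works in principle: only the holder of the private key can produce a valid signature over an update, but anyone with the public key can verify it. The parameters below are illustrative Mersenne primes, far too small and too structured for real use, and this is nothing like Apple's actual signing infrastructure; it just shows the asymmetry that makes the signing key so sensitive.

```python
# Toy RSA-style code signing (illustration only - not real-world crypto).
import hashlib

p = 2**89 - 1                      # a known Mersenne prime (toy choice)
q = 2**107 - 1                     # another known Mersenne prime
n = p * q                          # public modulus
e = 65537                          # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def sign(update: bytes) -> int:
    """Only the holder of the private key `d` can produce this value."""
    h = int.from_bytes(hashlib.sha256(update).digest(), "big") % n
    return pow(h, d, n)

def verify(update: bytes, sig: int) -> bool:
    """Anyone with the public key (n, e) can check the signature."""
    h = int.from_bytes(hashlib.sha256(update).digest(), "big") % n
    return pow(sig, e, n) == h

update = b"hypothetical-os-update.bin"   # made-up file name
sig = sign(update)
assert verify(update, sig)                  # genuine update checks out
assert not verify(b"tampered-update", sig)  # any modification breaks it
```

The point of the asymmetry is exactly what makes the court order so fraught: verification can be public and widespread, but the ability to sign has to stay locked down.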
I don't know about Apple, but at one of my former employers, we kept our code-signing key, and the laptop with the software for signing our updates, physically locked up in a safe. The key was actually split into multiple parts, which were only accessible to certain people. Because if you can sign software, you can make it do anything you want. You can remove the DRM which is used to secure purchased content, steal all of a customer's personal data, anything.
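Splitting a key into parts, so that no single person can sign anything alone, is typically done with a threshold scheme such as Shamir's secret sharing. Here's a minimal Python sketch of the idea, using a toy prime field; real deployments use vetted libraries and hardware security modules, and I'm not claiming this is how any particular company does it.

```python
# Minimal Shamir secret-sharing sketch (toy field, illustration only).
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a toy secret

def make_shares(secret, threshold, count):
    """Split `secret` into `count` shares; any `threshold` of them recover it."""
    # Random polynomial of degree threshold-1 whose constant term is the
    # secret, so f(0) == secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, count + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 reconstructs the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789                              # stand-in for a signing key
shares = make_shares(key, threshold=3, count=5)
assert recover(shares[:3]) == key            # any 3 of the 5 shares suffice
assert recover(shares[2:5]) == key
```

With fewer than the threshold number of shares, the interpolation yields a value that tells you nothing about the secret, which is exactly why splitting a signing key this way is an effective safeguard.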
There's a larger issue at stake here - the very idea of privacy is under attack
Ever since the ratification of the Bill of Rights, there has been a back-and-forth argument in this country over the right balance between the citizen's right to privacy and the state's need for security. Since the development of effective commercial cryptography in the late 20th century, the debate has gotten significantly more-heated.

Privacy is a basic right, guaranteed by the Bill of Rights here in the US
The 4th Amendment to the US Constitution says:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
This controls the sorts of searches that the police (FBI, etc.) can perform. In particular, they need probable cause and a court-issued warrant. Over the last couple of centuries, that protection has been dialed back a bit, and US citizens are routinely searched without a warrant and without probable cause. But there are still limits, and if you, your home, or your stuff is unreasonably-searched, you can contest that in court (and you might even win).
When the constitution was written, the founding fathers could not have imagined the sort of surveillance technology we have today.
In 1789, if you wanted to have a private conversation with a friend or family member, you could take them aside into a room, or go for a walk in the woods, and if you couldn't see anybody else, chances are nobody would overhear what you had to say. With a wax seal on your mail, you could see whether it had been tampered with (or read by someone else) in transit.

Modern communication systems (email, telephone, chat) are much easier to listen in on, and when new technology comes along, it has typically taken a while for the Supreme Court to come to the conclusion that whatever new-fangled communication system you use, it's essentially the same as a letter, for legal purposes. Tapping phone lines without a warrant used to be totally legal. Same with intercepting email and other electronic communications.
The question of whether or not you can be compelled to unlock your own phone, even if it contains potentially-incriminating evidence, is possibly still open, despite the fact that that seems like an obvious violation of the 5th Amendment.
Strong encryption flips the balance of privacy back to the way things were in the 18th century
When you have access to strong encryption, you have privacy by default. This is as it should be. Until the early 1990s, most encryption that was available commercially was just terrible. Since the development of the World Wide Web, the level of sophistication of the cryptography available to individuals and commercial users has vastly improved.

The US government has fought the availability of effective encryption for decades
After World War II, a war which the Allies won in part due to their ability to decrypt German secret messages, the US government set up the NSA to ensure that they had a lead in cryptographic technology. And until the growth of academic cryptographic research in the 1980s and 1990s, their expertise was unmatched. The NSA has a weird double mission. On the one hand, they're supposed to protect US military and civilian communications from foreign spies. On the other hand, they're supposed to develop ways to break encryption used by other organizations, to support US intelligence-gathering. When it comes to commercial encryption, these goals are directly in conflict.

When the first truly effective encryption systems began to become commercially available, the NSA tried to preserve their ability to listen in on communications by restricting the length of keys that could be used in exported software. Eventually, it became obvious that this would only disadvantage US software makers, and the restriction was lifted.
During the Clinton administration, the NSA proposed Clipper, a cryptography system that would make it easy for law enforcement to listen in on communications (with a warrant, at least in principle), but would be very difficult for foreign governments, hackers, and others to break. It turned out to have a number of fundamental flaws, and was pretty quickly killed.
More-recently, the NSA has quite possibly been caught inserting a flaw into a security standard that they helped develop.
Law enforcement and security agencies now have much greater ability to collect data that's not specifically protected with encryption
Despite better security of communications overall, the security apparatus has continued to push the boundaries of what information they can gather without a proper warrant. Here are a few recent(ish) examples.

The FISA court
In order to allow Federal law enforcement and intelligence agencies to obtain search warrants without having to publicly disclose what they're searching for, and whom they're searching, Congress created a parallel court system, the Foreign Intelligence Surveillance Court. This court provides search warrants, and has been involved in issuing court orders to compel commercial companies to cooperate with the NSA in collecting data, including information on US citizens, which the NSA is explicitly barred from collecting.

Telephone metadata collection
The NSA has been, for many years, collecting telephone metadata (who's calling whom) for essentially every telephone call placed inside the United States (and several other countries). This only came to light because of Edward Snowden's whistle-blowing, because, of course, they got the authority for that from the secret FISA court.

StingRay
The StingRay system is basically a "fake" cell tower that broadcasts a signal that causes every mobile phone within range to report its location. They can be used to track the location of mobile-phone users in bulk, and can also apparently be used to intercept calls. These systems have been provided to local police forces via grants from the Department of Homeland Security, and they're used in a variety of ways that are at best quasi-legal (in that they haven't been specifically declared illegal yet).

Automated number plate readers
These machines are just about everywhere. They're used to automatically collect tolls, the police use them to search for cars associated with wanted criminals, repo men use them to look for cars whose owners have stopped making payments, etc., etc. My local city has them mounted on the parking-enforcement golf carts, so they can just cruise down the street and collect the location and license plate number of every single car parked anywhere in the city.

And again, there's no law telling either the police or the private companies what they can or can't do with this information, so it gets used (and mis-used) for all sorts of things. The police have no need and no right to know where my car is at all times, as long as I'm actually following the parking rules.