Sunday, March 21, 2021

He went that-a-way!

This blog has moved to a new WordPress site, here. All of the old posts have been migrated there, and I'll be writing there going forward. I just got tired of constantly fighting with the Blogger engine.

I hope to see all 6 of you subscribers there.

-Mark


Monday, November 30, 2020

Fact-checking some Apple Silicon exuberance

The first Apple Silicon Macs are out in customers' hands now, and people are really excited about them. But there are a couple of odd ideas bouncing around on the Internet that are annoying me. So, here's a quick fact check on a couple of the more breathless claims that are swirling around these new Macs.


Claim: Apple added custom instructions to accelerate JavaScript!

False.

There is one instruction on the M1 which is JavaScript-related, but it's part of the standard ARM v8.3 instruction set, not a custom Apple extension. As far as I know, there are no (documented) instruction set extensions for M1 outside of the ARM standard.

The "JavaScript accelerator" instruction is FJCVTZS, or "Floating-point Javascript Convert to Signed fixed-point, rounding toward Zero". You might well wonder why this *one* operation is deserving of its own opcode. Well, there are a couple of things at work here: All JavaScript numbers are stored as double-precision floats (unless the runtime optimizes that out), and if you do something as simple as applying a bit mask to a number, you've got to convert it from the internal representation to an integer, first.


Claim: Faster reference counting means the M1 can do more with less RAM!

False.

This one is just a baffling combination of ideas, mashed together. David Smith, an Apple engineer, posted a thread on Twitter about how the M1 can do a retain/release operation much faster than Intel processors can. Retain/release is even faster in Rosetta-translated code running on the M1 than it is in native code running on an actual Intel chip.

This is a pretty awesome micro-optimization, but it got blown all out of proportion, I think largely due to this post on Daring Fireball, which takes the idea that this one memory management operation is faster, and combines that with a poorly understood comparison between Android and iOS from this other blog post, and somehow comes to the conclusion that this means that 8GB of RAM in an M1 system is basically equivalent to 16GB on Intel. There's a caveat in there, which I guess isn't being noticed by most people quoting him, that if you "really need" 16GB, then you really need it.

Yes, reference counting is lighter on RAM than tracing garbage collection, at a potential cost in additional overhead to manipulate the reference counts. But that advantage is between MacOS and other operating systems, not between ARM and Intel processors. It's great that retain/release are faster on M1, but application code was not spending a significant percentage of time doing that. It was already really fast on Intel. If you actually use 16GB of RAM, then 8GB is going to be an issue, regardless of Apple's optimizations.
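To be concrete about what actually got faster, here's a minimal sketch of what a retain/release pair boils down to: an atomic increment and decrement of a reference count. This is not Apple's actual objc_retain/objc_release implementation, just an illustration. Making that atomic read-modify-write cheaper is a speed win; it doesn't change how much memory anything occupies.

```c
#include <stdatomic.h>

/* Illustrative sketch of reference counting, not Apple's runtime code. */
typedef struct {
    atomic_uintptr_t refcount;
    /* ... object payload ... */
} refcounted_t;

static void retain(refcounted_t *obj)
{
    /* an uncontended atomic increment - this is the operation the M1
       performs unusually quickly */
    atomic_fetch_add_explicit(&obj->refcount, 1, memory_order_relaxed);
}

static void release(refcounted_t *obj, void (*dealloc)(refcounted_t *))
{
    /* atomic decrement; free the object when the last reference drops */
    if (atomic_fetch_sub_explicit(&obj->refcount, 1,
                                  memory_order_acq_rel) == 1) {
        dealloc(obj);
    }
}
```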


Claim: The M1 implements special silicon to make emulating Intel code faster!

True.

Amazingly, this one is basically true. One of the most surprising things about Rosetta2 is that the performance of the emulated code is really, really good. There's a lot of very smart software engineering going into that of course, but it turns out that there is actually a hardware trick up Apple's sleeve here. 

The ARM64 and x64 environments are very similar - much more so than PowerPC and x86 were, back in the day. But there's one major difference: the memory model is "weaker" on the ARM chips. There are more cases where loads and stores can be rearranged. This means that if you just translate the code in the "obvious" way, you'll get all sorts of race conditions showing up in the translated code that wouldn't have happened on the Intel processor. And the "obvious" fix for that is to synchronize on every single write to memory, which will absolutely kill performance.
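Here's the classic illustration of the difference: a message-passing pattern that's safe under the x86 memory model but needs explicit ordering on ARM. This is a sketch of the underlying problem, not of how Rosetta actually handles it:

```c
#include <stdatomic.h>

/* Illustrative sketch of why store ordering matters. */
int payload;
atomic_int ready = 0;

void producer(void)
{
    payload = 42;
    /* On x86, stores are not reordered with earlier stores, so a plain
       store of 'ready' would be enough. On ARM's weaker model, without a
       release barrier the 'ready' store may become visible before
       'payload' does. */
    atomic_store_explicit(&ready, 1, memory_order_release);
}

void consumer(void)
{
    if (atomic_load_explicit(&ready, memory_order_acquire)) {
        /* With the acquire/release pairing, reading 42 here is guaranteed.
           With relaxed ordering on ARM, it is not. */
        int value = payload;
        (void)value;
    }
}
```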

Apple's clever solution here is to implement an optional store ordering mode on the Performance cores on the M1, that the OS can automatically turn on for Rosetta-translated processes, and leave off for everything else. Some random person posted about this on GitHub, along with a proof of concept for enabling it on your own code. Apple hasn't documented this switch, because of course they don't want third-party developers to use it.

But it's a very clever way to avoid unnecessary overhead in translated applications, without giving up the performance benefits of store re-ordering for native code.

Claim: The unified memory on M1 means that you don't need as much memory, since the GPU and CPU share a common pool.

False.

This one is kind of hard to rule on, because it depends a lot on how you're counting. If you compare to a typical CPU with a discrete GPU, then you will need less total memory in the unified design. Instead of having two copies of a lot of data (one for the CPU, and one for the GPU), they can just share one copy. But then again, if you've got a discrete GPU, it'll have its own RAM attached, and we normally don't count that when thinking about "how much RAM" a computer has.

An 8GB M1 Mac is still going to have the same amount of usable RAM as an 8GB Intel Mac, so again, this is not a reason to think that an 8GB model will be an adequate replacement for your 16GB Intel Mac.

Claim: Apple's M1 is an amazing accomplishment

True!

Leaving aside some of the wilder claims, it is true that these processors represent an exciting new chapter for Apple, and for the larger industry. I'm excited to see what they do next. I have some thoughts about that, for a future post...

Sunday, October 11, 2020

What I learned by NOT playing D&D

I really loved playing Dungeons & Dragons as a kid, but I really haven’t played much since becoming an adult. And I haven’t run a game in this century. Every now and then, I’d talk with Yvette about it, and she’d say, “We should write up an adventure, and run it for our friends!”. It always sounded good, but I had doubts - doubts that I could run a game effectively, doubts that I could write an adventure that would hold people’s interest, doubts that people would actually show up regularly…


Then, something pretty amazing happened. One of Yvette’s friends asked her, “do you know anybody who’s into D&D? My daughter watched a game being played at the library, and she wants to play”. Which was hilarious, since Yvette met this friend through her brother, who was someone she played D&D with, back when they were all kids.


Yvette asked me if I’d be willing to run a game for the girl, M, and her friend, K. And for whatever reason, I said yes. So since late July, I’ve been running a regular weekly D&D game for my wife, two teenage girls, and a couple of sometimes players, including another teenager (a boy), and one of OUR friends (about my age), who’s someone we’ve played games with at gaming conventions.


And it’s gone really well. I was very stressed before the first few sessions, but it’s gotten a lot easier, and I think I might actually be pretty good at this. I’m definitely much better at it than I ever was as a teenager.


But I haven’t run a game, or even played much, in the last few decades. How is it possible that I’ve gotten better, without actually practicing the craft? It turns out I have been learning how to do this, while not actually doing it.

Downtime and “Leveling Up”

D&D has this idea of “character levels”, which represent how good your character is at being whatever kind of adventurer they are. They go out, kill some monsters, intimidate some guards, steal some ancient artifacts, and they get better at what they do. Except they don’t normally get better at adventuring while they’re adventuring. They go out, have an adventure, and then go back to town, where they study and train, and then they go up a level. They call this “downtime”, and it’s part of the natural ebb and flow of getting better at something. I think this works in the real world sometimes, too.

How I leveled up during my 40 years in the desert

While I haven’t actually tried to run a game in many years, I have been learning a lot of new skills in the intervening years which just so happen to make me better at the things that were hard for me when I tried to do this before.


I always dreaded coming up with characters, locations, and plot lines, for each game. It seemed like an impossible task to create a whole world from scratch, populate it with people, and write a story that takes place there, especially when the players might easily take off and do something I hadn’t planned for in advance.

A little bit of research

I’ve always been interested in how games “work”, and I’ve bought a lot of rulebooks for games that I’ve never played. And I’ve learned a lot. Looking at how D&D evolved from the versions that I played, to the pinnacle of complexity it eventually reached, and then through two subsequent ground-up redesigns, has really taught me a lot about how these games work under the surface.


And then there was Fate. The Fate RPG is sort of the benchmark for a modern, “rules light” RPG design. It’s very, very different from D&D in terms of design, and comes in a cute little booklet that lays out the rules quickly, and then goes into great depth on how to run a collaborative storytelling experience. It would not be an exaggeration to say that it opened my eyes to a completely different way of playing these sorts of games. But before I could hear what Fate had to teach me, I needed to be in the right headspace to be receptive. Me from 20 years ago would have been totally baffled by all of the things “missing” from the Fate rules.


Letting everyone contribute to the story

“Aspects are always true” - Fate rulebook

Fate has this idea of “aspects”, which are statements about a character, a location, or an object. Players define aspects for their characters, and anybody involved in a scene can create an aspect on either the location, or any of the characters or objects in the scene. If you’re in a fight in a warehouse, and one of the players throws a molotov cocktail, that might add “the building is ON FIRE” as an aspect of the scene. If a player says that their character has an aspect of “my family is my world”, then that naturally brings up the question of what sorts of complications might arise if someone threatens their family. 


This turns out to be a fantastically useful tool in D&D, too. When a player tells me something about themselves, I can take it as true, and start working parts of it into the story. We might run into that older brother of theirs, or perhaps the group they’re on the run from will send an assassin after them. Who knows?

Improv, and the power of Yes, and…

One of the guiding principles of Improv, is “Yes, and…”. When you’re in a scene, you accept whatever your partner says as true, and you work from there. You don’t negate what they say, and you don’t try to steer them back onto what your original idea of the scene was. If someone addresses you as “mother”, then you’re their mother, and you go from there.


It’s pretty obvious, at the lowest level, that Roleplaying Games would involve...playing a role. But because of the interactive nature of playing a cooperative storytelling game with a group of other people, you can’t actually practice your lines ahead of time. You have to react, and you have to take what the other players say, and build on top of that.


I have taken a couple of classes in improvisational acting from the local Santa Barbara Improv Workshop, and that’s been super-helpful in teaching me to naturally react to the other players. If I’m playing a religious fanatic Lizard-person, and one of the player characters asks me a question I haven’t prepared an answer for, I can just jump into the headspace of that character, and answer as the character I’m playing would answer.

Don’t be afraid to steal from your collaborators

"Good artists borrow, great artists steal" - Pablo Picasso

The other interesting thing about collaborative storytelling is that it works, even when some of the people involved don’t know that they’re collaborating. When the players are speculating about the Big Bad Guy’s evil plan, or what they think the motivations of a minor character might be, I can listen in, and just “borrow” those ideas, and weave them into the story.

And they loved it!

After all that, you may be wondering how the game went. Everybody had a great time, they loved the story, and they really want to play again, after we take a short time off. A bunch of seeds have been planted to connect the characters’ backstories to places and people in the world, I’ve introduced a bunch of hopefully recurring characters, and at least one player is starting to see conspiracies everywhere, which is always fun.

What have you leveled up in?

Maybe now is a good time to think about something you “used to do”, and evaluate whether now you’re in a position to apply some of the things you’ve learned in the meantime? See if some of your previous creative blocks are no longer relevant. You never know, maybe you “leveled up” when you weren’t watching!


Sunday, August 09, 2020

Good takes and bad takes on Apple Silicon


There are a lot of people out there that seem to be clinging to some weird ideas about what the Apple Silicon transition is going to look like, and what the Apple Silicon team "can do", in terms of what they can deliver.

Good Takes

First, someone who seems to "get it" pretty well, Rene Ritchie. Here are two of his very clear and informative videos about why "Apple Silicon" is not just "ARM", and what that means for the Mac going forward:

Rene Ritchie on YouTube:


The key takeaway here is that many of the huge advantages that Apple Silicon will bring to the Mac come from everything else that Apple will integrate into the chips, not just from using their own CPU cores. Having said that, I think there's a lot of confusion about the likely performance of those cores. More on that below.


Bad takes

I'm not going to link to blog posts, tweets, and YouTube videos of the bad takes, for two reasons. Primarily, because bagging on people who are coming at this from a position of ignorance seems kind of mean. But also, because there are so many bad takes for each of these positions that there's no point in singling out any particular instance of them. Having said that, here are some commonly repeated "facts" about Apple Silicon that I think are really wrong-headed, and why I think so.


Claim: ARM Macs will never be as fast as Intel Macs

You see a bunch of variations on this, from "obviously, the ARM processors will be for the low-end, but they'll continue to use Intel for the Mac Pro and the MacBook Pro", to "Obviously, the low-end Macs will transition first, and we won't see 'pro' Macs until late next year, at the earliest".

My take: You will see 'Pro' Apple Silicon Macs this year
Apple's flagship product is the MacBook Pro. It's the product that everybody wants, and also the one that a lot of people buy, especially "Pro Users", whatever that name might mean. Apple will definitely not carve out solely the low end of the Mac range for their new Apple Silicon processors, because the perception of "Apple Silicon means low-end" is not something they want to have stick to the new product line. 

In addition, based on what we know, and can extrapolate from, the first Apple Silicon processors are likely going to be substantially faster than the Intel processors in the existing laptop range. In single-core performance, the A12Z is already on par with the Intel processors in the MBP line. It's really hard to say what the performance improvement will be from the A12Z to the first Apple Silicon Mac chip, but my best guess is somewhere between 50% and 100% improvement over the A12Z. At that speed, those Apple Silicon chips will just wipe the floor with the current Intel processor MacBook Pros in single-core speed. Beyond that, it's mostly a question of how many "performance" cores go into that processor.


Claim: ARM Macs will not support discrete GPUs

This is apparently based on a single slide from a single WWDC 2020 presentation: Bring your Metal app to Apple Silicon Macs. Based on seeing "Intel, Nvidia, and AMD GPUs" under the Intel-based Mac heading on one side of the slide, and "Apple GPU" on the other side, under Apple Silicon, some people have apparently concluded that discrete GPU support is not going to be available on Apple Silicon.

My Take: We really don't know, but it seems unlikely that discrete GPUs will never be supported
The point of the presentation at WWDC was very much not "we won't support discrete GPUs on Apple Silicon". The point of the presentation was "you definitely don't want to assume that 'integrated equals slow', when dealing with Apple Silicon". It's likely that Apple will still have discrete GPU options on some of their Pro devices.

However, I would not be at all surprised if the first few Macs released didn't have discrete GPUs, because the integrated GPU will have better performance than the top laptop GPUs currently available. We do know that Apple Silicon Macs will have Thunderbolt and PCIe, so they will have the hardware capability to support discrete GPU configurations, including external GPUs. It's just a question of drivers, at that point. Apple will either write the needed drivers, or pay the GPU vendor to write them, if they're needed to achieve a particular performance level.


Claim: Much existing software will not come to Apple Silicon Macs soon, or indeed at all

This is often tied to the argument that "x86 processors are just better for 'heavy processing' than ARM, which are optimized for power efficiency". Given that assumption, they then say you won't see Photoshop, or Logic, or Premiere, or whatever other piece of software on ARM Macs, because they won't be fast enough. A different argument is that the effort of porting will be too high, and so third-party developers will not bother porting to the Apple Silicon architecture.

My Take: Building for Apple Silicon is pretty darn easy, and Rosetta2 is better than you think
I talked about this in a previous post, but this transition is going to be much less painful for most developers than the PPC->Intel transition was, or in fact than the transition from x86-32bit to x86-64, which a bunch of developers just went through for Catalina. If an app runs on Catalina, it'll run on Apple Silicon, eventually.

I need to be careful about what I say in this next section, because I do have the Apple DTK, and it came with a fairly restrictive NDA that says not to talk about performance or benchmark results. But I have run a bunch of third-party software under Rosetta 2, and other than a compatibility issue related to page size that's described in the DTK release notes (which you may not be able to read without a developer account), almost everything I've tried runs perfectly well. It's actually hard to tell the difference between running under Rosetta and running something native. A Rosetta process does use more CPU power than a native one, and translated apps are slow to start the very first time, but other than that, it's completely seamless.
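The page-size issue, as I understand it, is the kind of thing that bites code which hard-codes the 4KB page size that x86 Macs have always had (Apple Silicon natively uses 16KB pages). A trivial sketch of the defensive approach:

```c
#include <stdio.h>
#include <unistd.h>

/* Code that assumes 4KB pages (a safe bet on x86 Macs) can misbehave on
   Apple Silicon, which natively uses 16KB pages. Querying the page size at
   runtime sidesteps the whole issue. Illustrative sketch only. */
int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    printf("page size: %ld bytes\n", page);

    /* e.g. round a buffer size up to a whole number of pages */
    size_t want = 100000;
    size_t rounded = (want + (size_t)page - 1) & ~((size_t)page - 1);
    printf("%zu rounds up to %zu\n", want, rounded);
    return 0;
}
```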

Someone posted Geekbench scores from a DTK, with Geekbench running under Rosetta, and the performance is 20-30% slower than native (as compared to the iPad Pro running the iOS version natively). Assuming that holds generally, and that the Apple Silicon Mac processors will be substantially faster than the existing Intel Mac processors, I wouldn't be too surprised if for many users, running their existing software under Rosetta would still be a performance improvement over their current Mac. Once native versions of these apps become available, there will be no contest.


Claim: The Surface Pro shows that ARM isn't ready for the desktop

The Surface Pro is an obvious comparison to make, because it's an "ARM laptop", running an ARM version of Windows. They're great, for what they are. But they haven't been a great success. The lack of software support, and disappointing performance when emulating x86 code, has been used to justify skepticism of the viability of Apple Silicon Macs.

My Take: The Surface Pro is a great example of the key differences between Apple and Microsoft.
From a third-party developer's perspective ARM Windows is this weird off-shoot of the main Windows product line. Even if you want to support it, it's a much smaller market than the x86 mainstream Windows family, and so the payoff for the porting work is uncertain. When Apple switches to Apple Silicon, they will completely switch over. At the end of the two year transition, every new Mac will be running on Apple Silicon. If you want to be in the Mac software market, you will need to support Apple Silicon.

It turns out that there is hardware support for Total Store Ordering, or TSO, built in to Apple Silicon processors. This was somehow discovered by a third party, and they've subsequently released a proof-of-concept for enabling TSO on a per-thread basis on the DTK. The relevance here is that TSO is one of the major differences between the memory model of Intel processors and ARM processors. By providing this switch, Apple have eliminated a huge source of synchronization slowdowns (and potentially bugs) when translating x86 code to ARM code in Rosetta. This is a hardware feature implemented just to support a particular piece of software, and a great illustration of the advantages Apple gets from controlling the entire stack from hardware to applications.


Claim: Geekbench is not a realistic benchmark, and doesn't reflect real-world performance

This is a fun one. Since Geekbench shows the A12Z as being on par with current Intel laptop chips, it must be that the benchmark is wrong, or intentionally skewed in Apple's favor.

My Take: Benchmarking is hard, but Geekbench is at least trying to be fair
You can see descriptions of the Geekbench micro-benchmarks here and here. There's nothing in here that would obviously bias these tests towards or away from any particular processor. They're artificial benchmarks, but are built up of things that real applications actually do - image compression, database operations, etc, etc.

Conclusion

The first round of Apple Silicon Macs are going to be setting the conversation for the next year about the wisdom of Apple's rather risky decision to abandon Intel. Apple obviously knows this, and I would not be at all surprised if the first Apple Silicon MacBook Pro (or whatever they call the pro laptop, if they rename it) will be the fastest Mac laptop yet. And the desktop Macs will also be an impressive upgrade over their current counterparts.

Tuesday, June 23, 2020

What Apple Announced at WWDC 2020


It looks like I got some things right, and some things wrong, in my previous post. Let’s look at what Apple actually announced.

Mac with Apple Processors, across the line

Yes, they really are transitioning the whole Mac line to Apple’s own processors. The timeline they announced is “within 2 years” to transition the whole line. I still expect that they’ll end up well within two years, maybe even closer to a year. Similar to how the Intel transition was announced as “a year and a half”, but ended up being shorter.

The first ARM Macs will be available this year. This was a surprise to a lot of pundits, but it makes total sense to me, given that several major third-party software vendors (Adobe, Microsoft, and Epic) are on-board with the switch, and have their software working already. I was expecting both Adobe and Microsoft to show working pre-release software, just because it really is that easy to move a modern Mac code base to ARM, and they’ve both recently gone through a fairly-painful 64-bit transition for Catalina.

Rosetta 2, to run existing Intel Mac Applications

The big surprise for me is that they did include x64 compatibility in Big Sur. I’m happy to be wrong about that, it’s obviously good news for users. I just figured that the chance to make a clean break would be very tempting to Apple.

Rosetta 2 uses a combination of translation at install time for applications, and translation at load time for plugins. I think ahead-of-time translation is a good tradeoff: you spend some extra time up front, in exchange for a higher-quality translation. JIT translation of machine code is hard to balance between performance and latency.

The Rosetta 2 documentation is pretty sparse right now, but I did get the impression that x64-JIT compilers are supported in Rosetta apps, which is interesting. Presumably, when you make the syscall to make a segment of code executable, they translate it then. Pretty slick, though I wonder how much it’ll cause performance hiccups in, for example, web browsers, which rely heavily on JIT to get adequate performance.
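For anyone unfamiliar with how JITs interact with the OS, here's the usual shape of the dance, in rough POSIX C. This is a sketch of the general pattern, not of how Rosetta actually hooks in; real macOS JITs also involve MAP_JIT and pthread_jit_write_protect_np:

```c
#include <sys/mman.h>
#include <string.h>
#include <stdint.h>

/* The usual shape of a JIT: write generated machine code into a buffer,
   then ask the OS to make it executable. Presumably it's at this
   protection change that Rosetta gets its chance to translate the newly
   generated x86-64 code. Illustrative sketch only. */
void *emit_code(const uint8_t *code, size_t len)
{
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return NULL;
    memcpy(buf, code, len);                    /* emit the instructions */
    if (mprotect(buf, len, PROT_READ | PROT_EXEC) != 0) {  /* W^X flip */
        munmap(buf, len);
        return NULL;
    }
    return buf;   /* the caller can now jump into this buffer */
}
```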

Running iPad and iPhone applications without modification

Another thing that seems to have taken a bunch of people by surprise is that ARM Macs will be able to run iPad and iPhone apps without any modifications. This is a logical outgrowth of the Catalyst software that lets you rebuild iPad apps to run on the Intel version of MacOS. You just don’t need to recompile on the ARM Mac, because they’re already the same processor architecture.

Just like existing Catalyst applications, it’s possible for developers to add Mac-specific features (e.g. menus) to create a better user experience on the Mac. This really does make UIKit (or SwiftUI) the “obvious” choice for new application development.

Porting assistance for Open-Source apps and frameworks

An interesting news item that came out of WWDC is that Apple is going to make patches available to a number of Open Source projects, to help get them up and running on ARM Macs. This includes Chromium, Node.js, OpenJDK, and Electron, which should mean that those projects won’t lag far behind.

So, what’s it all mean?

To me, this seems like just about entirely a win. The new Macs will be faster, use less power, will have a larger software library (all those iOS apps), and more capabilities.

Some software will get “left behind” in the transition, but not very much, at least in the short term. Running software under Rosetta will likely not be great, performance-wise, but it’ll be adequate for a lot of uses.

There is one major downside, for a certain subset of users

No x64 processor means no Boot Camp support for booting into Windows, and no virtualization software to run your other x86 operating systems. I have a friend who uses Docker extensively as part of his developer workflow, and he’s just going to be out of luck, as far as using a Mac goes.

There is virtualization support on the ARM Macs, but it’s for virtualizing other ARM operating systems. You’ll be able to run an ARM Linux OS inside Parallels (for example), but if your workflow right now includes running code in x64 Windows or Linux, the ARM Macs won’t be a good fit.

What about Games?

It seems like every time a major new MacOS version comes out, they claim it’s going to be “great for games”, but the games mostly don’t actually come.

Having Unity supporting ARM Macs will definitely make it easier for anyone already using Unity to support the new Macs. But the current version of Unity already supports Mac, and still a lot of games never make it there, so I don’t think that’s a win. If anything, it’s a loss, since anybody who wants to support the Mac at least needs to test with both Intel and ARM Macs.

While a lot of big-name games never make it to the Mac, there’s actually quite robust support for the Mac among small indie games developers on Steam and Itch. Again, some of these folks will look at the cost to support both kinds of Macs, and decide it’s not worth it, so we’ll probably lose a few there, as well.

But then there are iPad games. A lot of iPad games are “casual games”, which is just fine by me, since I play more of that sort of thing than I do first-person shooters. And given that iPad games will, by and large “just work” on the new Macs, we may see more iPad games developers moving a bit more upscale. It’ll be interesting to see.

Will I buy one?

We’ll see what gets announced, but I expect that I will, whenever the “pro” laptops are available. 

Saturday, June 13, 2020

ARM Macs are coming, and faster than you think

ARM Macs and transition timeframes

(note: This is a lightly-edited version of a post originally published on June 13th, 2020)


We all knew this was coming. In fact, some of us have been expecting it for years. Various rumor outlets are saying that Apple will announce at WWDC that they're transitioning the Macintosh line from using Intel's processors to Apple's own processors, which are based on the ARM architecture.


A bunch of people have written extensively on this rumor, but I have a few thoughts that I haven't seen others concentrate on.


One thing you see a lot of disagreement about online is how long it'll take for Apple to convert its whole lineup of Macs to use its own processors, or if it even will. I've seen people say that they think they'll announce a single model of ARM Mac, then over the course of 2-3 years, move all of the product line over to ARM. I've even seen people predict that they'll keep the "Pro" line of computers on x86 for the foreseeable future, and only convert the portable line to ARM.


A case can be made for those positions, but here's what I think: If Apple announces a transition to ARM at WWDC, it'll happen surprisingly quickly. I wouldn't be at all surprised if the first ARM Macs ship before the end of 2020, and the whole line is switched over before the end of 2021. That seems like a pretty far out-there prediction, compared to the "consensus" view, so let's take a look at the previous transitions, and how this one is different.

We've been here before, but then again, this is very different

This will be the third major processor transition for the Macintosh (and the 5th major software transition overall). Originally, the Mac used the Motorola m68k processor family. After 10 years, the m68k family was failing to make regular improvements in performance, and Apple started to look at other options, finally settling on the PowerPC. They moved the Mac products from m68k to PPC over the course of about 18 months. Years later, they transitioned from PowerPC to Intel, over the course of about 12 months. And now, we're apparently on the cusp of another processor transition. How will this one turn out? And most importantly: WHY NOW?


Transition 1: The m68k to PowerPC transition

"Any sufficiently-advanced technology is indistinguishable from magic"

– Arthur C. Clarke


This transition was very difficult, both for Apple and for third parties. At the time that Apple announced the change, they were still using what we now call the Classic MacOS. Large parts of the operating system, and the applications that ran on it, were written in Assembly, with an intimate understanding of the hardware they ran on.


Consequently, Apple developed a processor emulator, which would allow existing m68k code to run on the PowerPC without any changes. You could even have an application load a plugin written for the other architecture. The new PPC version of MacOS maintained a shadow copy of all its internal state in a place where 68k applications could see (and modify) it - that was the level of compatibility required to get anything to work. A heroic effort, and it paid off - most software worked out of the box, and performance was "good enough" with emulated code, because the PPC chips were much faster than the m68k chips they were replacing.


The downside of that sort of transition is that it takes many years to complete. There was relatively little pressure on the third parties to update their applications, because they ran just fine on the new models. Even the MacOS itself wasn't completely translated to native code until several years later. 


Transition 2: The MacOS X transition

“If I need to make that many changes, I might as well drop the Mac, and go to Windows”

– Some Mac developer, a Halloween party in 1999


A few years after the PPC transition, Apple announced MacOS X, and software developers were faced with another transition. Originally, OS X was intended to be a clean break with Classic MacOS, with an all-new underlying operating system, and a brand new API, Cocoa (based on the OPENSTEP software which came in with the NeXT acquisition).


Major developers were (understandably) not enthusiastic about the prospect of rewriting the majority of their existing applications. Eventually, Apple caved to the pressure, and provided Carbon, a "modern" API that kept much of the same structure, but removed some of the more egregious aspects of Classic MacOS programming. Apple made it clear that they considered Carbon a transitional technology, and they encouraged developers to use Cocoa. The reaction from the larger developers was pretty much "meh." Quite a few smaller long-time MacOS developers enthusiastically embraced the new APIs though, appreciating the productivity boost they provided.


A footnote to this chapter of the saga is that the "Developer Preview" versions of Rhapsody, the first Mac OS X releases, actually had support for running the OS on Intel-based PC hardware. That didn't survive the re-alignment which gave us Carbon, and MacOS X 10.0 shipped with support for PowerPC Macs only. 


Things were pretty quiet on the Macintosh front for a few years. New versions of OS X came out on a regular schedule, and Apple kept coming out with faster and better PowerBooks, PowerMacs, iBooks, and iMacs. And then, suddenly, the PowerPC processor line had a few unexpected hiccups in the delivery pipeline.


Transition 3: The Intel transition

“Wait, you were serious about that?”

– Carbon-using developers, overheard at WWDC 2005


The PowerPC processors were looking less and less competitive with Intel processors as time went by, which was an embarrassment for Apple, who had famously built the PowerPC advertising around how much faster their chips were than Intel's. The "G5" processor, which was much-hyped to close the gap with Intel, ran years late. It did eventually ship, in a form that required liquid cooling to effectively compete with mainstream Intel desktop PCs. The Mac laptop range particularly suffered, because the low-power laptop chips from Motorola just...never actually appeared.


And so, Apple announced that they were transitioning to Intel processors at WWDC 2005. I was working in the Xcode labs that year, helping third-party developers to get their code up and running on Intel systems. I worked a lot of "extra shifts", but it was amazing to see developers go from utterly freaked out, to mostly reassured by the end of the week.


For any individual developer, the amount of “pain” involved in the transition was variable. If they’d “kept up with” Apple’s developer tools strategy in the years since the introduction of Mac OS X, no problem! For the smaller indie developers who had embraced Xcode, Cocoa, and all of Apple's other newer framework technology, it actually was a trivial process (with one exception). They came into the lab, clicked a button in Xcode, fixed a bunch of compiler warnings and errors, and walked away with a working application, often in just an hour or so. 


For the developers with large Carbon-based applications built using the Metrowerks compiler, it was a real slog. Because of CodeWarrior-specific compiler extensions they'd used, different project structures, etc, etc,  it was hard to even get their programs to build in Xcode.


The exception to the "it just works" result for the up-to-date projects was any kind of external I/O. Code that read or wrote binary files, or communicated over a network, would often need extensive changes to flip the "endianness" of various memory structures. Endianness is something you generally don’t need to think about as a developer in a high-level language, especially if you're only developing for one platform - and the big-endian PowerPC just happened to use the same byte order as the Internet does, while Intel chips do not. Luckily, these changes tended to be localized.
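A sketch of the kind of fix that was needed (illustrative, not any real product's file format): declare the on-disk byte order explicitly and swap at the I/O boundary, so the rest of the code never has to care.

```c
#include <stdint.h>
#include <arpa/inet.h>

/* Binary formats written on big-endian PowerPC Macs suddenly needed
   explicit byte swapping on little-endian Intel. Picking a fixed byte
   order at the I/O boundary localizes the fix. Illustrative sketch. */
struct record_on_disk {
    uint32_t magic;
    uint32_t length;
};

/* Convert from the file's big-endian layout to host order after reading. */
static void record_from_disk(struct record_on_disk *r)
{
    r->magic  = ntohl(r->magic);   /* no-op on big-endian PPC,          */
    r->length = ntohl(r->length);  /* a byte swap on little-endian Intel */
}
```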


Transition 4: The 64-bit transition

“You can't say we didn't tell you this was coming..."

– Some Apple Developer Tools representative (probably Matthew), at WWDC 2018


The first Intel-based Macs used processors that could only run in 32-bit mode. I consider this one of Apple's biggest technology mistakes, ever. They should have gone directly to 64-bit Intel from the get-go, though that would have required waiting for the Core 2 Duo processors from Intel, or using AMD chips, or doing the iMac first and the notebooks last.


Regardless, after the first year, all Macs were built with 64-bit capable processors, and MacOS started supporting 64-bit applications soon after. Technically, the previous versions of Mac OS X supported 64-bit applications on the "G5" processors, but that was only available in the Power Mac G5, and very few applications (other than ports from workstation hardware) bothered to support 64-bit mode.


Unfortunately for the folks hoping to see a glorious 64-bit future, there was again very little incentive for developers to adopt 64-bit code on MacOS. One of the advantages of Intel-based Macs over the PowerPC versions is that you could reuse code libraries that had been written for Windows PCs. But, of course - almost all Windows applications are written for 32-bit mode, so any code you share between the Windows and Mac versions of your application need to be 32-bit. You also can't mix-and-match 32-bit and 64-bit code in the same process on MacOS. So most MacOS applications remained 32-bit for years after there were no longer any 32-bit processor Macs being sold. Even when OS X 10.7 dropped support for 32-bit processors entirely, most applications stayed 32-bit.


Apple told developers at every WWDC from probably 2006 on that they should really convert to 64-bit. They'd talk about faster performance, lower memory overhead for the system software, and various other supposed advantages. And every year, there just didn’t seem to be any great need to do so, so mostly, all Mac software remained 32-bit. A few new APIs were added to MacOS which only worked in 64-bit applications, which just had the unfortunate effect that those features never saw wide adoption.


Eventually, Apple's tactics on this issue evolved from promises to threats and shaming. Developers were told at WWDC that 32-bit applications would not be supported "without compromises" in High Sierra. Then, when High Sierra shipped, we found that Apple had added a warning message that 32-bit applications were "not optimized" for the new OS. That got the end users to start asking developers about when they were going to “optimize” for the new operating system. For the better part of a year, many developers scrambled to get their apps converted before MacOS Mojave shipped, because they made the reasonable assumption that the warning message was implying that Mojave wouldn’t support 32-bit applications. But then Mojave shipped, and 32-bit apps ran the same as they ever have, with the same warning that was displayed in High Sierra. And then, in MacOS Catalina, they finally stopped allowing 32-bit software to run at all.


Converting an existing 32-bit Cocoa application to 64-bit is not particularly difficult, but it is...tedious. You end up having to make lots of small changes all over your code. In one project that I helped convert, there were changes needed in hundreds of source code files. We got there, but nobody thought it was fun, and it seemed so pointless. Why inflict this pain on users and developers, for what seemed like no gain?
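The changes themselves are mostly of this flavor (a hypothetical C-level example, not from any particular project): assumptions that an int, a long, and a pointer are all four bytes stop being true, and every spot that quietly relied on that needs a small, boring fix.

```c
#include <inttypes.h>
#include <stdio.h>

/* Typical of the small, scattered fixes a 32-to-64-bit conversion needs.
   On 32-bit macOS, pointers and 'long' were 4 bytes; on 64-bit they're 8,
   so old shortcuts quietly truncate values. Illustrative sketch. */

/* Before:
     int handle = (int)somePointer;     // truncates a 64-bit pointer
     unsigned int count = length;       // narrows a 64-bit size
   After: use pointer-sized and explicitly-sized types instead. */
void example(void *somePointer, size_t length)
{
    intptr_t handle = (intptr_t)somePointer;  /* pointer-sized integer */
    size_t count = length;                    /* keep sizes as size_t  */
    printf("count %zu, handle %" PRIdPTR "\n", count, handle);
}
```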


You couldn't say that we weren’t warned that this was coming, since Apple had been telling developers for literally a decade to convert to 64-bit. But third-party developers were still pretty confused about the timing. Why "suddenly" deprecate 32-bit apps for Catalina? Just to incrementally reduce the amount of maintenance work they needed to do on MacOS? Or to reduce the amount of system overhead by a handful of megabytes on an 8GB Mac? It didn’t make sense. And why did they strongly imply it was coming in Mojave, then suddenly give us a reprieve to Catalina?


Transition 5: The ARM Mac

“The good news is, you've already done the hard part”

– Apple, WWDC 2020


With all of this in mind,  I think that the sudden hard push for 64-bit in High Sierra and beyond was a stealth effort to get the MacOS third-party developers ready for the coming ARM transition. When High Sierra shipped, almost all MacOS software was 32-bit. Now that Catalina is out, almost all the major applications have already transitioned to 64-bit. Perhaps the reason the “deadline” was moved from Mojave to Catalina was because not enough of the “top ten” applications had been converted, yet?


Prior to finally getting all third-party developers to adopt 64-bit, the transition story for converting to ARM would have been complicated, because the applications were all 32-bit, and the Mac ARM chips would be 64-bit (the iOS platform having had its 64-bit conversion a few years back). Apple would have been telling developers: "First, you need to convert to 64-bit. Then, you can make any changes needed to get your code running on ARM".


Now, it's going to be very simple: "If your application currently builds for Catalina, with the current SDK, you can simply flip a switch in Xcode, and it'll be able to run on the new ARM Macs, as well". That's not going to be literally true for many applications, for various reasons (binary dependencies on some other third-party SDK, Intel-specific intrinsics in performance-critical code, etc). 
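For the intrinsics case, the fix usually looks something like this (a hypothetical add_arrays routine, sketched both ways; nothing here comes from a real project):

```c
#include <stddef.h>
#if defined(__x86_64__)
#include <immintrin.h>   /* SSE intrinsics, Intel Macs */
#elif defined(__aarch64__)
#include <arm_neon.h>    /* NEON intrinsics, Apple Silicon */
#endif

/* Hand-written Intel intrinsics won't compile for arm64 and need a NEON
   (or plain C) equivalent. Illustrative sketch only. */
void add_arrays(float *dst, const float *a, const float *b, size_t n)
{
    size_t i = 0;
#if defined(__x86_64__)
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
    }
#elif defined(__aarch64__)
    for (; i + 4 <= n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(dst + i, vaddq_f32(va, vb));
    }
#endif
    for (; i < n; i++)               /* scalar tail */
        dst[i] = a[i] + b[i];
}
```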


But this time, there is no endianness issue, no apps that will need to change their toolchain, none of the previous issues will be relevant. I also think it's quite likely that there will be very few arbitrary API deprecations in MacOS 10.16, specifically to make this transition as painless as possible for as many developers as possible.


What’s it all mean, then?

"All of these products are available TODAY"

– Steve Jobs's famous tagline, possibly heard again this year?


So then - on to timing. For the Intel transition, there were about 6 months between announcement and availability of the first Intel Mac, and another 8 months before the transition was complete. This time, it's likely that Apple will shrink the time between announcement and availability, because there's comparatively little work that needs to be done to get most applications up and running on the new hardware.


It's possible that they'll announce ARM Macs, shipping that very day. If Adobe and Microsoft are already ready to go on day one, it might even be plausible. I think they'll want to give developers some time to get ready, though. So, I predict 3 months, meaning ARM Macs in September, 2020. And I think they'll move aggressively to put as many Macs on their own processors as they can, because it's all a win for them - lower costs, better battery life, etc, etc.


"But what about the Mac Pro?", you'll hear from some experts. "Nobody's ever produced a Xeon-equivalent performance ARM chip. It'll take years to make such a design, if it's even possible at all".


The obvious comeback here is: Nobody knows what Apple has running in a lab somewhere, except the people who are working on the project. Maybe they already have an 80 Watt ARM powerhouse chip running in a Mac Pro chassis, right now. But even if they don't, I think it's reasonable to look at this from the "Why Now?" perspective, again.


The previous processor transitions were mainly driven by a need to stay competitive in performance with Intel. That is not the case this time, since the desktop/laptop competition is almost exclusively on the same Intel processors that Apple is using. The success, or rather the lack thereof, of other ARM-architecture computers & operating systems (Windows for ARM, ChromeBooks) doesn't make a compelling case for a switch to “keep up” with Microsoft or Google, either. So there's no hurry.


Given that there's no external pressure to switch, Apple must think that they have a compelling story for why they're switching. And that has to include the entire product line, since the Mac isn't their most-important product, and they surely aren't going to support two different architectures on it, just to keep existing Mac Pro users happy. They either have a prototype of this Mac Pro class processor ready to go, or they're very sure that they can produce it, and they have a believable roadmap to deliver that product. Otherwise, they’d just wait until they did.

Which Macs are switching first?

“I bet you didn’t see that coming, skeptics!”

– Me, maybe?


Everybody is expecting to see a new MacBook, possibly bringing back the 12-inch, fanless form factor, and taking maximum advantage of the power-saving and cooler operation of Apple’s own chips. Some folks are expecting a couple of different models of laptops.


What I would really love to see (but don’t much expect) is for Tim Cook to walk out on stage, announce the first ARM-based Mac, and have it not be a super-small, low-power consumer laptop product. I want it to be something high-end that decisively outperforms the current Mac Pro, and establishes that this is going to be a transition of the whole line. I think that'd be a bold statement, if they could swing it.


Saturday, April 21, 2018

Movie Mini-Review: Rampage

Rampage was...not good. (Warning - "spoilers" below, though honestly, there's not much to spoil)

I see you folks in the back, snickering and saying "well, what did you expect?". Here's the thing though - Rampage was probably the most-disappointing movie I've seen in the last year. It’s an onion of layered badness. Even The Rock couldn’t save this one. What a joyless, boring, poorly-made movie.

Whenever any movie is adapted from some other medium, whether it's a book, a play, or a video game, some amount of changes are inevitable. You often have to trim the plot or characters of a novel in order to "fit" it into a movie, for example. And the writers and director will want to make their own changes, to adapt the story to the medium, or just to put their own twist on a well-known property.

But I've never seen a film adapt source material whose full plot could be written on a 3x5 note card, and use NONE of it. In theory, Rampage is an adaptation of the classic video game of the same name, which was released in 1986. The player takes control of a giant monster - an ape, a werewolf, or a lizard - and climbs onto buildings, smashes them, and eats people. In those days, video games really didn't have "plots", as such. We didn't have the memory for that :-) The entirety of the plot exposition takes place as text that scrolls on the screen at the start of the game, and you're shown the origin story of whichever of the monsters you've chosen to control. In each case, it's a human being who's mutated into a giant monster by exposure to some dangerous chemical.

The monsters then go on a 128-city tour of North American cities, starting and ending in Illinois, leaving destruction in their wake. In every city, the layout of the buildings is different, and there are additional features in some cities, such as the trains in Chicago, and the bridge over the river in Detroit. As you're destroying the buildings, soldiers will shoot you from the windows, and some will inexplicably throw dynamite. Smashing the windows of the buildings will occasionally reveal surprises, as well - from a roasted chicken which you can eat for extra health, to electrical devices that can shock you and make you lose your grip. The whole thing is gloriously silly, in the way of Saturday afternoon Creature Features, with guys in rubber suits beating each other up in model cities made of balsa wood and paper.

Essentially none of that is in the movie. There is a giant ape, who's named George. But he's not a human who turned into an ape - he's a gorilla affected by a DNA-editing toxin, which causes him to grow very rapidly. His handler, played by The Rock, has to search for an antidote to cure him. This is obviously part of the process of making room for the human characters to actually be the stars, and it hugely alters the tone of the thing, as well. Instead of a fun movie about giant monsters smashing stuff, we get a much more typical blockbuster hero movie, where the muscular hero dude and his female sidekick have to race against time to save the world (or at least Chicago) from destruction.

Rampage is a game about monsters punching buildings and eating people...but an hour into the movie, there were no buildings wrecked (well, one partially wrecked), and almost nobody got eaten. The film did try to inject some humor into things, but a lot of the funny bits fell pretty flat, because they didn't really fit the "grim and gritty reboot" that the rest of the movie was trying to be.

And then there's the gore, which I found really off-putting. PG-13 is apparently The Uncanny Valley for gore. In little kids' movies, there's no gore. In R-rated movies, you can have either realistic gore or ridiculous over-the-top gore, take your pick. In PG-13, you can get enough blood to be disturbing, but not enough to be funny.

On my way in, I saw a couple and their young (maybe 8 or 9 year-old) daughter settling in to watch the movie. I know that all kids are different, and maybe her parents weren't really thinking about the "13" in the PG-13 rating, but this is a movie that starts out with a fairly intense chase scene in a space station filled with blood, severed heads, and detached limbs. Probably not what they thought they were getting, based on the trailer, and the fact that The Rock was the headline star. Unsurprisingly, the little girl was pretty upset after being subjected to that. I didn't see them when I left the theater, but I'm guessing they didn't see the whole thing.

Interestingly, some folks have praised Rampage as "The most faithful video game adaptation" or gone into great detail on the various nods to the source material. I guess it all depends on what you're looking for. For me, adapting something to film, and losing the "soul" of the thing along the way is just sad. Someone could have made a really fun Rampage movie, but this definitely wasn't it.