Wednesday, May 28, 2008

Why you should visit the Computer History Museum sooner, rather than later

If you live in Silicon Valley, you've probably at least heard of the Computer History Museum. If you're like me, you probably even planned on going "one of these days". There's a very good reason to schedule that visit sometime in the next 10 months or so, though.

A full-scale working model of Charles Babbage's Difference Engine Number Two is currently on display at the CHM until May of 2009. There is only one other like it in the world, at the Science Museum in London, UK.

It's one of the most beautiful pieces of machinery I've ever seen, as well as marking a significant milestone in the evolution of computers. We went down for the exhibit's opening, and actually got to see the machine in operation. I gather that they're not running it all the time on regular days - that's a pity, but seeing as it cost millions of dollars to make, and they're merely borrowing it, I guess I understand that.

It's almost hypnotic to watch, with the carry mechanism rippling up the columns of digits, and hundreds of pounds of brass and steel moving up and down for each step of the calculation.

In addition (ha, ha!) to the Difference Engine, the museum has a large number of other historic computer systems on display, including a Connection Machine, several Cray supercomputers, and pieces of the SAGE system. Walking through their "Visible Storage" area, a loosely-organized subset of the museum's collection, was fascinating. There's everything from abacuses to supercomputers in there, along with various bits of esoterica that the younger generation won't have seen or even heard of - drum memory, magnetic core memory, and so on.

All in all, it's well worth a visit. The museum is definitely a work in progress, but admission is free, and there's nothing quite like it anywhere else. They also have a regular lunch lecture series, starting up again soon, with discussions of various people and machines in computing history. Since I work right down the street, I'm planning to attend events there more often.

Tuesday, May 06, 2008

The intermediate approach...

I've written before about how the shared-memory, multi-threaded model for concurrent processing is an evolutionary dead-end, and how we'll "inevitably" need to move to some other model. I'm a big fan of message-passing systems, but there are other models that also take advantage of multiple processors without the fiendish complexity of mutex-protected, shared-memory threads.

Well, you might have heard that Microsoft (with some assistance from the academic community and hardware vendors) is working on their own solution to the "multicore dilemma". Unsurprisingly, they're not recommending that everybody throw away their imperative C++ and C# code, and adopt an entirely new programming paradigm, like functional programming. Instead, they're implementing a series of tools that make it much easier to get a lot of the benefit of multiple processors, without having to learn a new way of thinking.

Late last year, the first fruits of this work became available. The "Parallel Extensions to .NET" (or Parallel FX) are available from Microsoft as a free download, as a Community Technology Preview (basically, an open Beta). You can read much more about it at Microsoft's website here. There's a great MSDN Magazine article that hits the high points, as well.

One key point in their design is the realization that anonymous functions (they call them anonymous delegates, but that's just jargon) provide a simple way to package up a "unit of work" to pass off to another processor. This is nothing new to the functional programmers out there, I know. But for folks who are firmly rooted in the C/C++ tradition, learning that they can replace:

for (int i = 0; i < 100; i++) {
    a[i] = a[i]*a[i];
}

with:

Parallel.For(0, 100, delegate(int i) {
    a[i] = a[i]*a[i];
});
and get a massive speedup on multi-core machines, is likely to be something of a revelation. Parallel FX does some fun stuff under the hood with all those delegate invocations, as well, enabling better work sharing - see the MSDN article for details. I thought this was an interesting hybrid approach, rather than the "all or nothing" approach to addressing concurrency.

For those who are already familiar with OpenMP, this is a somewhat similar model. The additional abstraction provided by the anonymous function does simplify the common cases, though - the parameters of the delegate are effectively "private" to each iteration, while any variables captured from the enclosing scope are "shared".

One of these weekends, I'll have to take a deeper look into Parallel FX. I did find it fascinating that a single language feature (anonymous delegates) can be leveraged to do so much, though.