Tuesday, May 06, 2008

The intermediate approach...

I've written before about how the shared-memory, multi-threaded model for concurrent processing is an evolutionary dead end, and how we'll "inevitably" need to move to some other model. I'm a big fan of message-passing systems, but there are other models that also take advantage of multiple processors without the fiendish complexity of mutex-protected, shared-memory threads.

Well, you might have heard that Microsoft (with some assistance from the academic community and hardware vendors) is working on their own solution to the "multicore dilemma". Unsurprisingly, they're not recommending that everybody throw away their imperative C++ and C# code, and adopt an entirely new programming paradigm, like functional programming. Instead, they're implementing a series of tools that make it much easier to get a lot of the benefit of multiple processors, without having to learn a new way of thinking.

Late last year, the first fruits of this work became available. The "Parallel Extensions to .NET" (or Parallel FX) are available from Microsoft as a free download, as a Community Technology Preview (basically, an open beta). You can read much more about it on Microsoft's website, and there's a great MSDN Magazine article that hits the high points as well.

One key point in their design is the realization that anonymous functions (they call them anonymous delegates, but that's just jargon) provide a simple way to package up a "unit of work" to pass off to another processor. This is nothing new to the functional programmers out there, I know. But for folks who are firmly rooted in the C/C++ tradition, learning that they can replace:

for (int i = 0; i < 100; i++) {
    a[i] = a[i] * a[i];
}

with

Parallel.For(0, 100, delegate(int i) {
    a[i] = a[i] * a[i];
});

and get a massive speedup on multi-core machines is likely to be something of a revelation. Parallel FX also does some fun stuff under the hood with all those delegate invocations to enable better work sharing; see the MSDN article for details. I thought this was an interesting hybrid, rather than an "all or nothing" approach to addressing concurrency.
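To get a feel for why handing the loop body off as a delegate helps with work sharing, here's roughly the kind of thing a parallel-for could do: carve the index range into one chunk per core and give each chunk to a worker thread. To be clear, this is just my own simplified sketch, not what Parallel FX actually does; the real scheduler is much smarter about partitioning and load balancing.

using System;
using System.Threading;

static class NaiveParallel
{
    // A hand-rolled, chunked "parallel for": one contiguous range per worker thread.
    // Illustration only; the real Parallel FX scheduler is far more sophisticated.
    public static void For(int from, int to, Action<int> body)
    {
        int workers = Environment.ProcessorCount;
        int chunk = (to - from + workers - 1) / workers;   // round up
        Thread[] threads = new Thread[workers];

        for (int w = 0; w < workers; w++)
        {
            int start = from + w * chunk;            // per-thread copies, so each
            int end = Math.Min(start + chunk, to);   // closure gets its own bounds
            threads[w] = new Thread(delegate()
            {
                for (int i = start; i < end; i++)
                    body(i);
            });
            threads[w].Start();
        }

        foreach (Thread t in threads)
            t.Join();
    }
}

Calling NaiveParallel.For(0, 100, delegate(int i) { a[i] = a[i] * a[i]; }); behaves like the serial loop, just spread across cores; what a toy like this doesn't give you is the load balancing, thread reuse, and exception handling that the real library layers on top.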

For those who are already familiar with OpenMP, this is a broadly similar model. The extra abstraction of an anonymous function does simplify the simple cases, though: the delegate's arguments are effectively "private", and anything it captures from the enclosing scope is "shared".
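To make that private/shared split concrete, here's a small example of my own (the array a and the counter total are just made-up names; also, if I remember right, the CTP puts Parallel in the System.Threading namespace, with later releases moving it to System.Threading.Tasks). The delegate's parameter i is fresh for each iteration, while anything captured from the enclosing method, like total, is visible to every iteration at once and still needs synchronization:

using System;
using System.Threading;   // Parallel lives here in the CTP (System.Threading.Tasks later on)

class SharedVsPrivate
{
    static void Main()
    {
        int[] a = new int[100];
        for (int n = 0; n < a.Length; n++)
            a[n] = n;

        int total = 0;   // captured from the enclosing method: effectively "shared"

        Parallel.For(0, a.Length, delegate(int i)
        {
            a[i] = a[i] * a[i];                 // i is the delegate's parameter: "private" to each iteration
            Interlocked.Add(ref total, a[i]);   // updates to shared captures still need synchronization
        });

        Console.WriteLine(total);   // sum of squares 0..99 = 328350
    }
}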

One of these weekends, I'll have to take a deeper look into Parallel FX. I did find it fascinating that a single language feature (anonymous delegates) can be leveraged to do so much, though.
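As a quick aside (names like words below are just my own examples), the same trick shows up all over the library: Parallel.ForEach takes a delegate for each element of a collection, and C# 3.0's lambda syntax is just a terser way of writing the same anonymous delegate:

int[] a = new int[100];
string[] words = { "parallel", "extensions", "delegate" };

// Lambda syntax: shorthand for the anonymous delegate form above
Parallel.For(0, a.Length, i => a[i] = i * i);

// The same idea applied to a collection instead of an index range
Parallel.ForEach(words, delegate(string w)
{
    Console.WriteLine("{0}: {1}", w, w.Length);
});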

1 comment:

Anonymous said...

http://www.theregister.co.uk/2008/07/28/sun_dziuba_tm/

discusses (in some, um, colorful language) some thoughts about Sun's Rock processor and Transactional Memory. At first glance, it seems like it would play very well with a massively parallel world, but I suspect that under the hood it's designed for a single massively capable CPU, not a massively capable system of CPUs.