Yet another infrequently-updated blog, this one about the daily excitement of working in the software industry.
Wednesday, April 09, 2008
Tolkien is probably rolling in his grave
The Friends List Of The Ring
edit 2008/05/06: The original link is gone. Here's a YouTube link.
Wednesday, March 19, 2008
Most surprising headline of the year
Unfortunately this turns out to be bad reporting...
Tuesday, March 18, 2008
Is it my imagination?
Am I mis-remembering, or did Internet Explorer formerly produce more helpful error messages than this one when failing to connect to a website? Is this another IE7 "feature"? Why would you display the exact same error message for no network connectivity, a DNS failure, and a site that doesn't respond?
Given that it's trivially easy to distinguish each of these three cases from the others, why not at least tell me which one is actually the problem?
Grr.
Wednesday, March 12, 2008
L. U. A. That spells Moon!
As I mentioned previously, I've been working with Lua recently. I'm really enjoying it. I'd place Lua somewhere slightly to the right of Tcl on my Programming Language Complexity Chart, which probably explains why I found it so immediately appealing.
There are many kinds of simplicity. I think that one of Lua's strong points is that it tries to get the most possible mileage out of each language feature. For example: Lua is dynamically typed, meaning that values have a type, but any variable can hold a value of any type. Lua defines the following value types:
- number
- string
- boolean
- nil
- function
- table
- thread
- userdata
For purposes of this discussion, we're going to ignore the userdata type, since that's mostly used when dealing with opaque data from the "host" program (Lua being designed to be used as an embeddable interpreter).
Now, this is obviously more types than some other scripting languages, but it's still a fairly short list. Most of the types are fairly self-explanatory - number represents a numeric value, strings are used for text, booleans are used for true and false, nil is the value of an uninitialized variable, and function represents a function.
Tables are an interesting case. Tables are Lua's implementation of the Associative Array, everybody's (or at least my) favorite universal data structure. Once you've got a data structure that can index arbitrary values with arbitrary keys, you can use it for all sorts of things - and Lua does. As the book Programming in Lua puts it:
Tables in Lua are not a data structure; they are the data structure.
Tables are used to implement records, arrays, Object-Oriented Programming features, namespaces (the package system), and a variety of other features. Even global variables are implemented as entries in a table.
One extremely powerful feature of Lua is metatables, which are tables that affect the behavior of other tables. Every table can have a metatable attached to it, which controls the behavior of the table when certain operations are performed on it. This is conceptually similar to operator overloading in C++, but again just implemented as a series of entries in a table.
People have implemented both class-based and prototype-based OOP systems for Lua using just a few relatively simple functions that manipulate tables and metatables. That's a pretty good example of the power that's available.
Monday, March 10, 2008
Leaning towards the left...
This diagram is an adaptation of something I wrote on a co-worker's cube wall. I based this on my own experiences, so it probably reflects my prejudices, especially with regard to the placement of the "Human Limit" marker. It'd be interesting to come up with some kind of objective measure for the complexity of a programming language. Maybe multiply the number of reserved words by the number of operators, or something.
I think that excessive simplicity is one issue that limits adoption of the languages to the left of the diagram. Though I've never heard someone come out and say it, I think a lot of programmers are actually put off by syntax that's too simple. People complain about "all those parentheses" in Lisp, but what really bugs them is that the parentheses are really all there is to Lisp, the language.
I tend to prefer languages with simple syntax. I find that a simple set of rules to remember helps me to focus on what I'm trying to accomplish, rather than how to harness the myriad of tools available to me to actually do the work. I also find that simple languages are paradoxically more flexible. In a language like Lisp or Tcl, it's easy (and more or less encouraged) to extend the language with your own constructs, which are "first class citizens" as far as the language is concerned.
Having said all that, even I have my limits. While Smalltalk is a wonderful system for puttering around with, I don't love it. I think the lack of a more-sophisticated syntax for algebraic expression really hampers the usability of the language.
I'll add some additional commentary tomorrow, especially with regards to where Lua (the most recent language I've learnt) fits on the diagram, and what that implies about Lua...
Wednesday, February 27, 2008
A fun little puzzle
A 5*5 array is initialized to all 1s. Each cell can be either 0 or 1. When you flip a cell (m,n) (say from 1 to 0), all four of its neighbors (left, right, up, and down) flip as well. You need to change the array to all zeros by flipping cells. What is the minimum number of flips required?
If I was really clever, I'd insert a Javascript widget here that let you play the game (assuming that Blogger even allows that), but I think the description is probably clear enough.
What was fun about the puzzle to me is that there are a huge number of states that the puzzle can be in, but it's obvious that there just has to be a better way to find a solution than just thrashing around blindly.
I was out sick from work yesterday, and in between naps, I tried working on the puzzle. Unfortunately, I just wasn't able to get much traction on the problem by trying to figure it out logically. Once I got frustrated enough, I resorted to a brute-force search of the solution space, which found 4 solutions, all of which were reflections and rotations of the same moves.
Leaving aside whether it's really very fair to ask for a "minimal" solution to a puzzle that only has one solution, it wasn't until I was copying the solution out of the terminal window that I saw the symmetry trick that makes it work.
Normally, when I'm trying to solve something like this, which just "has" to have a simple solution, I start by solving a much simpler version, then try to scale up to the original problem as stated. For some reason, that process simply didn't occur to me yesterday. Had I tried starting with a 2x2 matrix and working my way up, I probably would have seen a pattern in the solutions pretty easily, and solved the puzzle by hand in a few minutes.
What did occur to me was that the total number of states, while large, was still well within the range where an exhaustive search would be reasonable. Once I had the program written to search for solutions, I ran it, and it finished suspiciously quickly. It turned out I had a bug, along the lines of using 2^25 instead of 2^25-1 in a critical place. I fixed that, re-ran it, and it still took less than 10 seconds. What the heck?
I tend to forget that the computers we have these days are so darn fast. Despite the fact that I'd done nothing to optimize the algorithm I was using, the PC was cranking along at 1 or 2 billion instructions a second, evaluating the 33,554,431 possible solutions at a rate of a few million iterations a second.
It's easy to think "this computer is so slow", when you're waiting multiple seconds for it to wake from sleep, or for some crazy Java applet to load, but when I see the results of the machine just cranking on some simple calculation, I'm just amazed at the power there.
If you want to see my thought process while I was working on this puzzle, you can read the techInterview discussion here, though my solution is there, so don't read it if you don't want to be spoiled. I'll probably post my code here later, so you can see what kind of C code I write when nobody's watching, and while I'm sick to my stomach :-)
Friday, February 22, 2008
Monkey madness!
My latest foray into tool development started innocently enough. At Zing, we develop software and hardware for portable entertainment devices. Examples would be MP3 players, satellite radios, and portable internet radio streaming devices.
Anyway, one of the guys on the development team suggested that we ought to implement a bit of software to simulate the actions of a user - pressing buttons, spinning the scroll wheel, changing the volume, that sort of thing. Apparently Palm used a similar system (called a Gremlin) to flush out some of the more obscure timing-related bugs in their software.
Our version was called the UI Monkey (after the Infinite Monkey Theorem). The initial implementation was fairly straightforward, generating random user-input events and feeding them into the event queue on the device.
Then someone wanted a way to record and play back events, so we could do some testing automation. That was straightforward enough, though anyone who's implemented UI automation at such a low level can tell you that the resulting scripts are very fragile, in that any tiny change to the UI will break the script.
Once I had the basic framework in place, more requests came in, everything from "can we have the monkey scripts check for success?", to "how about implementing a way to repeat a bunch of actions over and over?". Since the whole Monkey code structure was based on sending and receiving UI events, other features had to be hacked in as "special" events, which each had their own syntax. After I had finished "goto", and had started working on some other features, I decided it was time to take a step back from the ledge and re-evaluate my options.
Despite the great amount of interest expressed in UI automation, nobody else was using the system I'd created. Given the sheer impenetrability of the syntax, I can't really blame them, but it was a little disappointing. I decided it was time to come up with something a little less insane, and a bit more approachable.
The one thing I was pretty sure of was that I didn't want to implement my own scripting language from scratch. While designing your own language is an interesting project, and gives you a lot of power, it always takes a lot longer than you'd expect, and you're never really done. Implementing a "Monkey library" on top of a more-standard language seemed like a better bet.
I considered using C# as our UI scripting language. Unfortunately, C# requires a fair amount of boilerplate code just to make a basic loadable assembly, and a lot of the folks who want to do scripting aren't familiar with the development tools. So, some kind of scripting language seemed to be in order. As it turns out, we already have a scripting language interpreter embedded in our product (which is another story, and a great example of the Law Of Unintended Consequences at work).
So, I threw out all of the crazy hacky code I had been writing, and re-built the Monkey on top of Lua, a lightweight scripting language. Instead of an event recording and playback system, we now have a set of event-related primitives, and a real programming language to call them from, with functions, for loops, if-then-else, and variables. The integration of the Lua and C++/C# code was pretty easy, as well.
Wednesday, January 09, 2008
XO Laptop, part 2
Friday, January 04, 2008
XO laptop mini-review
My first hours with the XO laptop
Back when the One Laptop Per Child Foundation was running their "give one, get one" promotion, I signed up to donate a laptop, and get one for myself. I figured I could do some good, and get a chance to see what it's all about.
Initial impressions:
Hardware:
It's small - really small. Seems more deserving of the "notebook" designation, rather than calling it a "laptop". It'd probably fit well on a child's lap, though.
The exterior case feels very solidly constructed, and has a built-in handle. It reminds me of my original iBook. It does look a little like a toy, but that impression goes away pretty rapidly once you start using it.
The keyboard is a rubber dome "chiclet" keyboard of the sort you might have found on inexpensive home computers in the 1980's in the USA. It's not too hard to type on, with the exception of the "space bar", which seems to be made up of 10 or so individual switches, and my thumb keeps hitting it between the individual switches.
The screen is very clear and readable. I haven't used it in black & white mode very much yet, but it's readable in a (fairly bright) room with the backlight off. Not bad at all.
Software:
The user interface is a little weird, if you're used to a standard PC operating system interface. I think someone who's coming to it with no preconceptions would find it fairly easy to get started.
It's running Ubuntu Linux, but you'd never know it from looking at the UI. Everything uses just one mouse button - none of the right-click, middle-click crap from KDE.
The "window manager" doesn't so much manage windows as screens - most of the applications run in full-screen mode, to make better use of the low resolution screen.
The web browser is plain but functional. I'm using it to type this entry, as a matter of fact. So far the only problem is that the cursor disappears in some text boxes. That makes it a little harder to edit text than it should be. Might be a Blogger Javascript problem; I'll see if it comes up elsewhere.
I'll report back with an update after I've tried out the other included applications.
Thursday, December 06, 2007
Programmer's purity test
It occurred to me that that'd make for an interesting variation on the classic "purity test". A number of "hacker" and "geek" purity tests are out there, but I haven't seen one specifically for programming. There are way too many extremely obscure languages on that Wikipedia list, though.
If we trimmed out the truly obscure languages, we'd get something like this:
Have you ever:
- Programmed a computer?
- In ADA?
- In ALGOL?
- In APL?
- In APPLESCRIPT?
- In Assembly?
- In AWK?
- In B, or BCPL?
- In BASIC?
- In brainf*ck?
- In Bourne Shell?
Nah - too boring, and we haven't even gotten out of the B's yet. Maybe we could organize it by generation:
Have you ever:
- Programmed a computer?
- With jumper wires?
- In machine code?
- ...without a coding sheet or other aid?
- ...with toggle switches?
- ...from a Hex keypad?
- In assembly language?
- On punched cards?
- In a language whose syntax assumes that you're still using punched cards (e.g. Fortran, RPG)?
- In COBOL?
- In C or Pascal?
- In Forth?
- In Lisp (Scheme, Logo)?
- In Smalltalk?
- In a 4GL?
- In C++?
- In Java or C#?
- With a scripting language?
- In a modern functional language (Haskell, etc)?
- In an object-oriented language without class-based inheritance?
That's a pretty good start, maybe we could add a few questions on how you used these various tools.
Have you ever...
- Written a program that directly controlled objects in the physical world?
- ...did you ever injure anyone with a bug?
- ...other than yourself?
- Written software for internal business use?
- Written software that was sold at retail?
- Written software that sends email?
- ...did it ever send thousands of messages due to a bug?
- ...outside the organization you were working at?
- Programmed in a language of your own design?
- ...did anyone else ever use your language?
- ...did it become a de-facto standard?
- ...or an ISO or ECMA standard?
- Written a compiler?
- ...not as an assignment for a class?
- ..."by hand" (without using lex/yacc or related tools)?
- Created self-modifying code?
- Written code that modifies some other program's binary?
- Written self-reproducing code?
- ...without it getting away from you?
- Changed the class of an object at runtime?
- ...in a language without dynamic dispatch?
- Created a program that took longer to run (once) than it did to write?
- ...while running on a cluster of computers?
- ...or a conventional supercomputer?
I seem to have run out of ideas. Suggestions for additional questions would be greatly appreciated. A traditional Purity Test would have 100 questions, so you could easily generate a percentage score.
For what it's worth, I scored 32/44, or about 27% pure. I think that probably indicates that the test is a little too focused on my own experiences. Send me your questions, and I'll work up a better list...
Saturday, November 24, 2007
Fear of teaching
Since I knew other people would want to read about it later, especially given that I gave the presentation on the Wednesday before Thanksgiving, I put my notes up on our Wiki. I have a bit of a love-hate relationship with the Wiki. While I love the idea of a single place to look for information, the usability of Wiki markup languages really stinks. It's a bit like Blogger's "plain text" format, in that if you don't care about what things come out looking like, it's alright, but I always spend more effort trying to work around the limitations of the format than I do actually writing the content.
I've learned over the years that there are about three major methods of preparing for giving a presentation. Some people actually go to the effort of writing out everything that they want to say, similar to a speech as given by a politician. Some people write nothing down, and ad-lib the whole thing, and then there's the approach I've always used. I usually write an outline that contains all the topics I want to cover, and then I ad-lib the presentation around the outline, making whatever mid-course corrections might seem necessary based on the audience's reactions.
Because I wanted people to be able to get something out of my notes without having to be at the presentation, I filled in the outline with some additional explanatory text. That's very similar to the process I usually use when I write other things (for example, blog posts). I start with an outline, then I replace items in the outline with paragraphs and sentences. Even when I do something more free-form like this post, I have the outline in my head, at least. Revising an outline on the iPhone would have been pretty painful...
When I finished the presentation, I got some very positive feedback from the audience, including at least one actual pat on the back, something I had only thought of as a metaphor before. Afterwards, I thought a little about what was good about that presentation, especially compared to other presentations I've seen lately, and I think it's all about not over-planning or under-planning it. The worst presentations I've been subjected to are of the "guy reading directly from his PowerPoint slides" style. Second worst are the "guy who is totally not prepared or sure what he wants to say" variety. Once again, the happy medium wins out.
Sunday, October 28, 2007
The C++ FQA (frequently questioned answers)
Via a discussion on Joel On Software, I got directed to this:
The C++ FQA
It's a response, of sorts, to the C++ FAQ. You can read more about it on the site, but he basically goes through the questions in the C++ FAQ, and explores what it is about C++ that makes those questions "frequently asked". There is some sarcasm, and some rather insightful commentary on why C++ is so very hard to develop real expertise in.
It would be neat to see something like this done for Java and C#. I think the idea of looking at a language from the standpoint of "why are these areas confusing to so many users?" is an interesting approach.
I have always felt that my resistance to really learning C++ was a failure on my part, but after reading "Effective C++", and with the backing of the C++ FQA, I feel a little better about taking the position that C++ is really far too complex for the good of the people who need to work with it.
Wednesday, October 24, 2007
You're probably using "unsigned" incorrectly
You're probably using "unsigned" incorrectly, and that makes me sad.
Chances are that if you write code in C (or related languages like Java, C#, or C++), then you've come across the "unsigned" type, and its relatives "unsigned long" and "unsigned short". If you've written code that uses unsigned types, it's also quite likely that you've used them incorrectly, at least by my standards.
Misuse of "unsigned" in C is one of those things that I keep seeing over and over, with different developers, even folks who really ought to know better. I find it immensely frustrating. If I had to pick one aspect of C that was responsible for more stupid bugs than anything else, this'd be one of the top candidates. Probably not the top candidate - the string-handling functions in the standard library probably win that handily.
Here are my simple rules for the use of unsigned integer types:
- Don't use unsigned just because "that value should never be less than zero"
- Always compile your code with all warnings enabled
- Avoid mixing the use of signed and unsigned integers in the same calculation
- Do use unsigned when modelling hardware registers that hold unsigned values
- Do use unsigned when performing bit-wise arithmetic
Don't use unsigned just because "that value should never be less than zero"
This is by far the most common abuse of unsigned types that I see on a regular basis. It's not even a bad idea, as far as it goes. A majority of the values in a typical program are going to be non-negative by design - sizes, screen coordinates, loop counters, etc, etc. The problem really isn't unsigned values per se, it's how unsigned and signed values interact.
Part of the problem is that constant values in C are signed by default, which means that signed values will creep into your program unless you make a concerted attempt to avoid them. When you compare signed and unsigned values, the results will often not be what you expect. For example:
unsigned four = 4;
int neg_one = -1;
if (neg_one < four)
{
    printf("true\n");
}
else
{
    printf("false\n");
}
Looking at this code, it's pretty obvious what the programmer intended, but in fact the comparison "neg_one < four" evaluates to false in this case. This is because the signed value will be "promoted" to unsigned, turning it from a small negative number to a very large positive number, before the comparison is made.
In actual cases of this problem in the wild, the declarations will typically be a long way away from the comparison, and it won't be at all obvious what the cause of the problem actually is. I've seen experienced programmers stare at the debugger in disbelief when it seems to be showing them that their program thinks that -1 is greater than 4. An additional complication is that constants in C are signed by default, so you can replace the "neg_one" variable in the example with the constant "-1", and you'll get the same behavior.
A related problem comes with the handling of sizes and lengths. A size is typically going to be a non-negative value, so it "makes sense" to use unsigned variables. The problem is that sizes are often calculated by subtracting one value from another. If you accidentally subtract a larger value from a smaller one with signed variables, you get a negative size, which you can at least detect and handle (with an assert(), if nothing else). If you're using unsigned math, you just get a huge bogus "size", which may or may not be immediately obvious.
Always compile your code with all warnings enabled
Admittedly, this rule is more general, rather than specifically tied to problems with using "unsigned" correctly. Most C and C++ compilers have an option to warn on comparisons between signed and unsigned values, when there's a chance the comparison will be interpreted incorrectly. It's even more frustrating to debug one of these issues when compiling with warnings enabled would have produced a warning message that points to exactly where the problem is, but some yutz has that particular warning disabled.
Of course, they have it disabled because enabling warnings on comparisons between signed and unsigned tends to generate zillions of bogus warnings. That's just one more reason to avoid using unsigned variables where possible - the flood of bogus warnings obscures the actual problem areas.
Avoid mixing the use of signed and unsigned integers in the same calculation
Given the example above of a simple comparison going wrong, it ought to be obvious that anything more complex is at least as likely to go subtly wrong in some way. Again, the real problem arises because the declarations of the variables (and constants) will be far, far away from the point of the errant calculation.
So, when is it okay to use unsigned types?
Do use unsigned when modelling hardware registers that hold unsigned values
This is most likely how unsigned types got into C in the first place. If you're writing low-level OS or driver code that talks to the hardware, you'll often find that the unsigned int type exactly matches what the hardware is using. This also segues nicely into the next rule...
Do use unsigned when performing bit-wise arithmetic
If you're doing something with fancy binary arithmetic, like an encryption algorithm, or something else where you're using an integer as a collection of bits, unsigned types are probably what you want. I'd put something in here about using "unsigned" with bitfields, but the bitfield construct in C is pretty useless (and a topic for another rant), so I'll just mention that it's worth thinking about whether you want an unsigned or signed bitfield, if you ever use them.
Unfortunately, you actually can't avoid "unsigned" values
As it turns out, there are types that the standard library uses that are usually unsigned, for example size_t. So, your interactions with the standard library will occasionally force unsigned values to creep into your program. Still, that's no reason for you to make it any harder on yourself.
Monday, May 28, 2007
New Blender!

Last week, we got a new blender - a Blendtec Total Blender. Since Yvette started her weight loss program, we've gone through 4 blenders or so. Making multiple shakes every morning wears out your typical bargain basement blender in 6 months or less. We're hoping that the new blender lasts us a whole lot longer. It had better last a long time - it was nearly 10 times as expensive as the last blender we bought.
The Total Blender is based on Blendtec's commercial blenders (my local Starbucks uses Blendtec blenders). It's got a 1500 watt motor, as compared to the 300-600 watt motors in a typical home blender. It's got a very solid square blender jar, and a scary-looking set of ultra-sharp blades. Instead of a set of speed buttons that are labeled with wacky labels like "frappe" and "fold", it's got buttons naming what food you're making - "milkshake", "soup", "smoothie". Each button initiates an automatic program which speeds up or slows down as necessary to perfectly mix whatever you're making. It even stops automatically at the end of the program. Nifty!
So how well does it work? It's incredibly powerful - it crushes ice without even slowing down. It's also very loud at full speed. The instructions that came with the blender were minimal, but it did come with a pretty big cookbook. Apparently, the motor is strong enough to actually grind flour, make peanut butter from whole nuts, etc. On the "soup" setting, the friction of the blending actually makes the soup hot! For simple stuff, like the aforementioned diet shakes, it does the job about twice as fast as our old blender. The jar is super easy to clean, and doesn't seem like it'll get food stuck under the blades like our old blender tended to.
There is one downside that I've identified so far. Have you ever had a blender accident, where an overloaded blender popped the top off and splashed stuff everywhere? Try to imagine what it looks like when that happens with a blender that's 4 times as powerful. This is particularly problematic with hot liquids - the fast start of the blender throws the hot liquid up in the jar, which causes the air in the jar to expand and jet out the top. I think of it as Mount Blendtec erupting. Cleaning shake mix off the ceiling isn't very much fun.
I think I've got the technique dialed in now. I do wish they'd implement a slightly softer start for the low-speed settings, though. All in all, it's definitely a massive improvement over our old, tired blender. I'm going to try some of the recipes from the cookbook. That should help familiarize me with the various cycles. Besides, it's just plain fun to use - it'll blend darn near anything. Speaking of which, if you haven't seen it, check out Blendtec's Will It Blend? for amusing demos of the blender in action.
Wednesday, May 02, 2007
They're running a contest over at Worse Than Failure
For those not familiar with the site, their theme is Curious Perversions in Information Technology. People submit particularly awful pieces of code, or database schemas, or business practices, and the readers of the site make various insightful, witty, outraged, or just plain misguided, comments on them.
The contest: Implement a four-function calculator program (for Windows, or UNIX/GTK). Your program must pass the specified test cases, have a GUI driveable using a mouse, and should cause people who read the code to shake their heads sadly in disgust.
There's much more detail on the contest website, but I thought the idea of the contest was interesting enough to mention. The contest discussion forum is hilarious, as well. Check out this comment:
Mine's too slow right now. It's taking about 45 minutes to add 9876 and 1234, when I was hoping for about three seconds. I knew it was O(n^2), but I was expecting the constant to be a bit smaller. I may need to replace the Mersenne twister with a faster random-number generator.
I have what I think is a pretty good set of ideas for a submission, I'm just not sure that I'll be able to finish in the time remaining (12 days left). I'll probably submit whatever I have by then, even though I almost certainly won't win. I'm looking forward to seeing the other entries, though.
Tuesday, April 10, 2007
Slashdot really irritates me sometimes...
This would allow manufacturers to fix design defects in already-manufactured chips, rather than fixing the defects in subsequent revisions of the chips, and leaving earlier customers with the buggy chips they bought (which is what they do now).
Unfortunately, the article that's linked in the Slashdot submission is a little light on details, and the summary is just plain misleading, so the Slashdot comments are completely swamped in responses like: "this is an old idea!", "what's so novel about an FPGA?", "FPGA's are expensive, slow, and inefficient!", and other nonsense.
I actually read the article and tried to understand what it was about. When I posted that it sounded like a good idea, and it'd be interesting to read more about the design, an unregistered Slashdot user provided me with a link to the original paper, which made it much more clear what the whole thing was about.
But nobody else can see that link, because Slashdot's moderation system assumes that unregistered users are less trustworthy, so the comment with the link to the original paper is invisible to the idiots that keep ranting back and forth to each other about what a dumb idea this is, without having any idea what they're talking about.
So, go here, and read the original paper, especially if you've ever had the experience of running into one of these errata before. Hopefully, the folks at AMD, Intel, and IBM will see this, and it'll make its way into newer designs.
Monday, March 12, 2007
Daylight Saving Time is here...
Daylight Saving Time is here again in the good 'ole US of A. This year, the dates for the switch to and from summer time have changed. Naturally, despite the fact that this change has been known about for nearly two years, there was a last-minute scramble by various companies to get "patches" for their software out in time for the new Daylight Saving switchover.
And some of them didn't, in fact, get ready on time. Despite the fact that I (and the rest of my company) installed Microsoft's patches for Windows, this morning all of the meetings in our Outlook calendars were shifted by one hour. And we're not the only ones - I see in the news that this is causing problems all over. If someone's late for an appointment with you today, cut them some slack - it's probably Microsoft's fault.
Tuesday, February 27, 2007
What was that?
The project I was working on was the motion control system for a robot. Now, I should probably clarify that, so you don't get the wrong idea - we're not talking about C-3PO or R2-D2 here. This was an industrial robot, used for 3D imaging. The robot itself was made out of slabs of cast iron bolted together, was probably 8 feet tall, 4 feet wide, and 10 feet long, and weighed in the neighborhood of 4 tons.
The problem I was having was that the motion control was somewhat unresponsive - you'd move the joystick, and the translation table or the optical head would slowly start to move, and when you got to where you wanted to go, it'd keep on moving for a little while before coming to a stop.
As I was looking at the code, I found what I thought was the problem - I had simply put the wrong coefficient in for one of the control equations, so we weren't getting the proper exponential factor applied to the requested motion. A quick edit and recompile, and I was ready to test the new code.
I started the system up and very slightly moved the joystick. The translation table started to creep forward. I then pushed the stick over a little farther, and the table accelerated. So far, so good.
And then, something very bad happened. When I put the stick back to the rest position, the table didn't slow down. In fact, it kept speeding up. I tried pulling back on the stick, but that didn't seem to have any effect. I managed to turn off the power to the motors just before the table hit the hard stops at the end of its travel.
So this 600 pound cast-iron table slams into the rubber bumpers at the back of the machine, going something like 30 feet per second. The whole machine rings like a gong, and all work in the entire shop grinds to a halt as everybody looks over to see this multi-ton machine gently rocking back and forth. I was really worried that I'd wrecked at least part of a very expensive machine, but a later calibration run showed that the mechanical parts of the robot were just fine.
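Just for a rough sense of scale, converting those figures (a 600 pound table at about 30 feet per second) into SI units gives the kinetic energy at impact:

```python
# Back-of-the-envelope kinetic energy of the table at impact.
# The mass and speed are the figures from the story above;
# the conversion factors are the standard ones.
mass_kg = 600 * 0.4536       # pounds to kilograms
speed_ms = 30 * 0.3048       # feet per second to meters per second
energy_joules = 0.5 * mass_kg * speed_ms ** 2
print(round(energy_joules))  # roughly 11 kJ
```

That's on the order of ten kilojoules dumped into the bumpers, which goes some way toward explaining the gong.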
It turns out that there were two problems in the system - the incorrect exponential on the input side that I'd corrected, and an additional incorrect damping factor on the output side. The upshot of all this is that once the system was up to speed, it took a very long time to slow down, but the bug on the input side ensured that it never got up to more than a tiny fraction of the maximum speed.
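The two-stage pipeline described above can be sketched roughly like this. To be clear, the names, coefficients, and the exact shaping function here are my illustrative guesses, not the original code: the joystick deflection is shaped on the input side to produce a target speed, and a damping factor on the output side pulls the commanded speed toward that target on each control tick.

```python
# Illustrative sketch of a joystick -> motor-speed control loop with
# shaping on the input side and damping on the output side.
# All names and coefficients are hypothetical.

def shaped_target(stick, exponent=2.0, max_speed=1.0):
    """Map a joystick deflection in [-1.0, 1.0] to a target speed.

    A wrong coefficient here (the first bug) keeps the target speed at
    a tiny fraction of max_speed for typical stick deflections.
    """
    sign = 1.0 if stick >= 0 else -1.0
    return sign * max_speed * abs(stick) ** exponent

def step_speed(current, target, damping=0.5):
    """Advance the commanded speed one control tick toward the target.

    A damping factor near zero (the second bug) means the speed decays
    very slowly once the stick returns to rest - the table keeps moving.
    """
    return current + damping * (target - current)

# With the stick back at rest (target 0) and a healthy damping of 0.5,
# the speed halves on every tick; with damping = 0.01 it barely decays.
speed = 1.0
for _ in range(5):
    speed = step_speed(speed, 0.0, damping=0.5)
print(speed)  # decays quickly toward zero
```

With both bugs in place the two errors masked each other: the broken input shaping kept the speed low enough that the broken damping never mattered. Fixing only the input side was what turned a sluggish table into a runaway one.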
The next time I needed to make a change in those calculations, I did a "dry run" with the motors disconnected first...
Monday, February 12, 2007
Just barely better than no backup at all...
That kind of sucks, but at least I have relatively recent backups to restore from. Or, maybe I actually don't. I've been using the .Mac Backup program to do backups for the last year or so - prior to that, I was just bulk-copying stuff by hand to an external hard drive. The Backup program is a lot more convenient, and makes much better use of the space on the external drive.
I figured that surely, now that Backup is at version 3.1, it'd be rock-solid reliable, right? I mean, once they fixed that awful crashing bug I reported back in the 1.0 days, I hadn't noticed any problems, so everything is OK, right? Well, as it turns out, Backup 3.1 is no more reliable than the old Backup - it just has different bugs. Now, instead of crashing when backing up large numbers of files, it crashes when trying to restore them. If I'd been given a choice between these two behaviors, which do you think I would have chosen?
The one saving grace is that inside the broken Backup file package is a more-or-less standard Mac OS X disk image file. So, I can mount those files (one from the last full backup, and one from each of the incrementals), and hand-copy the files over from them. Let's hear it for unreliable backup software...