Sunday, October 28, 2007

The C++ FQA (frequently questioned answers)

No updates for 6 months, then two in a day...
Via a discussion on Joel On Software, I got directed to this:
The C++ FQA

It's a response, of sorts, to the C++ FAQ. You can read more about it on the site, but he basically goes through the questions in the C++ FAQ, and explores what it is about C++ that makes those questions "frequently asked". There is some sarcasm, and some rather insightful commentary on why C++ is so very hard to develop real expertise in.

It would be neat to see something like this done for Java and C#. I think the idea of looking at a language from the standpoint of "why are these areas confusing to so many users?" is an interesting approach.

I have always felt that my resistance to really learning C++ was a failure on my part, but after reading "Effective C++", and with the backing of the C++ FQA, I feel a little better about taking the position that C++ is really far too complex for the good of the people who need to work with it.

Wednesday, October 24, 2007

You're probably using "unsigned" incorrectly

You're probably using "unsigned" incorrectly, and that makes me sad.

Chances are that if you write code in C (or related languages like Java, C#, or C++), then you've come across the "unsigned" type, and its relatives "unsigned long" and "unsigned short". If you've written code that uses unsigned types, it's also quite likely that you've used them incorrectly, at least by my standards.

Misuse of "unsigned" in C is one of those things that I keep seeing over and over, with different developers, even folks who really ought to know better. I find it immensely frustrating. If I had to pick one aspect of C that was responsible for more stupid bugs than anything else, this'd be one of the top candidates. Probably not the top candidate - the string-handling functions in the standard library probably win that handily.

Here are my simple rules for the use of unsigned integer types:
  • Don't use unsigned just because "that value should never be less than zero"
  • Always compile your code with all warnings enabled
  • Avoid mixing the use of signed and unsigned integers in the same calculation
  • Do use unsigned when modelling hardware registers that hold unsigned values
  • Do use unsigned when performing bit-wise arithmetic
Okay, back to the subject at hand, and let's take a look at those rules, shall we?

Don't use unsigned just because "that value should never be less than zero"

This is by far the most common abuse of unsigned types that I see on a regular basis. It's not even a bad idea, as far as it goes. A majority of the values in a typical program are going to be non-negative by design - sizes, screen coordinates, loop counters, etc, etc. The problem really isn't unsigned values per se, it's how unsigned and signed values interact.

Part of the problem is that constant values in C are signed by default, which means that signed values will creep into your program unless you make a concerted attempt to avoid them. When you compare signed and unsigned values, the results will often not be what you expect. For example:
unsigned four = 4;
int neg_one = -1;

if (neg_one < four)
{
    printf("true\n");
}
else
{
    printf("false\n");
}
Looking at this code, it's pretty obvious what the programmer intended, but in fact the comparison "neg_one < four" evaluates to false in this case. This is because the signed value will be "promoted" to unsigned, turning it from a small negative number to a very large positive number, before the comparison is made.

In actual cases of this problem in the wild, the declarations will typically be a long way away from the comparison, and it won't be at all obvious what the cause of the problem actually is. I've seen experienced programmers stare at the debugger in disbelief when it seems to be showing them that their program thinks that -1 is greater than 4. And since constants are signed by default, you can replace the "neg_one" variable in the example with the literal "-1", and you'll get the same behavior.

A related problem comes with the handling of sizes and lengths. A size is typically going to be a non-negative value, so it "makes sense" to use unsigned variables. The problem is that sizes are often calculated by subtracting one value from another. If you accidentally subtract a larger value from a smaller one with signed variables, you get a negative size, which you can at least detect and handle (with an assert(), if nothing else). If you're using unsigned math, you just get a huge bogus "size", which may or may not be immediately obvious.

Always compile your code with all warnings enabled

Admittedly, this rule is more general, rather than specifically tied to problems with using "unsigned" correctly. Most C and C++ compilers have an option to warn on comparisons between signed and unsigned values, when there's a chance the comparison will be interpreted incorrectly. It's even more frustrating to debug one of these issues when compiling with warnings enabled would have produced a warning message that points to exactly where the problem is, but some yutz has that particular warning disabled.

Of course, they have it disabled because enabling warnings on comparisons between signed and unsigned tends to generate zillions of bogus warnings. That's just a good reason to avoid using unsigned variables, where possible - it obscures the actual problem areas with bogus warnings.
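For what it's worth, here's what that looks like with gcc (the file name is made up; other compilers have their own equivalents). Note that in C mode, gcc's signed/unsigned comparison warning comes in with -Wextra, not -Wall; in C++ mode, -Wall covers it:

```shell
# Enable the broad warning sets, including signed/unsigned comparisons:
gcc -Wall -Wextra -o myprog myprog.c

# Or ask for just that one warning explicitly:
gcc -Wall -Wsign-compare -o myprog myprog.c
```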

Avoid mixing the use of signed and unsigned integers in the same calculation

Given the example above of a simple comparison going wrong, it ought to be obvious that anything more complex is at least as likely to go subtly wrong in some way. Again, the real problem arises because the declarations of the variables (and constants) will be far, far away from the point of the errant calculation.

So, when is it okay to use unsigned types?

Do use unsigned when modelling hardware registers that hold unsigned values

This is most likely how unsigned types got into C in the first place. If you're writing low-level OS or driver code that talks to the hardware, you'll often find that the unsigned int type exactly matches what the hardware is using. This also segues nicely into the next rule...

Do use unsigned when performing bit-wise arithmetic

If you're doing something with fancy binary arithmetic, like an encryption algorithm, or something else where you're using an integer as a collection of bits, unsigned types are probably what you want. I'd put something in here about using "unsigned" with bitfields, but the bitfield construct in C is pretty useless (and a topic for another rant), so I'll just mention that it's worth thinking about whether you want an unsigned or signed bitfield, if you ever use them.

Unfortunately, you actually can't avoid "unsigned" values

As it turns out, the standard library's own interfaces use unsigned types - size_t, most notably. So, your interactions with the standard library will occasionally force unsigned values to creep into your program. Still, that's no reason for you to make it any harder on yourself.