Mean Means

I am, for whatever reason, computing some averages.  Should be simple enough, right?  The average of 4 and 6 is (4 + 6) / 2 = 5, right?  Yes, but no.  TL;DR: computers are dumb!

Any sensible person would implement the computation in a C/Java style language like this:
[java] int avg1(final int a, final int b) {
    return (a + b) / 2;
}
[/java] Computers have two idiotic problems that complicate things.  First, all computation is done using integers.  There’s no such thing as float or double, and people who think so are making programming errors as we speak.  Second, all integers in computers are bounded.  Normally they are between -2^31 = -2.147.483.648 and 2^31 – 1 = 2.147.483.647, but that can also be limited to -2^15 = -32.768 and 2^15 – 1 = 32.767 or -2^7 = -128 and 2^7 – 1 = 127, or expanded to -2^63 = -9.223.372.036.854.775.808 and 2^63 – 1 = 9.223.372.036.854.775.807.  The technical reason is two’s complement and the non-technical reason is “for good/dumb reasons.”
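
The wrap-around is easy to see for yourself; this tiny check (the class name is mine, nothing beyond the JDK is assumed) shows what happens one past the top:

```java
// Demonstrates two's complement wrap-around at the 32-bit boundary.
public class WrapDemo {
    public static void main(String[] args) {
        System.out.println(Integer.MAX_VALUE);     // prints 2147483647
        System.out.println(Integer.MAX_VALUE + 1); // wraps: prints -2147483648
    }
}
```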

The first limitation causes issues when computing averages of, say, 3 and 6.  3 + 6 = 9.  9 / 2 = 4.5.  Which is very bad at being an integer, so the computer just returns 4.  How about -3 + -6 = -9, and -9 / 2 = -4.5?  That’s -4.  We always round towards zero (and not towards negative infinity).  Actually, the avg1 implementation above will work fine for this.

The second limitation causes issues with this implementation, though. If we compute the average of 2.000.000.000 and 2.000.000.000, we would expect the result to be (2.000.000.000 + 2.000.000.000) / 2 = 4.000.000.000 / 2 = 2.000.000.000, right? That’s not what avg1 will return. It will instead return -147.483.648. The reason is again “for dumb reasons,” though technically what happens is that once we reach the maximum value of 2.147.483.647 and add one, we get 2.147.483.647 + 1 = -2.147.483.648, so in the mind of the computer (2.000.000.000 + 2.000.000.000) / 2 = -294.967.296 / 2 = -147.483.648.
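
If you want to watch it go wrong, here is a minimal check (the demo class is mine, avg1 is as above):

```java
// avg1 as above; the sum wraps around before the division happens.
public class Avg1Demo {
    static int avg1(final int a, final int b) {
        return (a + b) / 2;
    }

    public static void main(String[] args) {
        System.out.println(avg1(4, 6));                         // prints 5
        System.out.println(avg1(2_000_000_000, 2_000_000_000)); // prints -147483648
    }
}
```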

Ok, let’s try not adding large numbers, then, and instead go with something like this; assuming we want to compute the average of min and max where min <= max, we use a rewriting of (min + max) / 2 = (min + (min + (max – min))) / 2 = ( (min + min) + (max – min) ) / 2 = min + (max – min) / 2:

[java] int avg2(final int a, final int b) {
    final int min = Math.min(a, b);
    final int diff = Math.max(a, b) - min;
    return min + diff / 2;
}
[/java]

Now, we get: min(2.000.000.000, 2.000.000.000) = 2.000.000.000, diff = max(2.000.000.000, 2.000.000.000) – 2.000.000.000 = 0, and avg2 = 2.000.000.000 + 0 / 2. While dividing by zero is not allowed, dividing zero is, and this yields the desired result of 2.000.000.000. Great, right? Nope, fuck you! Now, try that same thing with the average of -2.000.000.000 and 2.000.000.000. We get min = -2.000.000.000 and diff = 2.000.000.000 – -2.000.000.000 = 4.000.000.000, which overflows to -294.967.296, so we get the new result of -2.000.000.000 + -294.967.296 / 2 = -2.147.483.648, which excitingly is as far from correct as we can get.
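
A quick check confirms both the good case and the disaster (the demo class is mine, avg2 is as above):

```java
// avg2 as above; diff itself overflows when the inputs straddle zero widely.
public class Avg2Demo {
    static int avg2(final int a, final int b) {
        final int min = Math.min(a, b);
        final int diff = Math.max(a, b) - min;
        return min + diff / 2;
    }

    public static void main(String[] args) {
        System.out.println(avg2(2_000_000_000, 2_000_000_000));  // prints 2000000000
        System.out.println(avg2(-2_000_000_000, 2_000_000_000)); // prints -2147483648
    }
}
```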

Thinking cap back on, how about doing the simplest thing instead, and computing (a + b) / 2 = a / 2 + b / 2:
[java] int avg3(final int a, final int b) {
    return a / 2 + b / 2;
}
[/java]

Now no numbers will overflow, because we’re not adding large numbers anymore. Instead, we make them smaller by dividing them first. Indeed, now we get no overflows anymore and correctly compute 4 / 2 + 6 / 2 = 2 + 3 = 5. We also correctly handle 3 / 2 + 6 / 2 = 1.5 + 3 = 1 + 3 = 4, which is as expected after rounding. 2.000.000.000 / 2 + 2.000.000.000 / 2 = 1.000.000.000 + 1.000.000.000 = 2.000.000.000 and -2.000.000.000 / 2 + 2.000.000.000 / 2 = -1.000.000.000 + 1.000.000.000 = 0.

So, all is good, right? Of course not. World is a fuck, you are now dead. The average of 3 and 3: 3 / 2 + 3 / 2 = 1.5 + 1.5 = 1 + 1 = 2. Crap. The problem is that rounding errors accumulate, so you should always divide last when working with integers.
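
The accumulating error is easy to reproduce (demo class is mine, avg3 is as above):

```java
// avg3 as above; each half drops its .5, so two odd inputs lose a whole 1.
public class Avg3Demo {
    static int avg3(final int a, final int b) {
        return a / 2 + b / 2;
    }

    public static void main(String[] args) {
        System.out.println(avg3(-2_000_000_000, 2_000_000_000)); // prints 0
        System.out.println(avg3(3, 3));                          // prints 2, not 3
    }
}
```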

Ok, I’m done thinking. On to Google we go and find this solution:
[java] int avg4(final int a, final int b) {
    return (a & b) + ((a ^ b) >> 1);
}
[/java] It’s patented and uses bitwise and, xor, and shifting, so you know it is good! Here, the and operation is kind-of a minimum computation, the xor is kind-of an addition, and the shift is a division by two, so this is a C programmer’s much less readable version of avg2 above. Except it works even less. It breaks at -6 and 3. I’ll not go into the details, but -6 and 3 = 2, -6 xor 3 = -7, and -7 >> 1 = -4, so the result is -2, which is not the (-6 + 3) / 2 = -3 / 2 = -1.5 = -1 we would expect.
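
You can verify the breakage yourself (demo class is mine, avg4 is as above); the culprit is that >> rounds towards negative infinity while Java’s / rounds towards zero:

```java
// avg4 as above; >> floors, unlike Java's truncating division.
public class Avg4Demo {
    static int avg4(final int a, final int b) {
        return (a & b) + ((a ^ b) >> 1);
    }

    public static void main(String[] args) {
        System.out.println(avg4(4, 6));  // prints 5
        System.out.println(avg4(-6, 3)); // prints -2, not -1
    }
}
```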

We notice the shift operation rounds incorrectly, so we might think that this is better:
[java] int avg5(final int a, final int b) {
    return (a & b) + ((a ^ b) / 2);
}
[/java]

It’s not. It works for -6 and 3 (yay!), but breaks at -3 and 6. We get (-3 & 6) + ((-3 ^ 6) / 2) = 4 + (-5 / 2) = 4 + (-2.5) = 4 + -2 = 2. And not (-3 + 6) / 2 = 3 / 2 = 1.5 = 1.
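
Again, easily reproduced (demo class is mine, avg5 is as above):

```java
// avg5 as above; / truncates towards zero, which now breaks the other case.
public class Avg5Demo {
    static int avg5(final int a, final int b) {
        return (a & b) + ((a ^ b) / 2);
    }

    public static void main(String[] args) {
        System.out.println(avg5(-6, 3)); // prints -1, as hoped
        System.out.println(avg5(-3, 6)); // prints 2, not 1
    }
}
```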

Ok, how about just doing the obvious thing you have been screaming about all this time:
[java] int avg6(final int a, final int b) {
    return (int) (((long) a + (long) b) / 2);
}
[/java]

Now, we move from limited 32-bit computations to powerful 64-bit computations, and never get an overflow. The rounding is fucked but in the expected manner. And this works. It has another issue, though: what if we want to do the computation on longs? Well, long longs aren’t what they used to be, and there’s something inherently inelegant in insisting on doing 128-bit arithmetic just to compute the average of two simple values.
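
For the record, avg6 really does survive the extreme cases (demo class is mine, avg6 is as above):

```java
// avg6 as above; the long sum cannot overflow for two ints.
public class Avg6Demo {
    static int avg6(final int a, final int b) {
        return (int) (((long) a + (long) b) / 2);
    }

    public static void main(String[] args) {
        System.out.println(avg6(2_000_000_000, 2_000_000_000));         // prints 2000000000
        System.out.println(avg6(Integer.MIN_VALUE, Integer.MAX_VALUE)); // prints 0
    }
}
```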

Here, finally, is the solution I went with:
[java] int avg7(final int a, final int b) {
    if (a >= 0 && b >= 0) return a / 2 + b / 2 + (a & b & 1);
    if (a <= 0 && b <= 0) return a / 2 + b / 2 - (a & b & 1);
    return (a + b) / 2;
}
[/java]

It’s based on avg3: instead of adding first, we divide first. This introduces an error in some cases, which we then correct for. Basically, if the two values have the same sign, we run the risk of overflow and use avg3. We correct for the rounding issue by adding/subtracting one if both values are odd (a & b & 1 is one if both values are odd, and zero otherwise). If the two values have different signs, we just go with the initial avg1 solution: the sum can then never overflow in either direction, so we can use the simple solution, which has no rounding issues.
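
To convince myself, here is a brute-force check of avg7 against the 64-bit avg6 reference, on the corner cases from this post plus a window of small values (the harness is mine, not part of the original code):

```java
// Checks avg7 against the 64-bit reference on tricky corners
// and on a brute-force window of small values.
public class Avg7Demo {
    static int avg7(final int a, final int b) {
        if (a >= 0 && b >= 0) return a / 2 + b / 2 + (a & b & 1);
        if (a <= 0 && b <= 0) return a / 2 + b / 2 - (a & b & 1);
        return (a + b) / 2;
    }

    static int reference(final int a, final int b) {
        return (int) (((long) a + (long) b) / 2);
    }

    public static void main(String[] args) {
        final int[] corners = { Integer.MIN_VALUE, -2_000_000_000, -6, -3, 0,
                                3, 6, 2_000_000_000, Integer.MAX_VALUE };
        for (final int a : corners)
            for (final int b : corners)
                if (avg7(a, b) != reference(a, b))
                    throw new AssertionError(a + ", " + b);
        for (int a = -1000; a <= 1000; a++)
            for (int b = -1000; b <= 1000; b++)
                if (avg7(a, b) != reference(a, b))
                    throw new AssertionError(a + ", " + b);
        System.out.println("avg7 matches the 64-bit reference");
    }
}
```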

Why would I bother with this? Well, I am implementing a simple binary search algorithm, and I know for a fact that I risk searching using the largest/smallest possible values supported by the type, so I cannot just use the simple implementation.
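
For illustration, here is roughly the shape of such a search with avg7 wired in; the harness and names are mine, not from my actual code. It finds the smallest value satisfying a monotone predicate, and assumes the predicate holds at hi and that the answer is above Integer.MIN_VALUE (so hi = mid - 1 cannot wrap):

```java
import java.util.function.IntPredicate;

// A sketch (harness and names are mine): binary search for the smallest x in
// [lo, hi] with ok.test(x) == true, where ok is monotone (all false, then all
// true) and ok(hi) holds.  With endpoints like Integer.MIN_VALUE and
// Integer.MAX_VALUE, a naive (lo + hi) / 2 midpoint would overflow; avg7 won't.
public class ValueSearch {
    static int avg7(final int a, final int b) {
        if (a >= 0 && b >= 0) return a / 2 + b / 2 + (a & b & 1);
        if (a <= 0 && b <= 0) return a / 2 + b / 2 - (a & b & 1);
        return (a + b) / 2;
    }

    static int search(int lo, int hi, final IntPredicate ok) {
        int ans = hi; // assumes ok(hi); also assumes the answer > Integer.MIN_VALUE
        while (lo <= hi) {
            final int mid = avg7(lo, hi); // safe even at the extremes
            if (ok.test(mid)) { ans = mid; hi = mid - 1; }
            else lo = mid + 1;
        }
        return ans;
    }

    public static void main(String[] args) {
        // Smallest non-negative int, searched from the extreme endpoints.
        System.out.println(search(Integer.MIN_VALUE, Integer.MAX_VALUE, x -> x >= 0)); // prints 0
    }
}
```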
