Why I Love ECMAScript 4: Real Decimals

By Chris Pine

Introduction

In this article, I explore ECMAScript 4's ability to use real decimal floating points, in addition to the binary floating points traditionally used by most programming languages. Languages based on ECMAScript 3, such as ActionScript 2.x and JavaScript 1.x, don't have this ability - to make use of decimal floating points, you'll need a language based on ECMAScript 4, such as ActionScript 3 (available in Adobe Flash 9 and Flex 2/3) or JavaScript 2 (to be supported in future versions of Opera, but not right at the moment).

What's the problem with binary floats anyway?

So, let's get started with a simple experiment - copy the following line, paste it into your URL field, and hit return:

javascript:alert (0.1 + 0.1);

Actually, don't bother. You don't need a computer to tell you the answer to this one, right? You figured out that it was 0.2 faster than you could copy-n-paste it. For advanced calculations, though, there's really no way to do it in your head. Advanced calculations like this one:

javascript:alert (0.1 + 0.2);

(...drum roll, please...)

Results in 0.30000000000000004

Seriously.

And if we double the numbers we're adding, we get double the result of the last equation, right?

javascript:alert (0.2 + 0.4);

Yeah, maybe in your antiquated math class, but not here - the above equation oddly gives us 0.6000000000000001

So what's going on? Well, first off, it's not a mistake. Your computer is not capable of making mistakes (despite mountains of evidence to the contrary). No, your computer is correctly doing binary floating-point arithmetic. The problem is that you are typing numbers in decimal, but your computer works in binary, and the two don't always match up very well.

You know how one-third is not 100% accurately represented in decimal? 0.33333... It's a lot like that, but in this case the problem is that one-fifth (and, more to the point, one-tenth) cannot be accurately represented in binary. So you write 0.1, but ECMAScript 3 doesn't see that as one-tenth. It converts and rounds it to a binary floating-point number that is extremely close to one-tenth. When you add two of these numbers, it's all done in binary to produce a new binary floating-point number. When you actually want to see the answer, though, it has to be converted back into decimal. Sometimes the errors introduced by these conversions and the subsequent rounding happen to cancel out:

javascript:alert (0.6 + 0.2);

Gives us the result 0.8

But other times they don't:

javascript:alert (0.7 + 0.1);

Gives us 0.7999999999999999
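
If you're curious what's really being stored, you can ask for more digits than ECMAScript normally shows. toPrecision is plain ECMAScript 3 (the choice of 20 significant digits is arbitrary), so this works in your address bar today:

javascript:alert ((0.1).toPrecision(20));
javascript:alert ((0.7 + 0.1).toPrecision(20));

The first should show 0.10000000000000000555 - the double closest to one-tenth - and the second 0.79999999999999993339, which is the same value as the 0.7999999999999999 above, just with more digits on display.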

And this isn't just ECMAScript, of course - you'll see the same thing in any language using binary floating points. To get around these gotchas, ECMAScript 4 gives us the option of using actual decimal floating points.

Using real decimals in ECMAScript 4

The easiest way to use the new decimals is simply to write use decimal at the top of your script. This will cause all numeric literals to be interpreted as decimal literals. In many cases, this is all you want, and you're done.
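
For instance, here's a quick sketch using the syntax from the ECMAScript 4 proposal (it won't run in an ECMAScript 3 engine, so treat it as illustration rather than something to paste into today's browsers):

use decimal;

var a = 0.1;      //  a decimal literal, thanks to the pragma
var b = a + 0.2;  //  decimal arithmetic: exactly 0.3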

However, if you are doing lots of calculations, you'll start to notice that decimal floats aren't as fast as binary floats, and eat up twice as much memory. Computers are binary beasts, after all (for the moment, anyway - some manufacturers are committing to supporting decimal floats in hardware). So sometimes you really only want certain parts of your code to use decimal (maybe you're using a library that depends on fast floats or something). Fortunately, the use decimal pragma is lexically scoped, so you can limit its effects to a block:

{
  use decimal;
  
  var a = 0.1;    //  a is a decimal
  var b = 0.2;    //  b is a decimal
  var c = a + b;  //  c is a decimal (0.3)
}

var d = 0.1 + 0.2;  //  d is a double (0.30000000000000004)

Or you could turn it around, and have your code use decimals everywhere except for the blocks where you need speed. For even finer granularity, you can use decimals outside of a use decimal pragma with the decimal literal syntax:

var a = 0.1m;   //  a is a decimal
var b = 0.2m;   //  b is a decimal
var c = a + b;  //  c == 0.3m

The m is borrowed from C# (which also has decimal floats) and stands for "money" (another use case where you REALLY want 0.1 to mean 0.1).
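
To make that concrete, here's a hypothetical money calculation using the literal syntax above (again, ECMAScript 4 proposal syntax, so consider it a sketch):

var dime = 0.10m;                //  ten cents, stored exactly
var total = dime + dime + dime;  //  exactly three tenths - no rounding error

var sloppy = 0.1 + 0.1 + 0.1;    //  binary doubles: 0.30000000000000004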

Summary

So that's a wrap - short and sweet and to the point. If you feel you need to, you can read more about the specifics of conversions and rounding on the ECMAScript 4 committee wiki.

Most languages don't have a built-in decimal type (exceptions include C#, REXX...), it's kind of a pain to implement, and binary floats are sort of similar (and always faster), so I can see why language designers don't bother. Sort of. But on the other hand, it's about time.

This article is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 license.
