Interval Arithmetic – Or Why Numbers Don’t Look After Themselves

In SICP there’s a fairly long extended exercise on interval arithmetic. I strongly recommend going and reading it, both for the maths and the Lisp.
Besides being good for the soul, it opens up a question: is interval arithmetic a good idea, and if it is, why isn’t it generally implemented?

We’ve all seen it – a bug which we’ve had to chalk up to rounding error.

Floating point is all fine and dandy, but in the modern age, couldn’t we get around it? Besides the ad-hoc interval arithmetic in SICP, there is a formal proposal for such a system called Unums – so why aren’t we using interval arithmetic generally?


1 There is No ‘Perfect’ Representation

You’re thinking that floating point arithmetic is just an approximation to something better. You’re sick of typing 0.1 + 0.2 and seeing 0.30000000000000004. If I have to approximate, then why not track the approximation?

For one thing – you can track fractional numbers perfectly. Store them as pairs of integers. If you’re worried about the size of your integers – most languages have an arbitrary-precision integer implementation. Done – perfect fractions. The whole of the rational numbers bends to your will.
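In Python, for example, the standard library already gives you exactly this – here’s a quick sketch using the `fractions` module:

```python
from fractions import Fraction

# Exact rational arithmetic: each value is a pair of
# arbitrary-precision integers (numerator, denominator).
half = Fraction(1, 2)
third = Fraction(1, 3)

total = half + third
print(total)                                # 5/6
print(total.numerator, total.denominator)   # 5 6

# No rounding error accumulates, no matter how many operations:
assert sum([Fraction(1, 10)] * 10) == 1
```

Ten lots of one tenth really do sum to one – something the equivalent floating point sum famously fails to do.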

Except life isn’t rational.

There’s √2, π, e all waiting on the next street corner to beat you up.

Fine. Add in all the algebraic numbers (anything that’s the root of a polynomial with integer coefficients) – that covers all the √’s.

Add π, e, and all their combinations.

You haven’t even begun to cover the irrational numbers.

They’re literally uncountable – you can’t even start to list them.

The best you can do is model ones that you’re interested in, which should be enough for most. What numbers are important to you?

2 Intervals Don’t Behave

Add two intervals – you add the lower bounds, you add the upper bounds. Sorted.

  • Add (1 +/- 0.05) to (2 +/- 0.01)
  • Answer is (3 +/- 0.06)

So far so good. Subtraction makes sense, as do multiplication and division. In each case you’re looking for the range of the answer: what’s the biggest the answer could be, and what’s the smallest?

Awesome sauce.
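Here’s a minimal sketch of that idea in Python (the `Interval` class and its names are my own, not SICP’s): for addition, subtraction, and multiplication, the extremes of the result always occur at one of the endpoint combinations.

```python
from itertools import product

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

    def _combine(self, other, op):
        # For +, -, and * the extremes of the result occur at one of
        # the four endpoint combinations (NOT true for division).
        ends = [op(a, b) for a, b in product((self.lo, self.hi),
                                             (other.lo, other.hi))]
        return Interval(min(ends), max(ends))

    def __add__(self, other): return self._combine(other, lambda a, b: a + b)
    def __sub__(self, other): return self._combine(other, lambda a, b: a - b)
    def __mul__(self, other): return self._combine(other, lambda a, b: a * b)

a = Interval(0.95, 1.05)   # 1 +/- 0.05
b = Interval(1.99, 2.01)   # 2 +/- 0.01
print(a + b)               # roughly [2.94, 3.06], i.e. 3 +/- 0.06
```

Division is deliberately missing – as the next example shows, it needs more care than “try the endpoints”.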

Except – what about division again?

  • Divide (2 +/- 0.1) by (0.1 +/- 0.2)
  • Naively, looking at the endpoints:
    • At one end we have 2.1 / 0.3 = 7.0
    • At the other end we have 1.9 / (-0.1) = -19.0
    • Which is a respectable-looking interval: -19.0 to +7.0

But look again:

  • Let’s pick some other endpoints:
    • 2.1 / (-0.1) = -21.0 – Outside our first naive interval.
  • It gets worse – let’s not even try an end-point
    • 2.0 is within the first interval.
    • 0.01 is within the second interval.
    • 2.0 / 0.01 = 200.0 – well outside the naive answer!
  • It gets even worse
    • The second interval contains zero.
    • Hence the division explodes to infinity over the interval.

SICP recognizes this, and suggests modifying division so that it throws an error whenever the divisor interval spans zero. Which is fine.
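Here’s a sketch of that modified division as a standalone Python helper (my own naming, not SICP’s code): refuse any divisor interval that spans zero, otherwise the extremes really are at the endpoint combinations.

```python
def div_interval(x_lo, x_hi, y_lo, y_hi):
    # An interval that spans (or touches) zero has no well-defined
    # reciprocal, so signal an error rather than return nonsense.
    if y_lo <= 0 <= y_hi:
        raise ZeroDivisionError("divisor interval spans zero")
    # Safe case: check all four endpoint quotients.
    candidates = [x_lo / y_lo, x_lo / y_hi, x_hi / y_lo, x_hi / y_hi]
    return min(candidates), max(candidates)

print(div_interval(1.9, 2.1, 0.5, 1.0))   # (1.9, 4.2)

try:
    div_interval(1.9, 2.1, -0.1, 0.3)     # the example above
except ZeroDivisionError as e:
    print("error:", e)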

The problem in general is whether it’s easy to recognize these cases.

  • Suppose you have a function f(x) = 1 / (1 - 2x)
    • Is it obvious what a ‘bad’ value for x is?
  • Suppose you have a function g(x) = 1 / (0.2 + sin 4x)
    • Is it obvious what a bad value for x is?
  • Suppose you have a function h(x) = tan (1 + 7x)
    • Is it obvious what a bad value for x is?

Your interval arithmetic library is probably going to throw an exception somewhere – but it will be up to you to work out where the problem actually lies.
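For those three functions (reading g as 1 / (0.2 + sin 4x)), the bad inputs can be worked out analytically – a quick Python sketch, just to show how scattered they are:

```python
import math

# f(x) = 1 / (1 - 2x): a single pole, easy to spot.
f_bad = 0.5                      # where 1 - 2x = 0

# g(x) = 1 / (0.2 + sin 4x): a pole wherever sin 4x = -0.2 –
# infinitely many, spread along the whole real line.
base = math.asin(-0.2)
g_bad = [(base + 2 * k * math.pi) / 4 for k in range(3)]

# h(x) = tan(1 + 7x): a pole wherever 1 + 7x = pi/2 + k*pi,
# again infinitely many.
h_bad = [(math.pi / 2 + k * math.pi - 1) / 7 for k in range(3)]

print(f_bad)
print(g_bad)   # one family of g's poles
print(h_bad)
```

f has one obvious pole; g and h have infinitely many, and any interval wide enough to straddle one will blow up.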

3 Intervals Aren’t Psychic

Solve this:

  • 5 – 5 = ?

In normal arithmetic, the answer is fairly neatly zero. Suppose we have an interval. Solve:

  • (5 +/- 0.01) – (5 +/- 0.01) = ?

There are at least two answers:

  • Exactly 0
  • 0 +/- 0.02

Before you dismiss one, consider the case in the original SICP problem set, where the numbers are resistors with some tolerance.

  • What is the difference between two different 5 Ohm resistors, each with +/- 0.01 Ohm tolerance?
    • (5.0 +/- 0.01) – (5.0 +/- 0.01) = 0 +/- 0.02 Ohms
    • In the extreme case one is 5.01, and the other is 4.99.
  • If I have a circuit with a 5 Ohm resistor, and then I remove the 5 Ohm resistor, what is the remaining resistance?
    • (5.0 +/- 0.01) – (5.0 +/- 0.01) = 0 Ohms exactly
    • Regardless of the tolerance of the resistor, the resistor has been removed – it doesn’t leave a ‘tolerance hole’!

So two different situations, both sensibly represented by the same expression, but having different meanings.
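This is the classic dependency problem in interval arithmetic: the representation can’t tell “two independent 5 ± 0.01 values” from “the same value twice”. A Python sketch (my own helper, not a library):

```python
def sub(a, b):
    # Naive interval subtraction on (lo, hi) pairs:
    # smallest result is a.lo - b.hi, largest is a.hi - b.lo.
    return (a[0] - b[1], a[1] - b[0])

r1 = (4.99, 5.01)   # one 5 Ohm +/- 0.01 resistor
r2 = (4.99, 5.01)   # a *different* 5 Ohm +/- 0.01 resistor

# Two independent resistors: the spread is real.
print(sub(r1, r2))   # roughly (-0.02, 0.02)

# The *same* resistor removed from a circuit should give exactly 0,
# but the arithmetic has no way of knowing r1 is the same as r1:
print(sub(r1, r1))   # still roughly (-0.02, 0.02)
```

Both calls return the same widened interval; only the surrounding program knows that the second one should have been exactly zero.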

As a more complex but practical example, I worked for some time with a company that attempted to predict the timing properties of silicon chips. The path any signal takes through a chip has some variation: a particular path in the circuit will sometimes be slightly faster, sometimes slightly slower, based on what’s going on in the circuit. A key question then becomes:

  • If I need two signals at a certain point for a calculation, what’s the maximum time that I’ll need to wait from the first signal for the second to arrive?
    • Let’s say that they set off at the same time.
    • Ideally I’d like them to arrive at the same time. But let’s be pessimistic: let’s assume one signal runs as fast as it could, and the other as slow as it could. That’ll give me a good bound for the difference in arrival between the two signals.

So far so good – but sometimes the signals shared part of their paths. And it’s easy to observe that a path can’t be both fast and slow at the same time, i.e. for those common parts of the path we shouldn’t include any variation. This correction was significant for the calculations being performed, and a lot of effort was spent trying to eliminate this ‘extra’ error that was incorrectly added.
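A toy version of that correction, with invented numbers purely for illustration:

```python
# Path delays as (nominal, variation): the actual delay lies in
# [nominal - variation, nominal + variation].
shared   = (10.0, 1.0)   # segment common to both signals
unique_a = (4.0, 0.5)    # segment only signal A travels
unique_b = (6.0, 0.5)    # segment only signal B travels

# Naive bound: treat B as slow end-to-end and A as fast end-to-end.
slowest_b = (shared[0] + unique_b[0]) + (shared[1] + unique_b[1])
fastest_a = (shared[0] + unique_a[0]) - (shared[1] + unique_a[1])
naive_skew = slowest_b - fastest_a

# Corrected: the shared segment can't be fast for A and slow for B
# at the same time, so its variation cancels out of the difference.
corrected_skew = (unique_b[0] + unique_b[1]) - (unique_a[0] - unique_a[1])

print(naive_skew)       # 5.0 – includes 2.0 of impossible shared variation
print(corrected_skew)   # 3.0
```

The naive interval double-counts the shared segment’s variation in both directions; tracking which parts of the path are shared removes it.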


In summary, my point is that interval arithmetic cannot be a drop-in solution.

  • For almost all cases, floating point arithmetic is good enough.
  • In cases where it’s not, you need to be careful about edge cases – and as you move away from trivial operations the edge cases become more and more complex.
  • If you’re using something other than a scalar number, you probably need to decide why, and what it means.

In the cases where floating point isn’t enough, the programmer needs to understand the problem domain in order to handle errors in a meaningful fashion. An interval library can still help, but the engineer needs to understand the limits of the model.

  • What do interval operations mean? What are they modelling?
  • How important is it to keep intervals ‘tight’?
  • How important is it to detect divide by zero errors?

As always, spend some hammock time, understand the problem, and then write something awesome.

Journey to Clojure II

I left the previous article with a question. Why did I come back to Clojure last year, and decide that it was probably my favourite general purpose programming language?

I’ll come clean up front – although I love the language, it’s an open relationship. It’s not my language at work. It’s not my only language in my spare time. But it is the language that I reach for when coding up something new, or investigating something.

So why do I keep coming back to Clojure? It’s not the perfect language, but it does have the X-factor for me. For this post, I’ll focus on two features, which aren’t completely technical.

  1. The community.
  2. The culture.

The Community

For such a small project, I’ve been taken by how vibrant, smart, and productive the Clojure community is.

I’m lucky enough to work in Toronto, which has a vibrant tech community. And, naturally, it’s JavaScript that gets the big crowds – but Clojure does well. There are regularly 10+ people at a monthly talk night. For a supposedly niche language, with small industry penetration and exotic syntax, that’s really good.

This isn’t an elitist community. There’s a genuine feeling that the Clojure idioms are no harder than others, and the problem is more often one of familiarity. There are excellent efforts, such as Clojure Bridge, which are demonstrating that Clojure is an excellent teaching language for computer science concepts.

This isn’t a closed community. Because the majority of people involved will be doing something else in their day jobs, there’s a rich cross pollination.

This is a community that people are excited to be involved in.

The Culture

This is a really interesting one, and something that is not readily apparent on first entering the language. It’s not apparent in the terse description of Clojure as a lisp for the JVM.

It’s not apparent when first wading through Clojure syntax in a basic tutorial. Maybe you skimmed over Rich Hickey’s rationale, and picked out ‘lisp’, ‘JVM’, ‘concurrency’, check, check, check.

Clojure is a well thought out language.

It has a winning mixture of practicality, and trail blazing when it looks like there’s a genuine benefit.


  • It’s a hosted language, with excellent interoperability with the host.
  • The libraries and build tools are excellent – it’s not a second class citizen.
  • There’s a continual focus on this being a practical language, for real problems, and compromises are made for this.

Trail Blazing

I’m not sure how significant these newer ideas will be, or how widely they’ll be adopted, but it’s exciting to see them being rolled out as part of a language.

The team developing Clojure have obviously had a lot of hammock time.

Journey to Clojure I

For the past year, I’ve been using Clojure as my go-to hobby language.


I tried thinking back to my first lisp experiences. It must have been eight years ago, finally sitting down and doing some Lisp programming.

I’d seen this XKCD before, or maybe this one, and read passing comments connecting it with AI. It had parentheses. It was an old language that refused to die. People were passionate about it. And it had parentheses. And more parentheses.

I was a C++ programmer at an electronics startup at the time. I’d been there around a year, I’d settled on Emacs as my editor of choice. I’d learnt some Tcl, because that was a standard language in the electronics design industry at the time. And then I decided that I should probably learn what this Lisp thing was.

What had I learnt?

That Lisp at its root is simple. That it is powerful, with unique macro features. That many modern implementations were weird and sometimes arcane. That there were a few famous examples of its use, such as Emacs, AutoCAD, and Paul Graham.

That the Space-Cadet keyboard was a-fricking-mazing.

That languages didn’t have to be essentially un-parseable, like C++.

That people who studied Computer Science at university generally suffered through a term of Lisp (or maybe even SICP), before joyfully pursuing Java, or C, or C++, or Python. I’d studied Mathematics, so I’d missed out on that.

So I moved on from the Lisps, the books went back to the libraries. I toyed with the idea of one day writing a DSL around a scheme interpreter. I came back, briefly, to Racket when I was looking for alternatives to PowerPoint. But generally I put Lisps out of my mind as intriguing but impractical.

I knuckled down on the C++.

Years later I moved on to a C# job. I kept learning other languages, because if there’s one thing that I think a professional programmer needs to be, it’s multilingual. Not to widen your job pool, but for the vast benefit to be gained from appreciating the different idioms of different communities.

I played with Haskell, and Javascript, and Elixir (skimming over Erlang). I read about but then skipped PHP and Perl. I learnt SQL from necessity. I did some useful work in F#. I dipped into many others.

I learnt my weaknesses, and then worked on my weaknesses as a programmer. I learnt the naivety of believing that a language is a silver bullet. And I learnt that a choice is still important.

And I rediscovered Clojure as one of my favourite languages. What had changed?

(Continue to Part 2.)