This past weekend, I needed to find a good fraction to approximate a given value. Along the way, I learned about continued fractions, and emerged with a neat, efficient algorithm. The only trouble is, I can't figure out why it generates the correct answer!
I have a data compression application in mind, based on linear interpolation between two points. Since Vena has already developed a library for unlimited-precision fractions, I figured I'd use that to pre-compute the slope between the points, and then I'd interpolate the "rise" by multiplying the "run" by the slope.
Our fraction class has two implementations: one called `BigFraction` using Java's `BigInteger` for the numerator and denominator, and a much faster one called `SmallFraction` based on `long` for reasonably small fractions. For performance reasons, I'd like to use `SmallFraction` whenever possible.
Since the interpolation can tolerate small errors, there's no point in using an enormously precise slope fraction. We can limit the numerator and denominator to, say, six digits; that way, even when we multiply by the "run" (which is limited to be a 32-bit `int`), the result still fits in a 64-bit `long`.
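To make that concrete, here's a quick sanity check of the arithmetic (just an illustrative sketch, not code from the library):

```java
public class OverflowBound {
    public static void main(String[] args) {
        long maxNumerator = 999_999L;          // six-digit slope numerator
        long maxRun = Integer.MAX_VALUE;       // largest 32-bit "run"
        long worstProduct = maxNumerator * maxRun;  // about 2.1e15
        // Long.MAX_VALUE is about 9.2e18, so the product fits with
        // more than three orders of magnitude to spare
        System.out.println(worstProduct < Long.MAX_VALUE / 1000);  // true
    }
}
```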
Now, how should we approximate the slope with a fraction having its numerator and denominator limited to six digits? Here's one straightforward way:
- Take the numerator (or denominator, if that's larger)
- If it's under one million, stop
- Otherwise, scale it to 999,999, and scale the denominator (or numerator) accordingly
For example, with this technique, pi would become 999999/318309 which is off by about two parts per million.
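Here's a sketch of that scaling step. Since I can't show Vena's actual `BigFraction` API, this hypothetical version works on raw `BigInteger` values and assumes the slope arrives as a numerator/denominator pair:

```java
import java.math.BigInteger;

public class ScaleDown {
    static final BigInteger LIMIT = BigInteger.valueOf(999_999);

    // Scale num/den so that the larger of the two becomes 999,999
    static long[] scaleToSixDigits(BigInteger num, BigInteger den) {
        BigInteger larger = num.max(den);
        if (larger.compareTo(LIMIT) <= 0) {
            // already under one million on both sides: nothing to do
            return new long[] { num.longValueExact(), den.longValueExact() };
        }
        // scale the larger to 999,999 and the other proportionally
        return new long[] {
            num.multiply(LIMIT).divide(larger).longValueExact(),
            den.multiply(LIMIT).divide(larger).longValueExact()
        };
    }

    public static void main(String[] args) {
        // pi as a 16-digit fraction: 3141592653589793 / 1000000000000000
        long[] r = scaleToSixDigits(
            BigInteger.valueOf(3_141_592_653_589_793L),
            BigInteger.valueOf(1_000_000_000_000_000L));
        System.out.println(r[0] + "/" + r[1]);  // 999999/318309
    }
}
```

Running it on a 16-digit fraction for pi reproduces the 999999/318309 result above.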
Simple, right? But on the weekend, I started to wonder whether this was the best I could do.
It turns out the problem I'm trying to solve is called the best rational approximation problem, which is closely related to a very cool concept called continued fractions. I don't have the space in this blog post to describe continued fractions in detail, but the Wikipedia page does a very good job of explaining them.
Here's an example of a continued fraction representing pi:

3 + 1/(7 + 1/(15 + 1/(1 + 1/(292 + …))))
This is a pretty cumbersome way to write a fraction, so instead, they write it using a special notation, with the whole number part followed by a semicolon, followed by a comma-separated list of the numbers you'd see in the denominator. For pi, we'd write:
[3; 7, 15, 1, 292, 1, 1, 1, …]
The reason this interests me is that wherever you get tired of writing a continued fraction and decide to stop, you'll have a new record for best approximation so far. For pi, the best integer approximation is 3. The next-best approximation† is the continued fraction
[3; 7], which is 3+1/7, or 22/7. The next-best is
[3; 7, 15], which is 3+1/(7+1/15), or 333/106; followed by
[3; 7, 15, 1] which is 355/113. (This last fraction has half as many digits as the 12-digit monster we computed before, yet it's 20 times closer to the right value!)
In general, the continued fraction written
[a; b, c, d, e] would correspond to this expression:
a + 1/(b+1/(c+1/(d+1/(e))))
All those parentheses mean that, in effect, continued fractions are evaluated from right to left. This is going to pose a serious problem very shortly.
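To make the right-to-left evaluation concrete, here's a small sketch (the names and structure are mine) that evaluates a list of terms from the innermost denominator outward, tracking the running value as a numerator/denominator pair:

```java
public class RightToLeft {
    // Evaluate [a0; a1, ..., an] from right to left,
    // returning {numerator, denominator}
    static long[] evaluate(long[] terms) {
        long num = terms[terms.length - 1], den = 1;  // innermost value
        for (int i = terms.length - 2; i >= 0; i--) {
            // x -> terms[i] + 1/x, i.e. (terms[i]*num + den) / num
            long next = terms[i] * num + den;
            den = num;
            num = next;
        }
        return new long[] { num, den };
    }

    public static void main(String[] args) {
        long[] r = evaluate(new long[] { 3, 7, 15, 1 });
        System.out.println(r[0] + "/" + r[1]);  // 355/113
    }
}
```

Notice that appending a new term changes the innermost value, so nothing from the previous evaluation can be reused; that's the problem in a nutshell.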
Since continued fractions are so good at approximating things, they lend themselves to a straightforward approximation algorithm, which goes something like this:
1. Set `C` to the continued fraction representation of the ideal slope.
2. Check whether the fraction representation of `C` has numerator and denominator under the limit. If so, return this fraction.
3. Else, chop off the last element of `C` and try again.
This is a pretty crummy algorithm. It has two major problems:
Problem 1: Step 1 starts by computing the whole continued-fraction representation of the given slope, even though we don't need it all, which can be very inefficient; in fact, for irrational numbers, the continued fraction never terminates.
To solve this problem, we'd like our algorithm to interleave the generation of the continued fraction and the limit check. That leads us to a new algorithm that looks more like this:
1. Set `F` to the integer portion of the ideal slope, and `C` to the corresponding integer-only continued-fraction representation.
2. Compute the next term in `C` (as described on Wikipedia) and the corresponding fraction `G`.
3. If the numerator or denominator of `G` is past the limit, we've gone too far. Return the previous best fraction `F`.
4. Otherwise, set `F` to `G` and repeat.
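The "compute the next term" step doesn't require anything exotic: for a slope given as an exact fraction p/q, it's just the Euclidean algorithm in disguise. The next term is the integer part, and you continue with the reciprocal of what's left. A sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class Terms {
    // Generate the continued-fraction terms of p/q (the Euclidean algorithm)
    static List<Long> termsOf(long p, long q) {
        List<Long> terms = new ArrayList<>();
        while (q != 0) {
            terms.add(p / q);   // next term: the integer part of p/q
            long r = p % q;     // continue with q/r, the reciprocal
            p = q;              // of the remaining fractional part
            q = r;
        }
        return terms;
    }

    public static void main(String[] args) {
        System.out.println(termsOf(22, 7));    // [3, 7]
        System.out.println(termsOf(355, 113)); // [3, 7, 16]
    }
}
```

(355/113 comes out as [3; 7, 16], which is the same value as [3; 7, 15, 1]: a continued fraction ending in 1 can always be folded into the preceding term.)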
However, this doesn't solve the second major problem with this approach.
Problem 2: Step 2 computes the fraction representation of a given continued fraction from scratch each time. Since this is an O(n) operation, the algorithm is O(n²).
This is a tough one. As long as we're forced to evaluate continued fractions from right to left, it means that partial results from previous loop iterations don't help us. If you want to compute
3+1/(7+1/(15+1/(1+1/(292)))), it doesn't do you much good to know that the previous answer,
3+1/(7+1/(15+1/(1))), happens to be 355/113.
Or does it?
Let's call it empirical mathematics
This is the point where I left the path of deductive reasoning, and started messing around with examples until I noticed a pattern.
Suppose we want to compute [0; a_1, a_2, …, a_n], and let's call this C_n. The problem is the following: how do we compute C_n given a_n and information about all previous C calculations?
Here's the answer I found. Define two sequences N_n and D_n as follows:
N_n = a_n · N_{n-1} + N_{n-2}
D_n = a_n · D_{n-1} + D_{n-2}
These two equations look very much like a generalized form of the Fibonacci sequence, where F_n = F_{n-1} + F_{n-2}. They're simple and efficient to calculate.
But here's the thing: initialize these sequences properly, and lo and behold, you'll find that C_n = N_n/D_n. I subsequently found this formula on Wolfram MathWorld, so at least I'm able to confirm that it always works.
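Here's the kind of experiment that convinced me. It assumes the initialization N_0 = 0, D_0 = 1, N_1 = 1, D_1 = a_1 (the "proper" initialization for fractions of the form [0; a_1, a_2, …]) and checks the recurrence against a directly evaluated example:

```java
public class RecurrenceCheck {
    // Compute [0; a1, a2, ..., an] via the recurrence, returning {Nn, Dn}
    static long[] convergent(long[] a) {
        long n0 = 0, d0 = 1;           // N0 = 0, D0 = 1
        long n1 = 1, d1 = a[0];        // N1 = 1, D1 = a1
        for (int i = 1; i < a.length; i++) {
            long n2 = a[i] * n1 + n0;  // Nn = an*N(n-1) + N(n-2)
            long d2 = a[i] * d1 + d0;  // Dn = an*D(n-1) + D(n-2)
            n0 = n1; d0 = d1;
            n1 = n2; d1 = d2;
        }
        return new long[] { n1, d1 };
    }

    public static void main(String[] args) {
        long[] r = convergent(new long[] { 2, 3, 4 });
        // direct right-to-left evaluation: 1/(2 + 1/(3 + 1/4)) = 13/30
        System.out.println(r[0] + "/" + r[1]);  // 13/30
    }
}
```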
What we've got here is a simple, efficient way to compute each C_n in constant time and constant space‡. The algorithm now looks like this:
- Initialize N_0 = 0, D_0 = 1, N_1 = 1, D_1 = a_1
- Compute the next continued-fraction term, a_n, as described on Wikipedia (starting with n = 2)
- Set N_2 = a_n · N_1 + N_0 and D_2 = a_n · D_1 + D_0
- If N_2 or D_2 is past the limit, we've gone too far; return N_1/D_1
- Otherwise, shift everything down (so D_2 becomes D_1 and so forth), and repeat
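Putting it all together, here's a sketch of the final algorithm for a slope given as p/q. I've seeded the recurrence with the standard convergent values (N = 1 and 0, D = 0 and 1), which also handles a nonzero integer part and reduces to the initialization above when the integer part is zero:

```java
public class BestApprox {
    // Best convergent to p/q with numerator and denominator at most `limit`.
    // Semiconvergents (the footnote's "pretty good approximations in
    // between") are deliberately ignored, as in the steps above.
    static long[] approximate(long p, long q, long limit) {
        long nPrev = 1, dPrev = 0;   // standard convergent seeds
        long n = p / q, d = 1;       // first convergent: the integer part
        long rem = p % q;
        while (rem != 0) {
            long a = q / rem;            // next term (Euclidean algorithm)
            long nNext = a * n + nPrev;  // Nn = an*N(n-1) + N(n-2)
            long dNext = a * d + dPrev;  // Dn = an*D(n-1) + D(n-2)
            if (nNext > limit || dNext > limit) break;  // gone too far
            nPrev = n; dPrev = d;        // shift everything down
            n = nNext; d = dNext;
            long r = q % rem;            // advance the Euclid state
            q = rem; rem = r;
        }
        // note: a*n can overflow for adversarial inputs;
        // fine for this sketch, where everything stays small
        return new long[] { n, d };
    }

    public static void main(String[] args) {
        long[] r = approximate(3_141_592_653_589_793L,
                               1_000_000_000_000_000L, 999_999);
        System.out.println(r[0] + "/" + r[1]);  // 833719/265381
    }
}
```

On a 16-digit fraction for pi with a six-digit limit, this returns 833719/265381, which is within about 10⁻¹¹ of pi; far better than the 999999/318309 we started with.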
These calculations do a fixed amount of work each time around the loop; they operate entirely on small integers that can fit in machine registers; and they're simple and fast. It's a great algorithm, and I'm altogether quite impressed with myself for having (re-)invented it.
Now, could someone please tell me how it works?
†There are actually a few other pretty good approximations in between these "best" ones, but we'll ignore those for simplicity. It's all on the Wikipedia page.
‡Assuming a machine model in which integers occupy a fixed amount of space, and multiplies and divides take constant time. This is a pretty loose interpretation of asymptotic complexity, but we can get away with it here if we assume that the algorithm terminates while all the numbers are small enough to make these assumptions correct (which it provably always does).
Discuss on Hacker News.