## finite precision techniques

From Visual Basic to GNU C, this is the place to talk programming.

Moderators: SecretSquirrel, just brew it!

### finite precision techniques

I'm starting a project where I need better precision. I think the limits of 64-bit precision will be a big limitation if I'm careless. Is there a good book that people like on this topic? Ideally the book would be good at explaining techniques like the two below, and also provide context on when they're helpful/hurtful.

Also, there are two techniques whose names I can't remember. Does either of these sound familiar to anyone?

Technique 1) When adding a long sequence of numbers, this technique keeps track of (a) the running sum and (b) a remainder-like tally that represents the information truncated by the finite-precision additions. At the end of the sequence, the two values are summed.
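If it helps pin the name down, here's a minimal Python sketch of the idea behind technique 1 (the running-sum-plus-tally scheme):

```python
def kahan_sum(values):
    """Sum a sequence while carrying the low-order bits lost to rounding."""
    total = 0.0
    compensation = 0.0  # remainder-like tally of truncated information
    for v in values:
        y = v - compensation            # restore the bits lost last time
        t = total + y                   # low-order bits of y may be lost here
        compensation = (t - total) - y  # recover what was just lost
        total = t
    return total

# Ten copies of 0.1 don't sum to exactly 1.0 with naive addition,
# but the compensated sum recovers the exact result.
print(sum([0.1] * 10))        # 0.9999999999999999
print(kahan_sum([0.1] * 10))  # 1.0
```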

Technique 2) An equation like "a + b*z + c*z*z + d*z*z*z" is re-expressed as something like "a + z*(b + z*(c + d*z))".
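And a minimal sketch of technique 2 (evaluating the polynomial from the inside out, one multiply and one add per coefficient):

```python
def horner(coeffs, z):
    """Evaluate a + b*z + c*z**2 + ... as the nested form a + z*(b + z*(c + ...))."""
    result = 0.0
    for coefficient in reversed(coeffs):  # start from the highest-degree term
        result = result * z + coefficient
    return result

# 1 + 2*z + 3*z**2 + 4*z**3 at z = 2  ->  1 + 4 + 12 + 32 = 49
print(horner([1, 2, 3, 4], 2.0))  # 49.0
```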

(Edit: the first one is 'Kahan summation' and the second seems to be commonly called 'nested multiplication' or 'Horner's method'.)
Last edited by wibeasley on Mon Mar 21, 2011 4:14 pm, edited 1 time in total.
wibeasley
Gerbil Elite

Posts: 952
Joined: Sat Mar 29, 2008 2:19 pm
Location: Norman OK

### Re: finite precision techniques

I remember seeing this problem decades ago when systems didn't have 64-bit precision. If you don't mind using some processor cycles, you can catch the overflow before it happens and put it into another variable representing the higher range. It's cumbersome, but it can be made to work. You end up with a library function to do the math operation that takes four variables instead of two, since each extended precision number is contained in a pair of variables.
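A toy sketch of that pair-of-variables idea (simulating overflow past a 32-bit word in Python; the word size and function name are made up for illustration):

```python
BASE = 2 ** 32  # pretend our machine word overflows past this

def add_pairs(hi1, lo1, hi2, lo2):
    """Add two extended-precision numbers, each held as a (hi, lo) pair of words."""
    lo = lo1 + lo2
    carry = 0
    if lo >= BASE:          # catch the overflow before it's lost...
        lo -= BASE
        carry = 1           # ...and push it into the higher-range variable
    hi = hi1 + hi2 + carry  # (hi could overflow too; a real library checks that as well)
    return hi, lo

print(add_pairs(0, BASE - 1, 0, 1))  # (1, 0)
```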

In the extreme, you can program things to execute the same way that you'd do it with pencil and paper. It was 28 or 29 years ago that I wrote a program to do long division (to hundreds of decimal places) on a Commodore 64.
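That digit-at-a-time approach can be sketched like so (a minimal Python illustration of pencil-and-paper division, not the original C64 program):

```python
def long_division(numerator, denominator, places):
    """Divide digit by digit, exactly as done on paper, to `places` decimals."""
    quotient, remainder = divmod(numerator, denominator)
    digits = []
    for _ in range(places):
        remainder *= 10                             # bring down a zero
        digit, remainder = divmod(remainder, denominator)
        digits.append(str(digit))
    return f"{quotient}." + "".join(digits)

print(long_division(1, 7, 12))  # 0.142857142857
```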
JustAnEngineer
His Holy Gerbilness

Posts: 14255
Joined: Sat Jan 26, 2002 6:00 pm
Location: The Heart of Dixie

### Re: finite precision techniques

You should probably do a little searching around on "bignum" and "arbitrary precision arithmetic" -- there are commercial and open source libraries for this, and I'm sure that will turn up plenty of references to the techniques involved. Like JAE, the last time I worried about doing this myself was in the 8 or 16 bit days (hoo-boy, BCD for the win!)
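To get a feel for it without hunting down a library: Python's built-in integers are already arbitrary precision, and the standard-library decimal module covers the adjustable-precision case:

```python
from decimal import Decimal, getcontext

# Integers never overflow in Python; this 61-digit value is exact.
print(2 ** 200)

# Floats carry ~16 significant digits; Decimal precision is adjustable.
getcontext().prec = 50
print(Decimal(1) / Decimal(7))  # 1/7 to 50 significant digits
```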
UberGerbil
Gerbil Khan

Posts: 9836
Joined: Thu Jun 19, 2003 2:11 pm

### Re: finite precision techniques

Thanks; I should have said I need more precision for small floating-point numbers. From what I see, things like JAE's overflow approach and bignum apply to values that would overflow the 2^63 limit, or am I overlooking something?

That adding-with-a-remainder technique is called Kahan summation. Does the multiplication technique ring any bells for anyone?

I'm finding the roots of six functions in six unknowns. I'm guessing that the Newton-Raphson-like method (technically it's Broyden's method) is getting tripped up calculating the derivative when the coefficients (i.e., c1-c6) get close to zero. When they drop below 0.001, I'm sure it gets buggered by terms like 977816385000*c5^2*c6^4 + 1907724656250*c6^6 + 180*c4^4*(16082*c5^2 + 345905*c6^2). It works fine in Mathematica's FindRoot, but I'm having trouble in typical languages. Here's one of the six functions:
Code:
`    moments[6] <- 120*(32*c3^6 + 3456*c3^5*c5 + 6*c2^5*c6 +     3*c2^4*(9*c4^2 + 16*c3*c5 + 168*c5^2 + 330*c4*c6 + 2850*c6^2) + 720*c3^4*(15*c4^2 + 224*c5^2 + 490*c4*c6 + 4410*c6^2) + 6048*c3^3*c5*(125*c4^2 + 704*c5^2 + 4480*c4*c6 + 44100*c6^2) +     12*c2^3*(4*c3^2*(3*c4 + 50*c6) + 60*c3*c5*(7*c4 + 114*c6) + 3*(24*c4^3 + 1192*c4*c5^2 + 1170*c4^2*c6 + 20440*c5^2*c6 + 20150*c4*c6^2 + 124875*c6^3)) +     216*c3^2*(945*c4^4 + 67620*c4^3*c6 + 560*c4^2*(178*c5^2 + 3555*c6^2) + 315*c4*(12416*c5^2*c6 + 90375*c6^3) + 6*(52224*c5^4 + 6993000*c5^2*c6^2 + 27671875*c6^4)) +     6*c2^2*(8*c3^4 + 480*c3^3*c5 + 180*c3^2*(4*c4^2 + 64*c5^2 + 125*c4*c6 + 1050*c6^2) + 72*c3*c5*(327*c4^2 + 1848*c5^2 + 10955*c4*c6 + 99900*c6^2) +        9*(225*c4^4 + 22824*c4^2*c5^2 + 69632*c5^4 + 15090*c4^3*c6 + 830240*c4*c5^2*c6 + 412925*c4^2*c6^2 + 8239800*c5^2*c6^2 + 5475750*c4*c6^3 + 29636250*c6^4)) +     1296*c3*c5*(5910*c4^4 + 462735*c4^3*c6 + c4^2*(228240*c5^2 + 14851375*c6^2) + 175*c4*(55808*c5^2*c6 + 1316865*c6^3) + 3*(158464*c5^4 + 37899400*c5^2*c6^2 + 483826875*c6^4)) +     27*(9945*c4^6 + 92930048*c5^6 + 1166130*c4^5*c6 + 35724729600*c5^4*c6^2 + 977816385000*c5^2*c6^4 + 1907724656250*c6^6 +        180*c4^4*(16082*c5^2 + 345905*c6^2) + 140*c4^3*(1765608*c5^2*c6 + 13775375*c6^3) + 15*c4^2*(4076032*c5^4 + 574146160*c5^2*c6^2 +          2424667875*c6^4) + 210*c4*(13526272*c5^4*c6 + 687499200*c5^2*c6^3 + 1876468125*c6^5)) +     18*c2*(80*c3^4*(c4 + 15*c6) + 160*c3^3*c5*(32*c4 + 525*c6) + 12*c3^2*(225*c4^3 + 11088*c4*c5^2 + 11130*c4^2*c6 + 199360*c5^2*c6 + 200900*c4*c6^2 + 1323000*c6^3) +        24*c3*c5*(3885*c4^3 + 69632*c4*c5^2 + 209800*c4^2*c6 + 1369440*c5^2*c6 + 4134375*c4*c6^2 + 29636250*c6^3) +        9*(540*c4^5 + 48585*c4^4*c6 + 20*c4^3*(4856*c5^2 + 95655*c6^2) + 80*c4^2*(71597*c5^2*c6 + 513625*c6^3) +           4*c4*(237696*c5^4 + 30726500*c5^2*c6^2 + 119844375*c6^4) + 5*c6*(4076032*c5^4 + 191074800*c5^2*c6^2 + 483826875*c6^4))));`
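One way to check whether the float arithmetic itself is the culprit: re-evaluate a suspect term with exact rationals. Python's standard-library fractions module is enough for a spot check; the coefficient below is one term pulled from the expression above, and the 0.001 value is just the threshold mentioned earlier:

```python
from fractions import Fraction

# Spot-check one term, 977816385000*c5^2*c6^4, at a tiny coefficient value.
c5 = c6 = Fraction(1, 1000)   # exactly 0.001; no rounding at any step
exact = 977816385000 * c5**2 * c6**4
print(exact)          # the exact rational result
print(float(exact))   # the same term collapsed back to a double, for comparison
```

Comparing the exact value against the double-precision evaluation shows how much the term itself loses to rounding, independent of the root finder.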
wibeasley

### Re: finite precision techniques

wibeasley wrote:Thanks; I should have said I need more precision for small floating-point numbers. From what I see, things like JAE's overflow approach and bignum apply to values that would overflow the 2^63 limit, or am I overlooking something?

Well, bignum has apparently been used to calculate pi to a billion-plus digits, so I don't see why it wouldn't work in your case. I'm not familiar with exactly how it works, but here's the pi program I was talking about: ftp://ftp.gmplib.org/pub/misc/gmp-chudnovsky.c

Also, do you have MATLAB? It might run a lot faster and more efficiently than Mathematica, since it's based on C, and I've always had an easier time programming in it than in Mathematica. (I'm not sure what Mathematica is based on, though.)

***Edit: Have you considered Python as well? Its integers aren't bounded by overflow like in other languages (values can grow as large as memory allows). It also has an mpmath library that is extremely easy to use. (I got bored at work today and had Python doing precision out to 100,000 places.)
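For example (assuming mpmath is installed), pushing the working precision to fifty digits is only a couple of lines:

```python
from mpmath import mp

mp.dps = 50                   # fifty significant decimal digits
print(mp.mpf(1) / mp.mpf(7))  # 1/7 to 50 digits
print(mp.pi)                  # pi to 50 digits
```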
'To Start Press Any Key'. Where's the ANY key?
If something's hard to do, then it's not worth doing
You know, boys, a nuclear reactor is a lot like a woman. You just have to read the manual and press the right buttons.
mmmmmdonuts21
Gerbil Elite

Posts: 586
Joined: Wed Jul 16, 2008 8:09 am

### Re: finite precision techniques

This afternoon I got it working more robustly. The root-finding function optionally accepts an explicit Jacobian matrix (i.e., so it's guided by analytic derivatives instead of approximated/secant derivatives). I used Mathematica's 'D' function to find the analytic derivatives (it's nice to let it do the calculus and spit out the 258 lines of code needed for the 6x6 equations). So far, our routine isn't temperamental about strange starting values.
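On a toy 2x2 system (made-up equations, not the actual six), the explicit-Jacobian idea looks like this:

```python
import math

def newton_2x2(f1, f2, jac, x, y, tol=1e-12, max_iter=50):
    """Newton's method for two equations, guided by an analytic Jacobian."""
    for _ in range(max_iter):
        a, b, c, d = jac(x, y)           # Jacobian [[a, b], [c, d]]
        r1, r2 = f1(x, y), f2(x, y)      # residuals at the current point
        det = a * d - b * c
        dx = (d * r1 - b * r2) / det     # Cramer's rule for J * delta = r
        dy = (a * r2 - c * r1) / det
        x, y = x - dx, y - dy
        if abs(r1) + abs(r2) < tol:
            break
    return x, y

# Solve x^2 + y^2 = 4 together with x = y; the root is x = y = sqrt(2).
f1 = lambda x, y: x**2 + y**2 - 4
f2 = lambda x, y: x - y
jac = lambda x, y: (2 * x, 2 * y, 1.0, -1.0)  # analytic partial derivatives
x, y = newton_2x2(f1, f2, jac, 1.0, 1.0)
print(x, math.sqrt(2))  # both ~1.4142135623731
```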

Thanks, donuts. I had looked at GMP before, but ruled it out because I didn't see a multivariate root-finding algorithm. Of course, you were suggesting using it at a lower level. I'm hesitant to do that for this project because it's for an R package. We want to distribute it easily to a bunch of statisticians (who aren't professional programmers). You can embed C code in R packages, but that gets tricky, and I'd like to avoid portability issues. I would have used GMP only if I couldn't get it working otherwise.

I'm actually going to a free MATLAB workshop in two weeks. From the little I know, I agree that it seems easier and more natural for creating programs. But some complicated things are so easy with Mathematica's symbolic expressions and computation. And for this use, speed isn't a concern.

Even though the routine is working with the test cases so far, I'm sure its accuracy still has room for improvement. I'm still willing to read a book or article with general advice on this topic, if there's one people like. I've read most of What Every Computer Scientist Should Know About Floating-Point Arithmetic; I liked the theory, but I wouldn't mind something with more practical advice or rules of thumb.
wibeasley

### Re: finite precision techniques

wibeasley wrote:This afternoon I got it working more robustly. The root finding function optionally accepts an explicit Jacobian matrix...

Glad you got it working. I wasn't sure exactly what you were doing, but I knew you needed small values. Once you start MATLAB, I don't think you will ever be able to go back. It even includes a symbolic equation editor now, but I'm not sure if it's up to par with Mathematica. If you ever need help in MATLAB, just let me know. Matrices are where MATLAB thrives, so the Jacobian would have been very easy to calculate.

**Edit: If you don't mind me asking, were your original equations given to you like that, with those variables? I'm just curious where they came from, or if you have an example.
mmmmmdonuts21

### Re: finite precision techniques

From files accompanying a book and an article. It's all available from http://www.jstatsoft.org/v19/i03, in the PowerMethod3 function of the PowerMethod.nb.

I doubt MATLAB will replace the open-source R anytime soon, for me. For statisticians, R is becoming the default vehicle for distributing new methods alongside publications. Right now, I'm using R for projects that need portability (85%), C++/MKL for projects that need speed (10%), and Mathematica for complex or novel equations (5%). But I'd like to add MATLAB as a new tool; thanks for your offer.
wibeasley

### Re: finite precision techniques

Thanks for the links. Very, very interesting. I was just curious because I have an economic/math/actuarial background and this seemed to hit home. I remember using a very little bit of R (in conjunction with Stata) in an Econometrics class in college but it was very, very basic.
mmmmmdonuts21