Re: glibc conditioning
- To: Jinsong Zhao <zhaojs at cadence dot com>
- Subject: Re: glibc conditioning
- From: Stephen L Moshier <moshier at mediaone dot net>
- Date: Tue, 28 Aug 2001 08:35:23 -0400 (EDT)
- cc: stevew at srware dot com, libc-alpha at sources dot redhat dot com
- Reply-To: moshier at moshier dot ne dot mediaone dot net
> http://www.srware.com/linux_numerics.txt
Your test case
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *a = "0.3";
    double d = atof(a);
    int i = (int)(1000*d);
    if (d != 300)
        printf("This must be Linux!\n");
    else
        printf("Correct result\n");
    return 0;
}
probably has a typo in it: instead of "if (d != 300)" you mean
"if (i != 300)". The fixed program then reveals the extra-precision
floating-point register bug-feature on Intel x86 machines.
This statement

    int i = (int)(1000*d);

means: first multiply 1000 by d in floating point. The product would
be rounded off to double precision on most machines, and the correctly
rounded product, in double precision, is exactly 300.0. On Intel the
extra-precise product, kept in an 80-bit register, is slightly less
than 300.0. The statement then converts that floating-point product
to an int, which is done by rounding toward zero, so on Intel the
truncation yields 299 rather than 300.
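A minimal sketch of the effect, assuming an x87 FPU that keeps the
product in an extended-precision register (the volatile store is just
one way to force the product to be rounded to double first):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        double d = atof("0.3");

        /* Truncate the product directly.  On x87 the product may
           still sit in an extended-precision register, slightly
           below 300.0, so this can come out as 299. */
        int direct = (int)(1000 * d);

        /* Store the product to memory first.  The store rounds it
           to double precision, which is exactly 300.0, so the
           truncation gives 300. */
        volatile double p = 1000 * d;
        int stored = (int)p;

        printf("direct = %d, via double store = %d\n", direct, stored);
        return 0;
    }

On a machine that computes in double precision throughout, both
values come out as 300.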
You could try setting the x86 coprocessor to double-precision rounding
(see /usr/include/fpu_control.h on Linux), as sketched below. If you
do that, you should be a little careful about using the system math
functions.
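For reference, a sketch of that control-word change, using the macros
glibc's fpu_control.h provides on x86 (the function name here is just
for illustration):

    #include <fpu_control.h>

    void set_double_precision(void)
    {
        fpu_control_t cw;

        _FPU_GETCW(cw);                             /* read the x87 control word */
        cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;   /* select 53-bit precision */
        _FPU_SETCW(cw);                             /* write it back */
    }

This changes only the significand precision, not the exponent range,
and the system math functions may have been written to expect extended
precision, which is the caveat above.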
The point should be well taken that your test program is not portable;
it is a careless design.