Re: glibc conditioning


> http://www.srware.com/linux_numerics.txt
Your test case
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      char *a = "0.3";
      double d = atof(a);
      int i = (int)(1000*d);
      if (d != 300)
          printf("This must be Linux!\n");
      else
          printf("Correct result\n");
      return 0;
  }
probably has a typo in it.  Instead of "if (d != 300)" you mean
"if (i != 300)", and the fixed program then reveals the extra-precise
floating-point register bug-feature on Intel x86 machines.
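With that fix in place, the interesting part of the test reads:

      int i = (int)(1000*d);
      if (i != 300)
          printf("This must be Linux!\n");
      else
          printf("Correct result\n");

and on an x86 box with the x87 unit at its default extended precision
it typically does print the Linux line, for the reason below.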

This statement
      int i = (int)(1000*d);
means: first multiply 1000 by d in floating point.  On most machines
the product is rounded to double precision, and the correctly rounded
product, in double precision, is exactly 300.0.  (atof returns the
double nearest 0.3, which is slightly below it, but 1000 times that
value is within half a unit in the last place of 300.0, so it rounds
up to 300.0.)  On Intel the extra-precise product, kept in an x87
register, is slightly less than 300.0.  The statement then says to
convert that floating-point product to an int.  That is done by
truncating toward zero, so the extended-precision value just below
300.0 becomes 299.
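
Here is a small sketch of the difference.  The volatile store is just
one way to force the compiler to round the product to double; what the
second cast prints depends on the compiler and the FPU mode.

  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      double d = atof("0.3");        /* nearest double to 0.3, slightly below it */
      volatile double p = 1000 * d;  /* the store rounds the product to
                                        double precision: exactly 300.0 */
      printf("%d\n", (int)p);          /* 300 */
      printf("%d\n", (int)(1000 * d)); /* may be 299 on x87, where the product
                                          stays extra-precise in a register */
      return 0;
  }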

You could try setting the x86 coprocessor to double-precision rounding
(see /usr/include/fpu_control.h on Linux).  If you do that, you should
be a little careful about using the system math functions, some of
which may rely on extended precision internally.
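
If you go that route, the usual idiom with glibc's fpu_control.h looks
something like this (x86-specific; a sketch, and the helper name here
is made up for illustration):

  #include <fpu_control.h>

  void set_fpu_double_precision(void)   /* hypothetical helper */
  {
      fpu_control_t cw;
      _FPU_GETCW(cw);                            /* read the x87 control word */
      cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;  /* select 53-bit (double) precision */
      _FPU_SETCW(cw);                            /* write it back */
  }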

The point should be well taken: your test program is not portable.  It
is a careless design.

