This is the mail archive of the gsl-discuss@sources.redhat.com mailing list for the GSL project.



Re: specfunc error analysis - generated and propagated errors.


On Thu, Sep 09, 2004 at 06:00:39PM -0600, Gerard Jungman was heard to remark:
> I hope this thread is not getting too tiresome. But I'm
> interested in this, so here are a few more comments.

The discussions on this list are languid and to-the-point
compared to any other mailing list I'm on... let the bonfire
of electrons commence.

> On Wed, 2004-09-08 at 14:33, Linas Vepstas wrote:
> 
> > backward error analysis might be just the right thing for the 
> > "gold-plated" algorithms that Gerard refered to.   However,
> > it makes me nervous in other cases e.g. the mentioned incomplete 
> > gamma, where errors can go from tiny to huge for some small 
> > changes in input parameters.
> 
> Yes. I  tried to pick an example which was not one of
> the standard functions with oscillatory behavior, i.e.
> all those obvious ones defined by 2nd order equations.

Well, I jumped on it because I actually fooled with it 
not too long ago (or was it binomial coeffs for complex 
arguments?)  I had implemented my own 'dumb' algo based 
on a naive reading of abramowitz & stegun, and found
that it agreed with gsl down to the last digit in 'most'
places.  However, the regions I was interested in were 
within 1e-6 of the poles (of the gamma fn for very large 
negative values), where I found my simple algo differed
by a few percent, or even a factor of 2 or more, from the
gsl result.  After analysis, I concluded that the gsl result
was more correct than my own.

Moral of the story: even if an algo provides very precise,
very accurate values for most 'normal' arguments
(e.g. gamma for small positive values), it's not
necessarily correct for 'rare' or 'unusual'
locations where no one ever goes (e.g. sums and differences
of many gammas, all near negative poles).  As it happens,
gsl did well in this case; however, a good backwards-error
analysis needs to be 'entire', and that strikes me as being
as hard as, or harder than, the algo itself.

> How about a concrete proposal. First, can we come up with a
> useful scheme for backward analysis of modified Bessel functions
> for arbitrary order and argument? I pick these because they
> do not oscillate and therefore should be easier to understand,
> and there are two real input parameters. Second, is a similar
> analysis of Bessel functions J(nu,x) and Y(nu,x) possible? Would
> it really be different from that for I(nu,x) and K(nu,x)? In
> other words, is my feeling that the oscillation adds complication
> correct or misguided?

Huh?  Oscillation is in the eye of the beholder!  Although
J(nu,x) is an oscillatory function of x, the recursion relation
used to compute it is not, and is a marvelously convergent algo,
which can yield answers to essentially full IEEE precision
(for 'most' 'normal' values of nu,x).  (I'm thinking of
J(nu-1,x) = (2nu/x)J(nu,x) - J(nu+1,x).)  For nu=1/2, the
spherical bessel, the normalization is trivial; it's just
sin(x), which can also be computed with very teeny errors
very easily.  (Disclaimer: I have no idea what to do
for nu != 1/2 or how gsl actually does this.)

Moral of that story: some 'oscillatory' functions, e.g. the
spherical bessel, can be very easily computed accurately,
because the algo side-steps the oscillation.  Maybe this is
the exception, not the rule?

Again, what makes me nervous here is the behaviour near
'exceptional' values of nu, x, such as near nu=0, or
x very large, where a traditional algo might break down
and a simple asymptotic expansion would provide better
results.

> > is a sure-fire recipe for disaster, since users often explore parameter
> > values that the authors didn't test for ... I still remember being burned
> > by the NAG libraries, on bessel functions of high order, for which the
> > output of the NAG algos resembled swiss cheese...
> 
> Errors may occasionally be catastrophic,
> but in principle they are smooth, so it should be possible
> to use continuity to check large regions of parameter
> space by appropriate sampling. 

Ah, but this misses on two points.

-- The NAG algo was literally swiss cheese: it returned exactly
zero for certain 'reasonable' values of the arguments (e.g.
I vaguely remember x=52.1, which is not a big or weird parameter
value for a bessel).  It was not at all smooth; it was discontinuous.

-- "large region of the parameter space" is what I call the
"normal" or "usual" paramter values above.  But often
one is interested in the values for a very specific, teeny 
or 'unusual' part of parameter space.   In this case, I couldn't 
care less if the algo is correct in the rest of the paramter 
space:  I want to know if its correct in the region that 
I'm interested in.  In particular, my teeny region of paramter
space may be exactly a region where the algo has trouble
(e.g. the gamma-near-negative-poles example).


> > I don't get it ... there is a 'simple' technical fix to this,
> > by using some c pre-processor macro magic.  You can have
> > your cake and eat it too.  Compile functions with and without
> > error estimates from the same source code base, and then use
> > macro magic to define two different subroutine names, one for each
> > version.
> 
> Yeah, this is not crazy. If I can distill what you are saying
> to one point, it is that there is no reason for the "natural prototype"
> functions to compute the error. Currently they are implemented in terms
> of the _e functions, but that is just an "accidental" circumstance.
> The implementation could be (and probably should be) factored.
> 
> The preprocessor stuff inside the implementation is still a
> maintenance problem; at least it seems nasty to me. 

Hmm. I dunno, I thought the point of my example was to show
that, with some pre-processor tricks, one could still keep
the source code eminently readable, easy to debug, etc. 
That, for example, the inline error estimates could be made,
visually, to look like a kind of inline comment, which,
oh-by-the-way, could actually compute a value when called upon.

--linas

