This is the mail archive of the gsl-discuss@sources.redhat.com mailing list for the GSL project.



Re: specfunc error analysis - generated and propagated errors.


I hope this thread is not getting too tiresome. But I'm
interested in this, so here are a few more comments.


On Wed, 2004-09-08 at 14:33, Linas Vepstas wrote:

> backward error analysis might be just the right thing for the 
> "gold-plated" algorithms that Gerard referred to.  However,
> it makes me nervous in other cases, e.g. the mentioned incomplete 
> gamma, where errors can go from tiny to huge for some small 
> changes in input parameters.

Yes. I tried to pick an example which was not one of
the standard functions with oscillatory behavior, i.e.
all the obvious ones defined by second-order equations.

My thought was that if the function is defined by an
integral (with a non-oscillatory integrand), then it might
enjoy some kind of monotonicity which would allow you to
control the error, with some hope of uniformity.
I don't claim to have a theory of that sort
on hand, but it smells like it is possible.


> Besides, I'm guessing that *if* a good backward-analysis can 
> be done for an algo, then the algo can probably be fixed to 
> provide the 'gold-plated' answers.   :)

Yes, I agree. It is important to define exactly what we would
achieve with something like a backward analysis. But even if it
amounted to only an a posteriori proof of correctness, that in
itself would be satisfying.
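
Just to fix terminology (these are the standard definitions,
nothing GSL-specific): a backward analysis writes the computed
value as

    \hat f(x) = f(x + \delta x)

and bounds |\delta x|; the forward error then follows from the
condition number,

    |\hat f(x) - f(x)| / |f(x)|  \approx  |x f'(x) / f(x)| \cdot |\delta x| / |x| .

So a backward bound, where we can get one, buys a forward statement
wherever the function is well conditioned.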

How about a concrete proposal? First, can we come up with a
useful scheme for backward analysis of modified Bessel functions
for arbitrary order and argument? I pick these because they
do not oscillate and therefore should be easier to understand,
and there are two real input parameters. Second, is a similar
analysis of Bessel functions J(nu,x) and Y(nu,x) possible? Would
it really be different from that for I(nu,x) and K(nu,x)? In
other words, is my feeling that the oscillation adds complication
correct or misguided?
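
To make the first question concrete, here is the kind of bookkeeping
I have in mind, as a rough sketch and not anything for the library:
take the forward error estimate which gsl_sf_bessel_Inu_e already
returns and re-express it as an equivalent perturbation of x, using
I_nu'(x) = (I_{nu-1}(x) + I_{nu+1}(x))/2.  A real backward analysis
would bound that perturbation a priori instead of inferring it from
the error estimate, but this at least exhibits the quantity we would
be trying to bound.

/* Rough sketch, not a proposal for the library: re-express the forward
 * error estimate from gsl_sf_bessel_Inu_e as an equivalent perturbation
 * of the argument x, using I_nu'(x) = (I_{nu-1}(x) + I_{nu+1}(x)) / 2.
 * Assumes nu >= 1 so that nu - 1 stays in the supported domain.
 * Compile with:  gcc sketch.c -lgsl -lgslcblas -lm
 */
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_sf_bessel.h>

/* First-order estimate of the |dx| for which I_nu(x + dx) would move
   by the reported forward error. */
static double
equiv_backward_perturbation (double nu, double x)
{
  gsl_sf_result r;
  gsl_sf_bessel_Inu_e (nu, x, &r);

  const double dIdx = 0.5 * (gsl_sf_bessel_Inu (nu - 1.0, x)
                             + gsl_sf_bessel_Inu (nu + 1.0, x));
  return r.err / fabs (dIdx);
}

int
main (void)
{
  const double nu = 10.0;   /* arbitrary sample point */
  const double x  = 25.0;
  const double dx = equiv_backward_perturbation (nu, x);
  printf ("nu=%g x=%g : |dx| ~ %g  (relative %g)\n", nu, x, dx, dx / x);
  return 0;
}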


> Besides, "comparing known values of the functions to computed values" 
> is a sure-fire recipe for disaster, since users often explore parameter
> values that the authors didn't test for ... I still remember being burned
> by the NAG libraries, on Bessel functions of high order, for which the 
> output of the NAG algos resembled swiss cheese... 

Agreed. Some mathematics would be needed to at least
argue, if not prove, that the testing was sufficient.

After all, there are only a finite number of input values;
in principle we can check them all... OK, I'm mostly joking.
But in some cases this is exactly what people do. For example,
brute force can be used to prove that the result of sin(x)
for some implementation (typically in hardware, where the
cost of mistakes is high) is correctly rounded for all
values. Something like rounding is a difficult problem
because the values of those sub-leading bits are, for
all practical purposes, random. But precision is another
kind of beast. Errors may occasionally be catastrophic,
but in principle they are smooth, so it should be possible
to use continuity to check large regions of parameter
space by appropriate sampling. At least I wouldn't rule
it out immediately. So it is good to think about these
things.
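
For instance, a sweep of the modified Bessel routines against the
identity I_nu(x) K_{nu+1}(x) + I_{nu+1}(x) K_nu(x) = 1/x could look
something like the following; only a sketch, and the grid and
tolerance are arbitrary choices.

/* Sampling-based consistency sweep (sketch).  Tests the identity
 *   I_nu(x) K_{nu+1}(x) + I_{nu+1}(x) K_nu(x) = 1/x
 * on a grid in (nu, x), using the exponentially scaled routines so
 * that the products stay O(1).
 * Compile with:  gcc sweep.c -lgsl -lgslcblas -lm
 */
#include <stdio.h>
#include <math.h>
#include <float.h>
#include <gsl/gsl_sf_bessel.h>

int
main (void)
{
  const double tol = 100.0 * DBL_EPSILON;   /* arbitrary threshold */
  int failures = 0;

  for (int inu = 0; inu <= 80; inu++)
    for (int ix = 1; ix <= 100; ix++)
      {
        const double nu = 0.25 * inu;
        const double x  = 0.5 * ix;

        /* Inu_scaled = exp(-x) I_nu, Knu_scaled = exp(+x) K_nu,
           so the exponentials cancel in each product below. */
        const double lhs =
            gsl_sf_bessel_Inu_scaled (nu, x)
              * gsl_sf_bessel_Knu_scaled (nu + 1.0, x)
          + gsl_sf_bessel_Inu_scaled (nu + 1.0, x)
              * gsl_sf_bessel_Knu_scaled (nu, x);

        const double residual = fabs (x * lhs - 1.0);
        if (residual > tol)
          {
            printf ("nu=%g x=%g  residual %g\n", nu, x, residual);
            failures++;
          }
      }

  printf ("%d of %d grid points outside tolerance\n", failures, 81 * 100);
  return 0;
}

Of course, this only checks internal consistency, not agreement with
the true values; but it is cheap, and it is the kind of sampling I
have in mind.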


> > > returned by the _e functions. It would make for a much cleaner design of 
> > > the specfun code, and also remove the runtime computational overhead of 
> > > the (most times unused) forward error analysis.
> 
> I don't get it ... there is a 'simple' technical fix to this, 
> by using some C preprocessor macro magic.  You can have
> your cake and eat it too.  Compile functions with and without
> error estimates from the same source code base, and then use
> macro magic to define two different subroutine names, one for each
> version.

Yeah, this is not crazy. If I can distill what you are saying
to one point, it is that there is no reason for the "natural prototype"
functions to compute the error. Currently they are implemented in terms
of the _e functions, but that is just an "accidental" circumstance.
The implementation could be (and probably should be) factored.
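
Roughly the shape of the thing, as a hypothetical sketch (none of
these macro names, nor gsl_sf_foo, exist anywhere in the tree); it is
just to show how one algorithm body could yield both prototypes.

/* The same source is compiled twice, once with -DSPECFUNC_WITH_ERROR,
 * producing gsl_sf_foo_e() and gsl_sf_foo() from one algorithm body.
 * (Hypothetical names throughout.)
 */
#include <math.h>
#include <gsl/gsl_machine.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_sf_result.h>

#ifdef SPECFUNC_WITH_ERROR
#  define PROTO(name)      int name ## _e (double x, gsl_sf_result * r)
#  define RETURN(v, e, r)  do { (r)->val = (v); (r)->err = (e); \
                                return GSL_SUCCESS; } while (0)
#else
#  define PROTO(name)      double name (double x)
#  define RETURN(v, e, r)  return (v)  /* error expression never evaluated */
#endif

PROTO (gsl_sf_foo)   /* stand-in for a real algorithm */
{
  const double val = 1.0 / (1.0 + x * x);
  RETURN (val, 4.0 * GSL_DBL_EPSILON * fabs (val), r);
}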

The preprocessor stuff inside the implementation is still a
maintenance problem; at least it seems nasty to me. But if we
intend to keep the error estimates in the long run, then something
like this might be workable. I think I would personally like to
just get rid of them. Simplicity will have to win in the end.

-- 
Gerard Jungman <jungman@lanl.gov>
Los Alamos National Laboratory

