Re: multidimensional optimization


I've started to implement the multidimensional optimization (gradient-based
methods), because I can no longer discuss the implementation usefully without
something concrete to think about.

I would like to know if it's ok to post patches to the mailing list (don't
expect anything right away!).

Currently my code does not even compile, but I've already run into some small
but annoying problems.

- parameters:

As descent methods are based on one-dimensional minimization, a description of
a descent method must include parameters for this part of the algorithm. This
means a precision for the stopping criterion (I would say that relative
precision alone is needed), a maximum number of iterations, and the choice of
the one-dimensional algorithm itself. I also need parameters for the initial
bracketing algorithm (which is used to feed the one-dimensional minimization
method): at least a maximum number of iterations.

Is it better to have a big structure that describes the parameters of the
one-dimensional part (since that is going to be contained in the
gsl_min_fX_G_minimizer structure holding the base state of the descent
algorithm), or to pass these parameters to the iterate function (which runs
one iteration of the descent algorithm)?
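For example, the first option could look something like this (a rough sketch
only; every name below is a placeholder, nothing is settled):

    #include <stddef.h>   /* size_t */

    /* hypothetical parameters of the embedded one-dimensional part */
    typedef struct
    {
      double rel_precision;     /* relative precision for the 1D stopping criterion */
      size_t max_iter;          /* iteration limit for the 1D minimization */
      size_t bracket_max_iter;  /* iteration limit for the initial bracketing */
      /* plus something identifying the 1D algorithm itself */
    } gsl_min_line_search_params;

    typedef struct
    {
      /* base state of the descent algorithm: current point, gradient,
         current descent direction, ... */
      gsl_min_line_search_params line_search;  /* option 1: parameters kept here */
    } gsl_min_fX_G_minimizer;

The alternative would be to drop the line_search member and pass a
gsl_min_line_search_params (or the individual values) to the iterate function
on every call.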

- iteration:

If I focus on descent algorithms, I think I don't need a different iterate
function for each algorithm, basically because the only thing that really
differs between two such algorithms is the way the descent direction is
calculated. So I think it's better to have a direction function that computes
a new descent direction. I don't know of any algorithm that needs anything
other than the value of the function and of its gradient at the current
estimate of the minimum, plus the previous descent directions. I guess I
don't need to provide the function itself to the algorithm as long as I give
it the current gradient and value. Any objections?
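To make this concrete, the interface I have in mind looks roughly like this
(placeholder names again, and I'm assuming gsl_vector is used to store points
and gradients):

    #include <gsl/gsl_vector.h>

    /* a descent method reduces to its direction computation: given the
       current function value f, the current gradient g and the previous
       descent direction in p, overwrite p with the new direction; any
       method-specific data (e.g. for conjugate gradient) lives behind
       the opaque state pointer */
    typedef struct
    {
      const char *name;
      int (*direction) (double f, const gsl_vector * g,
                        gsl_vector * p, void *state);
    } gsl_min_descent_method;

Steepest descent would simply set p = -g and ignore the previous direction;
conjugate gradient variants would combine -g with the previous p through
their state. Both fit this signature, and a single shared iterate function
can then bracket and minimize along p.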

Fabrice Rossi
