This is the mail archive of the gsl-discuss@sources.redhat.com mailing list for the GSL project.



Re: Help: working with large gsl_vector


On Wed, 15 Dec 2004, Fabrice Rossi wrote:

> Toan T Nguyen wrote:
> > Hi, thank you for the analysis. Apparently, my analytical calculation is able 
> > to guess the minimum reasonably well, so I need only 20000 iterations to find 
> > the minimum.
> 
> Nice!
> 
> > However, the strange thing is that if I take the final minimized vector and run 
> > it through the conjugate gradient routine again, the energy (my function to 
> > minimize) keeps going down for a few thousand more iterations (although it 
> > changes only the 4th significant digit). Do you know what's wrong?
> 
> It might be related to the restarting of the algorithm. A conjugate 
> gradient method computes its descent direction from the gradient at the 
> current point and from the previously used direction. The first descent 
> direction is (minus) the gradient alone. When you restart the algorithm, 
> it will use the gradient alone rather than the conjugate direction it 
> would otherwise have used.

A second possibility is that if you save the final minimized vector at
some precision and reload it, you are beginning a gradient descent
from a new point in the neighborhood of the old point.  This is, in
fact, how deterministic chaos was discovered, in a manner of speaking.
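
Just to illustrate (a rough, untested sketch -- the file name, the helper
function, and the %.6g format are made up for the example; plug in whatever
you actually use to save the vector):

#include <math.h>
#include <stdio.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_multimin.h>

/* save the minimizer's current point at ~6 significant digits, reload it,
   and measure how far the reloaded point has moved */
void save_and_reload(gsl_multimin_fdfminimizer *s, size_t n)
{
  FILE *out = fopen("xmin.dat", "w");
  gsl_vector_fprintf(out, s->x, "%.6g");     /* only ~6 digits survive */
  fclose(out);

  gsl_vector *xreload = gsl_vector_alloc(n);
  FILE *in = fopen("xmin.dat", "r");
  gsl_vector_fscanf(in, xreload);            /* a nearby point, not the same one */
  fclose(in);

  /* largest per-component shift introduced by the round trip */
  double maxdiff = 0.0;
  size_t i;
  for (i = 0; i < n; i++) {
    double d = fabs(gsl_vector_get(xreload, i) - gsl_vector_get(s->x, i));
    if (d > maxdiff) maxdiff = d;
  }
  printf("max |x_reloaded - x_final| = %g\n", maxdiff);
  gsl_vector_free(xreload);
}

Restart the minimizer from the reloaded vector and you are, strictly
speaking, minimizing from a different point.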

If the problem is one with high complexity, this may be enough to push
it to a different local minimum in the particular well where the first
local minimum is found.

Note that I say local minimum.  Unless you have a really sound
theoretical basis for assuming a single, smooth global minimum in any
problem with high dimensionality, you should presume as a default that
the topology of the minimization surface is neither monotonic nor
smooth, and that multiple local minima exist.  In fact, local minima may
even be "dense" (or nearly so) on the underlying space, so that merely
going downhill from almost any starting point lands you in some nearby
local minimum, which may not even be a "good" minimum.

Think rugged landscape.  Gradient descent methods basically go downhill
from wherever they are started.  From many/most starting points, this
will take you down to the nearest footprint in the sand, to a nearby
hole in the ground, to the bottom of a pond.  It is relatively unlikely
to take you to the bottom of the deepest ocean trench.  And if you move
the starting point over just a little bit, you're as likely to go down
into a DIFFERENT footprint in the sand as back into the same one -- it's
still a local minimum, just a different one.

There are various easy ways to test for this sort of behavior.  The
easiest one is to just run the code lots of times with completely random
starting data and see if you go down to the same minimum every time.
Also look at the distribution of minima that you get -- are they all
about the same quality, or is there one that is occasionally much better
than the others?
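
The loop below is an untested sketch of that test.  I'm assuming your
function is already wrapped in a gsl_multimin_function_fdf called "energy"
with dimension n; the unit-box starting points, the looser 1e-8 gradient
tolerance, and the 20000-iteration cap are just placeholders:

#include <stdio.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_multimin.h>

/* run the PR conjugate gradient minimizer from ntrials random starting
   points and print the minimum found each time */
void scan_minima(gsl_multimin_function_fdf *energy, size_t n, int ntrials)
{
  const gsl_multimin_fdfminimizer_type *T = gsl_multimin_fdfminimizer_conjugate_pr;
  gsl_multimin_fdfminimizer *s = gsl_multimin_fdfminimizer_alloc(T, n);
  gsl_rng *r = gsl_rng_alloc(gsl_rng_mt19937);
  gsl_vector *x = gsl_vector_alloc(n);
  int trial;

  for (trial = 0; trial < ntrials; trial++) {
    size_t i;
    int status, iter = 0;

    for (i = 0; i < n; i++)
      gsl_vector_set(x, i, gsl_rng_uniform(r));  /* random start in the unit box */

    gsl_multimin_fdfminimizer_set(s, energy, x, 0.01, 1e-4);

    do {
      iter++;
      status = gsl_multimin_fdfminimizer_iterate(s);
      if (status)
        break;                                   /* can't improve any further */
      status = gsl_multimin_test_gradient(s->gradient, 1e-8);
    } while (status == GSL_CONTINUE && iter < 20000);

    printf("trial %d: f_min = %.10g after %d iterations\n", trial, s->f, iter);
  }

  gsl_vector_free(x);
  gsl_rng_free(r);
  gsl_multimin_fdfminimizer_free(s);
}

If every trial lands on essentially the same energy, you are probably fine;
if the values scatter all over the place, you are looking at the forest of
local minima described above.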

If the latter is the case, you'll have to resort to much more powerful
and complex methods to get a "good" minimum on your space.  Most likely
genetic algorithms and/or simulated annealing.
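
GSL itself ships a simulated annealing driver, gsl_siman_solve.  Below is a
toy one-dimensional sketch of how it is wired together (untested; the
annealing parameters are made up, and for a 3*nparts configuration you would
supply your own energy, step, and distance callbacks, and probably the copy
functions instead of a flat element_size):

#include <math.h>
#include <stdio.h>
#include <string.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_siman.h>

/* energy of a configuration (here just one double, with many local minima) */
static double E1(void *xp)
{
  double x = *(double *) xp;
  return exp(-pow(x - 1.0, 2.0)) * sin(8.0 * x);
}

/* distance between two configurations */
static double M1(void *xp, void *yp)
{
  return fabs(*(double *) xp - *(double *) yp);
}

/* take a random step of size at most step_size */
static void S1(const gsl_rng *r, void *xp, double step_size)
{
  double old_x = *(double *) xp;
  double new_x = old_x + (2.0 * gsl_rng_uniform(r) - 1.0) * step_size;
  memcpy(xp, &new_x, sizeof(new_x));
}

int main(void)
{
  /* n_tries, iters_fixed_T, step_size, k, t_initial, mu_t, t_min */
  gsl_siman_params_t params = {200, 1000, 1.0, 1.0, 0.008, 1.003, 2.0e-6};
  gsl_rng *r = gsl_rng_alloc(gsl_rng_mt19937);
  double x0 = 15.5;                              /* deliberately bad start */

  gsl_siman_solve(r, &x0, E1, S1, M1, NULL,
                  NULL, NULL, NULL, sizeof(double), params);

  printf("best x found: %g\n", x0);
  gsl_rng_free(r);
  return 0;
}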

   rgb

> 
> > I follow the example in the GSL document and set my success criteria as
> > 
> > status = gsl_multimin_test_gradient (s->gradient, 1e-20);
> 
> 1e-20 is very, very small, maybe too close to machine precision to give 
> reliable results. But I'm far from being an expert in this domain. What 
> looks strange is that if the algorithm stops because of this criterion, I 
> don't know why it can continue again afterwards. It would mean that the norm 
> of the gradient goes up again after having reached 1e-20 once. This might 
> be related to numerical errors.
> 
> > The initial setup of the vector is
> > 
> > T = gsl_multimin_fdfminimizer_conjugate_pr;
> > s = gsl_multimin_fdfminimizer_alloc (T, 3*nparts);
> > gsl_multimin_fdfminimizer_set (s, &energy, x, 0.001, 1e-12);
> 
> 1e-12 is again quite small.
> 
> > For clarity, I used dimensionless units so ALL the nearest neighbor distances 
> > and the coupling constants are of the order of unity. nparts is the number of 
> > particles which is about 100000.
> > 
> > Could you give some advice? Thanks a lot in advance.
> > 
> > Toan
> 
> I think that you are basically putting the code under some pressure, 
> because of the size of your problem and of the very low limits you are 
> using. To my knowledge, it has not been tested under such constraints 
> and it's therefore difficult to give sensible advice.
> 
> Fabrice
> 
> 

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb@phy.duke.edu


