This is the mail archive of the
gsl-discuss@sources.redhat.com
mailing list for the GSL project.
RE: High-dimensional Minimization without analytical derivatives
- From: "Anatoliy Belaygorod" <belaygorod at wustl dot edu>
- To: "Brian Gough" <bjg at network-theory dot co dot uk>
- Cc: <gsl-discuss at sources dot redhat dot com>
- Date: Fri, 3 Sep 2004 10:22:48 -0500
- Subject: RE: High-dimensional Minimization without analytical derivatives
My understanding is that, in general, on high-dimensional problems with a rough surface, simulated annealing (SA) is better suited to finding a GLOBAL maximum than gradient-based methods, because the latter are better at zeroing in on local maxima.
In that regard, is the simplex method closer to SA or to gradient-based methods?
________________________________
From: Brian Gough [mailto:bjg@network-theory.co.uk]
Sent: Fri 9/3/2004 10:14 AM
To: Anatoliy Belaygorod
Cc: gsl-discuss@sources.redhat.com
Subject: RE: High-dimensional Minimization without analytical derivatives
Anatoliy Belaygorod writes:
> But which one is better?
The computational effort always depends on the details of the function
and the suitability of the starting point(s), as well as the algorithm.
In the absence of any theoretical guidance, one has to try numerical
derivatives and the simplex method, and see which is faster.
--
Brian Gough
Network Theory Ltd,
Commercial support for GSL --- http://www.network-theory.co.uk/gsl/