This is the mail archive of the gsl-discuss@sources.redhat.com mailing list for the GSL project.



RE: High-dimensional Minimization without analytical derivatives


My understanding is that, in general, for high-dimensional problems with a rough surface, simulated annealing (SA) is better suited to finding a GLOBAL maximum than gradient-based methods, because the latter are better at zeroing in on a local maximum.
In that regard, is the simplex method closer to SA or to gradient-based methods?
 

________________________________

From: Brian Gough [mailto:bjg@network-theory.co.uk]
Sent: Fri 9/3/2004 10:14 AM
To: Anatoliy Belaygorod
Cc: gsl-discuss@sources.redhat.com
Subject: RE: High-dimensional Minimization without analytical derivatives



Anatoliy Belaygorod writes:
 > But which one is better?

The computational effort always depends on the details of the function
and the suitability of the starting point(s), as well as the algorithm.

In the absence of any theoretical guidance, one has to try numerical
derivatives and the simplex method, and see which is faster.

--
Brian Gough

Network Theory Ltd,
Commercial support for GSL --- http://www.network-theory.co.uk/gsl/


