
simplex minimization


I've been using GSL (1.3) to minimize a seven-parameter function against
some sample data I have, and the fdf minimizers seem to give me good
results (the conjugate_fr, conjugate_pr, and vector_bfgs algorithms all
yield similar numbers):

iteration: 6
params:
 -0.00131 gradient: -5610116.24709
  0.02920 gradient: -8297325.73283
 -0.05581 gradient: -184748.86988
 -0.00000 gradient:   0.00000
  0.00000 gradient:   0.00000
  0.00000 gradient:   0.00000
  0.20521 gradient: 48147814.97896
f(x): 1491335.000000 tot. gradient: 34055624.129164
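
(For reference, my fdf driver follows the stock GSL multimin pattern.
Below is a minimal sketch; my actual objective, gradient, and data are
replaced by placeholder declarations:)

#include <stdio.h>
#include <gsl/gsl_multimin.h>

/* placeholders for my actual objective and its analytic gradient */
double my_f (const gsl_vector *v, void *params);
void my_df (const gsl_vector *v, void *params, gsl_vector *df);
void my_fdf (const gsl_vector *v, void *params, double *f, gsl_vector *df);

int
main (void)
{
  const size_t n = 7;
  size_t iter = 0;
  int status;

  gsl_multimin_function_fdf func;
  func.f = &my_f;
  func.df = &my_df;
  func.fdf = &my_fdf;
  func.n = n;
  func.params = NULL;

  gsl_vector *x = gsl_vector_calloc (n);   /* starting point */

  gsl_multimin_fdfminimizer *s =
    gsl_multimin_fdfminimizer_alloc (gsl_multimin_fdfminimizer_vector_bfgs, n);
  gsl_multimin_fdfminimizer_set (s, &func, x, 0.01, 1e-4);

  do
    {
      iter++;
      status = gsl_multimin_fdfminimizer_iterate (s);
      if (status)
        break;
      status = gsl_multimin_test_gradient (s->gradient, 1e-3);
    }
  while (status == GSL_CONTINUE && iter < 100);

  printf ("f(x): %f after %u iterations\n", s->f, (unsigned) iter);

  gsl_multimin_fdfminimizer_free (s);
  gsl_vector_free (x);
  return 0;
}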

The parameter values above are around what I expect (based on comparison
with other programs that carry out a similar form of minimization), even
though the gradients may seem large (the 4th-6th parameters are forced to
zero in the example I'm testing).  Out of curiosity, I tried to compare
these values with those determined by the simplex algorithm.  However,
that minimization always stops and claims to have succeeded before
performing any iterations (with a size stopping criterion of 0.01):

iteration: 1
params:
  0.00000
  0.00000
  0.00000
  0.00000
  0.00000
  0.00000
  0.20521
 f(x): 1566284.000000 tot. size: 0.000000000000
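
My simplex setup follows the corresponding fminimizer pattern.  Again a
sketch; the step sizes shown are illustrative rather than the exact
values I used:

#include <stdio.h>
#include <gsl/gsl_multimin.h>

double my_f (const gsl_vector *v, void *params);   /* placeholder */

int
main (void)
{
  const size_t n = 7;
  size_t iter = 0;
  int status;

  gsl_multimin_function func;
  func.f = &my_f;
  func.n = n;
  func.params = NULL;

  gsl_vector *x = gsl_vector_calloc (n);    /* starting point */
  gsl_vector *ss = gsl_vector_alloc (n);    /* initial step sizes */
  gsl_vector_set_all (ss, 0.1);
  /* note: a step size of 0 in component i leaves the i-th trial
     vertex sitting on the starting point in that direction */

  gsl_multimin_fminimizer *s =
    gsl_multimin_fminimizer_alloc (gsl_multimin_fminimizer_nmsimplex, n);
  gsl_multimin_fminimizer_set (s, &func, x, ss);

  do
    {
      iter++;
      status = gsl_multimin_fminimizer_iterate (s);
      if (status)
        break;
      status = gsl_multimin_test_size (gsl_multimin_fminimizer_size (s), 0.01);
    }
  while (status == GSL_CONTINUE && iter < 100);

  gsl_multimin_fminimizer_free (s);
  gsl_vector_free (x);
  gsl_vector_free (ss);
  return 0;
}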

No matter the starting values, this always seems to be the case: the
reported simplex size is always 0.00.  Might it have something to do with
forcing some of the parameters in the minimization to zero?  If code would
be helpful in nailing this problem down, let me know.
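
One diagnostic I can try on my side (a sketch using the standard size
accessor) is to query the simplex extent immediately after the _set call,
before the first iterate; if it already reads 0 there, the initial
vertices coincide and the size test passes trivially:

  gsl_multimin_fminimizer_set (s, &func, x, ss);
  printf ("initial simplex size: %.12f\n",
          gsl_multimin_fminimizer_size (s));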

	Regards,
	Tim F

