SeparableModelResult

class SeparableModelResult(model, initial_parameter, nnls, equality_constraints, *args, nan_policy='raise', **kwargs)[source]

Bases: lmfit.minimizer.Minimizer
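
The snippet below is a minimal usage sketch, assuming my_model is a prepared separable model instance and params an lmfit.Parameters object; both names are hypothetical placeholders.

    # Minimal sketch; `my_model` and `params` are hypothetical objects
    # prepared elsewhere, matching the constructor signature above.
    result = SeparableModelResult(my_model, params,
                                  nnls=False, equality_constraints=[])
    result.fit()                      # run the fit
    print(result.values)              # Parameter values as a simple dictionary
    print(result.fitresult.chisqr)    # statistics on the lmfit.MinimizerResult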

Attributes Summary

fitresult The lmfit.MinimizerResult returned by the minimization.
values Return Parameter values in a simple dictionary.

Methods Summary

ampgo Find the global minimum of a multivariate function using AMPGO.
basinhopping Use the basinhopping algorithm to find the global minimum of a function.
brute Use the brute method to find the global minimum of a function.
c_matrix
e_matrix
emcee Bayesian sampling of the posterior distribution using emcee.
eval
final_residual
final_residual_svd
fit
get_model
least_squares Least-squares minimization using scipy.optimize.least_squares.
leastsq Use Levenberg-Marquardt minimization to perform a fit.
minimize Perform the minimization.
penalty Penalty function for scalar minimizers.
prepare_fit Prepare parameters for fitting.
scalar_minimize Scalar minimization using scipy.optimize.minimize.
unprepare_fit Clean fit state, so that subsequent fits need to call prepare_fit().

Methods Documentation

ampgo(params=None, **kws)

Find the global minimum of a multivariate function using AMPGO.

AMPGO stands for ‘Adaptive Memory Programming for Global Optimization’ and is an efficient algorithm to find the global minimum.

Parameters:
  • params (Parameters, optional) – Contains the Parameters for the model. If None, then the Parameters used to initialize the Minimizer object are used.
  • **kws (dict, optional) –

    Minimizer options to pass to the ampgo algorithm; the options are listed below:

    local: str (default is 'L-BFGS-B')
        Name of the local minimization method. Valid options are:
        - 'L-BFGS-B'
        - 'Nelder-Mead'
        - 'Powell'
        - 'TNC'
        - 'SLSQP'
    local_opts: dict (default is None)
        Options to pass to the local minimizer.
    maxfunevals: int (default is None)
        Maximum number of function evaluations. If None, the optimization will stop
        after `totaliter` number of iterations.
    totaliter: int (default is 20)
        Maximum number of global iterations.
    maxiter: int (default is 5)
        Maximum number of `Tabu Tunneling` iterations during each global iteration.
    glbtol: float (default is 1e-5)
        Tolerance for deciding whether to accept a solution after a tunneling phase.
    eps1: float (default is 0.02)
        Constant used to define an aspiration value for the objective function during
        the Tunneling phase.
    eps2: float (default is 0.1)
        Perturbation factor used to move away from the latest local minimum at the
        start of a Tunneling phase.
    tabulistsize: int (default is 5)
        Size of the (circular) tabu search list.
    tabustrategy: str (default is 'farthest')
        Strategy to use when the size of the tabu list exceeds `tabulistsize`. It
        can be 'oldest' to drop the oldest point from the tabu list or 'farthest'
        to drop the element farthest from the last local minimum found.
    disp: bool (default is False)
        Set to True to print convergence messages.
    
Returns:

Object containing the parameters from the ampgo method, with fit parameters, statistics and such. The return values (x0, fval, eval, msg, tunnel) are stored as ampgo_<parname> attributes.

Return type:

MinimizerResult

New in version 0.9.10.

Notes

The Python implementation was written by Andrea Gavana in 2014 (http://infinity77.net/global_optimization/index.html).

The details of the AMPGO algorithm are described in the paper “Adaptive Memory Programming for Constrained Global Optimization” located here:

http://leeds-faculty.colorado.edu/glover/fred%20pubs/416%20-%20AMP%20(TS)%20for%20Constrained%20Global%20Opt%20w%20Lasdon%20et%20al%20.pdf
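
A hedged example of passing the options above through **kws; mini stands for any already-configured instance of this class or of Minimizer:

    # Sketch: AMPGO with a Nelder-Mead local solver and progress messages;
    # `mini` is assumed to be an already-configured Minimizer instance.
    out = mini.ampgo(local='Nelder-Mead', totaliter=30, disp=True)
    out.params.pretty_print()         # inspect the resulting parameters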

basinhopping(params=None, **kws)

Use the basinhopping algorithm to find the global minimum of a function.

This method calls scipy.optimize.basinhopping using the default arguments. The default minimizer is BFGS, but since lmfit supports parameter bounds for all minimizers, the user can choose any of the solvers present in scipy.optimize.minimize.

Parameters: params (Parameters, optional) – Contains the Parameters for the model. If None, then the Parameters used to initialize the Minimizer object are used.
Returns: Object containing the optimization results from the basinhopping algorithm.
Return type: MinimizerResult

New in version 0.9.10.
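
As a sketch, extra keywords are forwarded to scipy.optimize.basinhopping; mini is again an assumed, already-configured Minimizer:

    # Sketch: 200 basin-hopping iterations; `niter` is a standard
    # scipy.optimize.basinhopping keyword.
    out = mini.basinhopping(niter=200)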

brute(params=None, Ns=20, keep=50)

Use the brute method to find the global minimum of a function.

The following parameters are passed to scipy.optimize.brute and cannot be changed:

brute() arg   Value   Description
full_output   1       Return the evaluation grid and the objective function’s values on it.
finish        None    No “polishing” function is to be used after the grid search.
disp          False   Do not print convergence messages (when finish is not None).

It assumes that the input Parameters have been initialized, and a function to minimize has been properly set up.

Parameters:
  • params (Parameters, optional) – Contains the Parameters for the model. If None, then the Parameters used to initialize the Minimizer object are used.
  • Ns (int, optional) – Number of grid points along the axes, if not otherwise specified (see Notes).
  • keep (int, optional) – Number of best candidates from the brute force method that are stored in the candidates attribute. If ‘all’, then all grid points from scipy.optimize.brute are stored as candidates.
Returns:

Object containing the parameters from the brute force method. The return values (x0, fval, grid, Jout) from scipy.optimize.brute are stored as brute_<parname> attributes. The MinimizerResult also contains the candidates attribute and show_candidates() method. The candidates attribute contains the parameters and chisqr from the brute force method as a namedtuple, (‘Candidate’, [‘params’, ‘score’]), sorted on the (lowest) chisqr value. To access the values for a particular candidate one can use result.candidates[#].params or result.candidates[#].score, where a lower # represents a better candidate. The show_candidates(#) method uses the pretty_print() method to show a specific candidate-# or all candidates when no number is specified.

Return type:

MinimizerResult

New in version 0.9.6.

Notes

The brute() method evaluates the function at each point of a multidimensional grid of points. The grid points are generated from the parameter ranges using Ns and (optional) brute_step. The implementation in scipy.optimize.brute requires finite bounds and the range is specified as a two-tuple (min, max) or slice-object (min, max, brute_step). A slice-object is used directly, whereas a two-tuple is converted to a slice object that interpolates Ns points from min to max, inclusive.

In addition, the brute() method in lmfit handles three other scenarios given below with their respective slice-object:

  • lower bound (min) and brute_step are specified:
    range = (min, min + Ns * brute_step, brute_step).
  • upper bound (max) and brute_step are specified:
    range = (max - Ns * brute_step, max, brute_step).
  • numerical value (value) and brute_step are specified:
    range = (value - (Ns//2) * brute_step, value + (Ns//2) * brute_step, brute_step).
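
A hedged sketch of a grid search and of reading back the stored candidates; mini is an assumed, configured Minimizer:

    # Sketch: grid search keeping the 25 best points; every varying
    # Parameter needs finite (min, max) bounds or a brute_step.
    out = mini.brute(Ns=20, keep=25)
    out.show_candidates(1)                  # pretty-print the best candidate
    best_params = out.candidates[0].params  # Parameters of the best grid point
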
c_matrix(*args, **kwargs)[source]

e_matrix(*args, **kwargs)[source]

emcee(params=None, steps=1000, nwalkers=100, burn=0, thin=1, ntemps=1, pos=None, reuse_sampler=False, workers=1, float_behavior='posterior', is_weighted=True, seed=None, progress=True)

Bayesian sampling of the posterior distribution using emcee.

Bayesian sampling of the posterior distribution for the parameters using the emcee Markov Chain Monte Carlo package. The method assumes that the prior is Uniform. You need to have emcee installed to use this method.

Parameters:
  • params (Parameters, optional) – Parameters to use as starting point. If this is not specified then the Parameters used to initialize the Minimizer object are used.
  • steps (int, optional) – The number of samples to draw from the posterior distribution for each of the walkers.
  • nwalkers (int, optional) – Should be set so \(nwalkers >> nvarys\), where nvarys are the number of parameters being varied during the fit. “Walkers are the members of the ensemble. They are almost like separate Metropolis-Hastings chains but, of course, the proposal distribution for a given walker depends on the positions of all the other walkers in the ensemble.” - from the emcee webpage.
  • burn (int, optional) – Discard this many samples from the start of the sampling regime.
  • thin (int, optional) – Only accept 1 in every thin samples.
  • ntemps (int, optional) – If ntemps > 1 perform a Parallel Tempering.
  • pos (numpy.ndarray, optional) – Specify the initial positions for the sampler. If ntemps == 1 then pos.shape should be (nwalkers, nvarys). Otherwise, (ntemps, nwalkers, nvarys). You can also initialise using a previous chain that had the same ntemps, nwalkers and nvarys. Note that nvarys may be one larger than you expect it to be if your userfcn returns an array and is_weighted is False.
  • reuse_sampler (bool, optional) – If you have already run emcee on a given Minimizer object then it possesses an internal sampler attribute. You can continue to draw from the same sampler (retaining the chain history) if you set this option to True. Otherwise a new sampler is created. The nwalkers, ntemps, pos, and params keywords are ignored with this option. Important: the Parameters used to create the sampler must not change in-between calls to emcee. Alteration of Parameters would include changed min, max, vary and expr attributes. This may happen, for example, if you use an altered Parameters object and call the minimize method in-between calls to emcee.
  • workers (Pool-like or int, optional) – For parallelization of sampling. It can be any Pool-like object with a map method that follows the same calling sequence as the built-in map function. If int is given as the argument, then a multiprocessing-based pool is spawned internally with the corresponding number of parallel processes. ‘mpi4py’-based parallelization and ‘joblib’-based parallelization pools can also be used here. Note: because of multiprocessing overhead it may only be worth parallelising if the objective function is expensive to calculate, or if there are a large number of objective evaluations per step (ntemps * nwalkers * nvarys).
  • float_behavior (str, optional) –

    Specifies meaning of the objective function output if it returns a float. One of:

    • ’posterior’ - objective function returns a log-posterior probability
    • ’chi2’ - objective function returns \(\chi^2\)

    See Notes for further details.

  • is_weighted (bool, optional) – Has your objective function been weighted by measurement uncertainties? If is_weighted is True then your objective function is assumed to return residuals that have been divided by the true measurement uncertainty (data - model) / sigma. If is_weighted is False then the objective function is assumed to return unweighted residuals, data - model. In this case emcee will employ a positive measurement uncertainty during the sampling. This measurement uncertainty will be present in the output params and output chain with the name __lnsigma. A side effect of this is that you cannot use this parameter name yourself. Important: this parameter only has an effect if your objective function returns an array. If your objective function returns a float, then this parameter is ignored. See Notes for more details.
  • seed (int or numpy.random.RandomState, optional) – If seed is an int, a new numpy.random.RandomState instance is used, seeded with seed. If seed is already a numpy.random.RandomState instance, then that numpy.random.RandomState instance is used. Specify seed for repeatable minimizations.
Returns:

MinimizerResult object containing updated params, statistics, etc. The updated params represent the median (50th percentile) of all the samples, whilst the parameter uncertainties are half of the difference between the 15.87 and 84.13 percentiles. The MinimizerResult also contains the chain, flatchain and lnprob attributes. The chain and flatchain attributes contain the samples and have the shape (nwalkers, (steps - burn) // thin, nvarys) or (ntemps, nwalkers, (steps - burn) // thin, nvarys), depending on whether Parallel tempering was used or not. nvarys is the number of parameters that are allowed to vary. The flatchain attribute is a pandas.DataFrame of the flattened chain, chain.reshape(-1, nvarys). To access flattened chain values for a particular parameter use result.flatchain[parname]. The lnprob attribute contains the log probability for each sample in chain. The sample with the highest probability corresponds to the maximum likelihood estimate.

Return type:

MinimizerResult

Notes

This method samples the posterior distribution of the parameters using Markov Chain Monte Carlo. To do so it needs to calculate the log-posterior probability of the model parameters, F, given the data, D, \(\ln p(F_{true} | D)\). This ‘posterior probability’ is calculated as:

\[\ln p(F_{true} | D) \propto \ln p(D | F_{true}) + \ln p(F_{true})\]

where \(\ln p(D | F_{true})\) is the ‘log-likelihood’ and \(\ln p(F_{true})\) is the ‘log-prior’. The default log-prior encodes prior information already known about the model. This method assumes that the log-prior probability is -numpy.inf (impossible) if one of the parameters is outside its limits. The log-prior probability term is zero if all the parameters are inside their bounds (known as a uniform prior). The log-likelihood function is given by [1]:

\[\ln p(D|F_{true}) = -\frac{1}{2}\sum_n \left[\frac{(g_n(F_{true}) - D_n)^2}{s_n^2}+\ln (2\pi s_n^2)\right]\]

The first summand in the square brackets represents the residual for a given datapoint (\(g\) being the generative model, \(D_n\) the data and \(s_n\) the standard deviation, or measurement uncertainty, of the datapoint). This term represents \(\chi^2\) when summed over all data points. Ideally the objective function used to create lmfit.Minimizer should return the log-posterior probability, \(\ln p(F_{true} | D)\). However, since the in-built log-prior term is zero, the objective function can also just return the log-likelihood, unless you wish to create a non-uniform prior.

If a float value is returned by the objective function then this value is assumed by default to be the log-posterior probability, i.e. float_behavior is ‘posterior’. If your objective function returns \(\chi^2\), then you should use a value of ‘chi2’ for float_behavior. emcee will then multiply your \(\chi^2\) value by -0.5 to obtain the posterior probability.

However, the default behaviour of many objective functions is to return a vector of (possibly weighted) residuals. Therefore, if your objective function returns a vector, res, then the vector is assumed to contain the residuals. If is_weighted is True then your residuals are assumed to be correctly weighted by the standard deviation (measurement uncertainty) of the data points (res = (data - model) / sigma) and the log-likelihood (and log-posterior probability) is calculated as: -0.5 * numpy.sum(res**2). This ignores the second summand in the square brackets. Consequently, in order to calculate a fully correct log-posterior probability value your objective function should return a single value. If is_weighted is False then the data uncertainty, s_n, will be treated as a nuisance parameter and will be marginalized out. This is achieved by employing a strictly positive uncertainty (homoscedasticity) for each data point, \(s_n = \exp(\_\_lnsigma)\). __lnsigma will be present in MinimizerResult.params, as well as Minimizer.chain; nvarys will also be increased by one.
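
For the weighted-residual case, the log-likelihood actually used reduces to a one-liner; the sketch below simply restates it, with res assumed to be the array returned by the objective function:

    import numpy as np

    # Sketch: log-likelihood derived from a weighted residual vector
    # res = (data - model) / sigma when is_weighted is True; the second
    # summand in the bracketed expression above is dropped.
    def log_likelihood(res):
        return -0.5 * np.sum(res**2)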

References

[1] http://dan.iel.fm/emcee/current/user/line/
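
A hedged usage sketch; mini is an assumed, configured Minimizer and the parameter name 'tau' is purely illustrative:

    # Sketch: posterior sampling with burn-in and thinning; assumes the
    # objective returns residuals already weighted by the uncertainties.
    out = mini.emcee(steps=2000, nwalkers=100, burn=300, thin=10,
                     is_weighted=True)
    print(out.params)                     # medians of the posterior samples
    tau_samples = out.flatchain['tau']    # 'tau' is a hypothetical parameter
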
eval(*args, **kwargs)[source]

final_residual(*args, **kwargs)[source]

final_residual_svd(*args, **kwargs)[source]

fit(*args, **kwargs)[source]

fitresult

The lmfit.MinimizerResult returned by the minimization.

get_model()[source]

least_squares(params=None, **kws)

Least-squares minimization using scipy.optimize.least_squares.

This method wraps scipy.optimize.least_squares, which has inbuilt support for bounds and robust loss functions. By default it uses the Trust Region Reflective algorithm with a linear loss function (i.e., the standard least-squares problem).

Parameters:
  • params (Parameters, optional) – Parameters to use as starting point.
  • **kws (dict, optional) – Minimizer options to pass to scipy.optimize.least_squares.
Returns:

Object containing the optimized parameter and several goodness-of-fit statistics.

Return type:

MinimizerResult

Changed in version 0.9.0: Return value changed to MinimizerResult.
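
For example, a robust-loss fit can be requested by forwarding standard scipy.optimize.least_squares keywords; mini is an assumed, configured Minimizer:

    # Sketch: soft-L1 robust loss; `loss` and `f_scale` are passed through
    # unchanged to scipy.optimize.least_squares.
    out = mini.least_squares(loss='soft_l1', f_scale=0.5)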

leastsq(params=None, **kws)

Use Levenberg-Marquardt minimization to perform a fit.

It assumes that the input Parameters have been initialized, and a function to minimize has been properly set up. When possible, this calculates the estimated uncertainties and variable correlations from the covariance matrix.

This method calls scipy.optimize.leastsq. By default, numerical derivatives are used, and the following arguments are set:

leastsq() arg   Default Value   Description
xtol            1.e-7           Relative error in the approximate solution
ftol            1.e-7           Relative error in the desired sum of squares
maxfev          2000*(nvar+1)   Maximum number of function calls (nvar = # of variables)
Dfun            None            Function to call for Jacobian calculation

Parameters:
  • params (Parameters, optional) – Parameters to use as starting point.
  • **kws (dict, optional) – Minimizer options to pass to scipy.optimize.leastsq.
Returns:

Object containing the optimized parameter and several goodness-of-fit statistics.

Return type:

MinimizerResult

Changed in version 0.9.0: Return value changed to MinimizerResult.
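
A sketch overriding the default tolerances listed above; mini is an assumed, configured Minimizer:

    import lmfit

    # Sketch: Levenberg-Marquardt with tighter tolerances; the keywords are
    # forwarded to scipy.optimize.leastsq.
    out = mini.leastsq(xtol=1e-10, ftol=1e-10)
    print(lmfit.fit_report(out))          # summarize fit statistics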

minimize(method='leastsq', params=None, **kws)

Perform the minimization.

Parameters:
  • method (str, optional) –

    Name of the fitting method to use. Valid values are:

    • ’leastsq’: Levenberg-Marquardt (default)
    • ’least_squares’: Least-Squares minimization, using Trust Region Reflective method
    • ’differential_evolution’: differential evolution
    • ’brute’: brute force method
    • ’basinhopping’: basinhopping
    • ’ampgo’: Adaptive Memory Programming for Global Optimization
    • ’nelder’: Nelder-Mead
    • ’lbfgsb’: L-BFGS-B
    • ’powell’: Powell
    • ’cg’: Conjugate-Gradient
    • ’newton’: Newton-CG
    • ’cobyla’: Cobyla
    • ’bfgs’: BFGS
    • ’tnc’: Truncated Newton
    • ’trust-ncg’: Newton-CG trust-region
    • ’trust-exact’: nearly exact trust-region (SciPy >= 1.0)
    • ’trust-krylov’: Newton GLTR trust-region (SciPy >= 1.0)
    • ’trust-constr’: trust-region for constrained optimization (SciPy >= 1.1)
    • ’dogleg’: Dog-leg trust-region
    • ’slsqp’: Sequential Least Squares Programming
    • ’emcee’: Maximum likelihood via Markov Chain Monte Carlo

    In most cases, these methods wrap and use the method with the same name from scipy.optimize, or use scipy.optimize.minimize with the same method argument. Thus ‘leastsq’ will use scipy.optimize.leastsq, while ‘powell’ will use scipy.optimize.minimize(…, method=’powell’).

    For more details on the fitting methods please refer to the SciPy docs.

  • params (Parameters, optional) – Parameters of the model to use as starting values.
  • **kws (optional) – Additional arguments are passed to the underlying minimization method.
Returns:

Object containing the optimized parameter and several goodness-of-fit statistics.

Return type:

MinimizerResult

Changed in version 0.9.0: Return value changed to MinimizerResult.
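
The names above map directly onto the method argument, as in this sketch; mini is an assumed, configured Minimizer:

    # Sketch: the generic entry point dispatches on `method`; additional
    # keywords are handed to the underlying solver.
    out = mini.minimize(method='nelder')
    out_de = mini.minimize(method='differential_evolution')  # needs finite bounds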

penalty(fvars)

Penalty function for scalar minimizers.

Parameters: fvars (numpy.ndarray) – Array of values for the variable parameters.
Returns: r – The evaluated user-supplied objective function. If the objective function returns an array of size greater than 1, the scalar returned by self.reduce_fcn is used; this defaults to sum-of-squares but can be replaced by other options.
Return type: float

prepare_fit(params=None)

Prepare parameters for fitting.

Prepares and initializes model and Parameters for subsequent fitting. This routine prepares the conversion of Parameters into fit variables, organizes parameter bounds, and parses, “compiles” and checks constraint expressions. The method also creates and returns a new instance of a MinimizerResult object that contains a copy of the Parameters that will actually be varied in the fit.

Parameters: params (Parameters, optional) – Contains the Parameters for the model; if None, then the Parameters used to initialize the Minimizer object are used.
Returns: A new MinimizerResult instance containing the copy of the Parameters that will be varied.
Return type: MinimizerResult

Notes

This method is called directly by the fitting methods, and it is generally not necessary to call this function explicitly.

Changed in version 0.9.0: Return value changed to MinimizerResult.

scalar_minimize(method='Nelder-Mead', params=None, **kws)

Scalar minimization using scipy.optimize.minimize.

Perform fit with any of the scalar minimization algorithms supported by scipy.optimize.minimize. Default argument values are:

scalar_minimize() arg   Default Value   Description
method                  Nelder-Mead     fitting method
tol                     1.e-7           fitting and parameter tolerance
hess                    None            Hessian of objective function

Parameters:
  • method (str, optional) –

    Name of the fitting method to use. One of:

    • ’Nelder-Mead’ (default)
    • ’L-BFGS-B’
    • ’Powell’
    • ’CG’
    • ’Newton-CG’
    • ’COBYLA’
    • ’BFGS’
    • ’TNC’
    • ’trust-ncg’
    • ’trust-exact’ (SciPy >= 1.0)
    • ’trust-krylov’ (SciPy >= 1.0)
    • ’trust-constr’ (SciPy >= 1.1)
    • ’dogleg’
    • ’SLSQP’
    • ’differential_evolution’
  • params (Parameters, optional) – Parameters to use as starting point.
  • **kws (dict, optional) – Minimizer options to pass to scipy.optimize.minimize.
Returns:

Object containing the optimized parameter and several goodness-of-fit statistics.

Return type:

MinimizerResult

Changed in version 0.9.0: Return value changed to MinimizerResult.

Notes

If the objective function returns a NumPy array instead of the expected scalar, the sum of squares of the array will be used.

Note that bounds and constraints can be set on Parameters for any of these methods, so are not supported separately for those designed to use bounds. However, if you use the differential_evolution method you must specify finite (min, max) for each varying Parameter.
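
A sketch of the differential_evolution case, the one method that additionally requires finite bounds; mini is an assumed, configured Minimizer:

    # Sketch: every varying Parameter must carry finite (min, max) bounds;
    # `tol` is forwarded to the underlying SciPy routine.
    out = mini.scalar_minimize(method='differential_evolution', tol=1e-8)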

unprepare_fit()

Clean fit state, so that subsequent fits need to call prepare_fit().

Removes AST compilations of constraint expressions.

values

Return Parameter values in a simple dictionary.