Optimizers

ESCAPE provides powerful optimization algorithms for curve fitting, the iterative process of finding the parameters of a mathematical function that best fit experimental data points. The optimizer_obj class implements several algorithms:

  • Levenberg-Marquardt [1], [2] - A robust non-linear least-squares algorithm that combines gradient descent and Gauss-Newton methods

  • Differential Evolution [3] - A stochastic evolutionary algorithm that maintains a population of candidate solutions and iteratively improves them through mutation and crossover operations

  • Simulated Annealing - A global optimization algorithm that explores the parameter space via a temperature-driven random walk, with optional Levenberg-Marquardt polishing

The optimizers support:

  • Asynchronous optimization

  • Custom initialization, iteration and finalization callbacks

  • Progress monitoring and early stopping

  • Parameter constraints and bounds

  • Multiple optimization strategies

  • Automatic convergence detection

escape.core.optimizer.levmar(stack: modelstack_obj | List[model_obj] | model_obj, ftol: float = 1e-10, xtol: float = 1e-10, gtol: float = 1e-10, maxiter: int = 300, maxfev: int = 5000, nupdate: int = 1, status_exc: bool = False, name: str = 'Levenberg-Marquardt', notes: str = '', epsfcn: float = 0.0) optimizer_obj

Creates a Levenberg-Marquardt optimizer.

Args:

stack: Models to optimize (modelstack_obj, list of models or single model)

ftol: Relative error tolerance for the cost function (default: 1e-10)

xtol: Relative error tolerance for parameter values (default: 1e-10)

gtol: Orthogonality tolerance between the cost function and the Jacobian (default: 1e-10)

maxiter: Maximum iterations allowed (default: 300)

maxfev: Maximum function evaluations allowed (default: 5000)

nupdate: Iteration callback frequency (default: 1)

status_exc: Raise exception on non-zero status code (default: False)

name: Optimizer name

notes: Optimizer notes

epsfcn: Finite-difference step size hint for Jacobian computation. The actual step is sqrt(max(epsfcn, machine_eps)). Use ~1e-7 when the forward model runs in FP32 (e.g. GPU) to avoid step sizes smaller than the float ULP. Default: 0.0 (uses machine epsilon).

Returns:

optimizer_obj instance
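The epsfcn step rule quoted above is easy to sanity-check numerically. A stdlib-only sketch (jacobian_step is a hypothetical helper for illustration, not part of the ESCAPE API):

```python
import math
import sys

def jacobian_step(epsfcn: float = 0.0) -> float:
    """Finite-difference step for the Jacobian: sqrt(max(epsfcn, machine_eps))."""
    machine_eps = sys.float_info.epsilon  # ~2.22e-16 for float64
    return math.sqrt(max(epsfcn, machine_eps))

# Default (epsfcn=0.0): the step is sqrt(machine epsilon), roughly 1.5e-8 --
# smaller than the float32 ULP near 1.0 (~1.2e-7), so an FP32 forward model
# would see no change in its inputs.
default_step = jacobian_step()

# FP32 forward model: epsfcn ~1e-7 lifts the step to sqrt(1e-7) ~ 3.2e-4.
fp32_step = jacobian_step(1e-7)
```

This is why the docstring recommends ~1e-7 for GPU (FP32) models: the default step vanishes below single-precision resolution.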

escape.core.optimizer.diffevol(objective: modelstack_obj | List[model_obj] | model_obj | functor_obj, popsize: int = 15, strategy: str = 'best1bin', init_strategy: str | np.ndarray = 'random', maxiter: int = 1000, maxfev: int = 5000, ftol: float = 0.001, mutation: float = 0.5, crossover: float = 0.7, polish_candidate_maxiter: int = 0, polish_final_maxiter: int = 300, nupdate: int = 1, status_exc: bool = False, atol: float = 0.0, mutation_max: float | None = None, name: str = 'Differential Evolution', notes: str = '') optimizer_obj | functor_obj

Creates a Differential Evolution optimizer.

The algorithm maintains a population of candidate solutions and evolves them using mutation and crossover operations to find the global minimum. Population size is at least max(6, popsize * num_params); mutation is in [0, 2).
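The population-size rule stated above can be written out directly (actual_popsize is an illustrative helper, not an ESCAPE function):

```python
def actual_popsize(popsize: int, num_params: int) -> int:
    """Effective DE population size: at least max(6, popsize * num_params)."""
    return max(6, popsize * num_params)

# With the default popsize=15 and a 4-parameter model: 15 * 4 = 60 members.
# A tiny problem (e.g. popsize=1, 2 parameters) is padded up to the minimum of 6.
```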

Args:

objective: Models or functor to optimize

popsize: Population size multiplier (actual size >= max(6, popsize * num_params))

strategy: Evolution strategy (default: 'best1bin')

Supported strategies:
  • 'best1bin' / 'best1exp'
  • 'rand1bin' / 'rand1exp'
  • 'randtobest1bin' / 'randtobest1exp'
  • 'currenttobest1bin' / 'currenttobest1exp'
  • 'best2bin' / 'best2exp'
  • 'rand2bin' / 'rand2exp'
  • 'sqg2bin'/'sqg2exp' through 'sqg5bin'/'sqg5exp', 'sqgabin'/'sqgaexp'

init_strategy: Initialization method (default: 'random')
  • 'random': Uniform random initialization

  • 'lhs': Latin hypercube sampling

  • numpy array: Custom initial population (min 6 rows)

maxiter: Maximum generations (default: 1000)

maxfev: Maximum function evaluations (default: 5000)

ftol: Relative convergence tolerance (default: 1e-3); stop when std <= atol + ftol*|mean|

mutation: Mutation factor F in [0, 2) (default: 0.5)

crossover: Crossover probability CR in [0, 1] (default: 0.7)

polish_candidate_maxiter: LM polish iterations per candidate (default: 0)

polish_final_maxiter: Final LM polish iterations (default: 300)

nupdate: Iteration callback frequency (default: 1)

status_exc: Raise exception on non-zero status (default: False)

atol: Absolute convergence tolerance (default: 0); stop when std <= atol + ftol*|mean|

mutation_max: If > mutation, dither F ~ U(mutation, mutation_max) each generation (default: None = off)

name: Optimizer name

notes: Optimizer notes

Returns:

optimizer_obj for models, functor_obj for functors
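The stopping rule quoted for ftol/atol (stop when std <= atol + ftol*|mean|, taken over the population's cost values) can be sketched with the stdlib alone (de_converged is an illustrative helper, not an ESCAPE function):

```python
from statistics import mean, pstdev

def de_converged(costs, ftol: float = 1e-3, atol: float = 0.0) -> bool:
    """DE stopping rule from the docs: std(costs) <= atol + ftol * |mean(costs)|."""
    return pstdev(costs) <= atol + ftol * abs(mean(costs))

# A tightly clustered population of cost values satisfies the criterion;
# a spread-out one does not.
clustered = [1.000, 1.001, 0.999, 1.0005]
spread = [1.0, 2.0, 3.0]
```

Note that because the threshold scales with |mean|, a population whose mean cost sits near zero effectively requires std <= atol, which is why atol exists as a separate knob.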

escape.core.optimizer.sa(objective: modelstack_obj | List[model_obj] | model_obj | functor_obj, t_initial: float = 0.0, t_min: float = 0.001, cooling_rate: float = 0.9, schedule: str = 'exponential', iterations_per_temp: int = 50, maxiter: int = 1000, maxfev: int = 50000, ftol: float = 1e-05, step_size: float = 0.1, adaptive_step: bool = True, polish_candidate_maxiter: int = 0, polish_maxiter: int = 0, nupdate: int = 1, status_exc: bool = False, name: str = 'Simulated Annealing', notes: str = '') optimizer_obj

Creates a Simulated Annealing optimizer for model stacks.

The algorithm explores the parameter space using a temperature-driven random walk with optional Levenberg-Marquardt polishing of accepted candidates and final solutions.

Args:

objective: Models to optimize (modelstack_obj, list of models or single model)

t_initial: Initial temperature (default: 0.0)

t_min: Minimum temperature (default: 1e-3)

cooling_rate: Cooling rate (default: 0.9)

schedule: Cooling schedule (default: 'exponential')

iterations_per_temp: Iterations per temperature (default: 50)

maxiter: Maximum iterations (default: 1000)

maxfev: Maximum function evaluations (default: 50000)

ftol: Relative convergence tolerance (default: 1e-5)

step_size: Step size (default: 0.1)

adaptive_step: Use adaptive step size (default: True)

polish_candidate_maxiter: LM polish iterations per candidate (default: 0)

polish_maxiter: Final LM polish iterations (default: 0)

nupdate: Iteration callback frequency (default: 1)

status_exc: Raise exception on non-zero status (default: False)

name: Optimizer name

notes: Optimizer notes

Returns:

optimizer_obj instance
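The docs do not spell out the 'exponential' schedule's formula; assuming the standard form t_{k+1} = cooling_rate * t_k (an assumption, not confirmed by the source), the number of temperature levels between a starting temperature and t_min is easy to compute (exponential_schedule is an illustrative helper, not an ESCAPE function):

```python
def exponential_schedule(t_initial: float, cooling_rate: float = 0.9,
                         t_min: float = 1e-3):
    """Yield temperatures t_k = t_initial * cooling_rate**k while t_k > t_min.
    Standard exponential cooling -- assumed here; the docs only name the schedule."""
    t = t_initial
    while t > t_min:
        yield t
        t *= cooling_rate

# From t=1.0 with the default cooling_rate=0.9 and t_min=1e-3, the walk
# visits 66 temperature levels before dropping below t_min; at 50
# iterations_per_temp that budget alone is 3300 candidate steps.
levels = list(exponential_schedule(1.0))
```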

class escape.core.optimizer.optimizer_obj

Wrapper class for ESCAPE optimizers.

This class provides a Pythonic interface to the C++ optimization algorithms. It handles parameter management, optimization control, and progress monitoring.

best_cost

Best cost value found. For Levenberg-Marquardt this is the final cost. For stochastic methods this is the best cost across all iterations.

cost_history

History of cost values across iterations.

finalization_method

Custom finalization callback.

initial_parameters

Initial parameter values.

initialization_method

Custom initialization callback.

iteration_method

Custom iteration callback.

modelstack

The modelstack being optimized.

name

Optimizer instance name.

num_of_evaluations

Number of objective function evaluations performed.

num_of_iterations

Number of completed optimization iterations.

on_finalized() None

Called when optimization completes.

on_initialized() None

Called when optimization starts.

on_iteration() None

Called after each iteration (frequency controlled by nupdate setting).
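The iteration callback is the natural place to implement early stopping. A minimal self-contained mock of the contract described above (this is NOT the real optimizer_obj, whose internals are in C++; the mock only illustrates how an iteration_method that calls stop() interrupts the run, with the 0.5x cost decay standing in for real optimization steps):

```python
class MockOptimizer:
    """Sketch of the callback contract: iteration_method fires every nupdate
    iterations and may call stop() to interrupt the loop."""

    def __init__(self, maxiter: int = 100, nupdate: int = 1):
        self.maxiter = maxiter
        self.nupdate = nupdate
        self.cost_history = []
        self.num_of_iterations = 0
        self.iteration_method = None   # custom callback slot
        self._stopped = False

    def stop(self):
        self._stopped = True

    def run(self):
        cost = 1.0
        for i in range(self.maxiter):
            cost *= 0.5                      # stand-in for one optimization step
            self.cost_history.append(cost)
            self.num_of_iterations = i + 1
            if self.iteration_method and (i + 1) % self.nupdate == 0:
                self.iteration_method(self)
            if self._stopped:
                break

# Early stopping: interrupt once the cost drops below a target.
def early_stop(opt):
    if opt.cost_history[-1] < 1e-3:
        opt.stop()

opt = MockOptimizer(maxiter=100, nupdate=1)
opt.iteration_method = early_stop
opt.run()
# 0.5**10 ~ 9.8e-4 < 1e-3, so the run halts after 10 of 100 iterations.
```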

parameters

List of optimization parameters.

progress

Optimization progress, reported as iter/maxiter.

reset_to_initial() None

Reset parameters to their initial values.

shake() None

Randomizes all non-fixed independent parameters.

show(model_configs: Any | None = None, config: Any | None = None, **kwargs) Any

Show optimizer object in a widget.

Args:

model_configs: Model configurations to display; a single ModelConfig instance or a list of ModelConfig instances

config: Layout configuration; a LayoutConfig instance

**kwargs: Additional keyword arguments

Returns:

OptimizerObjectLayout instance

status_code

Numeric status code indicating optimization outcome.

status_msg

Status message describing the optimization state.

stop() None

Interrupts the optimization process.

wait() None

Blocks until the asynchronous optimization completes. Only applicable when the optimizer was started with asynchr=True.

escape.core.optimizer.optimization_status(func: functor_obj) dict

Get optimization status for a functor.

Args:

func: Functor object with minimizer handler

Returns:

Dictionary with status code and function evaluation count