Line searches, trust regions, learning rates
optimistix.AbstractSearch

The abstract base class for all searches. (These are our generalisation of line searches, trust regions, and learning rates.) See this documentation for more information.

optimistix.LearningRate (AbstractSearch)

Move downhill by taking a step of the fixed size learning_rate.

__init__(self, learning_rate: Any)

Arguments:

- learning_rate: The fixed step-size used at each step.
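To illustrate what a fixed learning rate means in practice, here is a minimal pure-Python sketch (not Optimistix's implementation) of gradient descent where every iteration moves by the same fixed step-size:

```python
def fixed_step_descent(grad_f, x0, learning_rate=0.1, steps=100):
    """Gradient descent with a fixed step-size: each iteration moves
    by learning_rate * (-gradient), with no adaptation of the step."""
    x = list(x0)
    for _ in range(steps):
        g = grad_f(x)
        x = [xi - learning_rate * gi for xi, gi in zip(x, g)]
    return x

# Minimise f(x) = x0^2 + x1^2, whose gradient is (2*x0, 2*x1).
grad = lambda x: [2 * x[0], 2 * x[1]]
x_min = fixed_step_descent(grad, [3.0, 4.0])  # converges towards (0, 0)
```

A fixed learning rate is the simplest possible search: cheap per step, but it places the burden of choosing a good step-size on the user, unlike the adaptive searches below.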
optimistix.BacktrackingArmijo (AbstractSearch)

Perform a backtracking Armijo line search.

__init__(self, decrease_factor: Union[Array, ndarray, numpy.bool, numpy.number, bool, int, float, complex] = 0.5, slope: Union[Array, ndarray, numpy.bool, numpy.number, bool, int, float, complex] = 0.1, step_init: Union[Array, ndarray, numpy.bool, numpy.number, bool, int, float, complex] = 1.0)

Arguments:

- decrease_factor: The rate at which to backtrack, i.e. next_stepsize = decrease_factor * current_stepsize. Must be between 0 and 1.
- slope: The slope of the linear approximation to f that the backtracking algorithm must exceed to terminate. Larger means stricter termination criteria. Must be between 0 and 1.
- step_init: The first step_size the backtracking algorithm will try. Must be greater than 0.
optimistix.ClassicalTrustRegion (AbstractSearch)

The classic trust-region update algorithm, which uses a quadratic approximation of the objective function to predict reduction.

Building a quadratic approximation requires an approximation to the Hessian of the overall minimisation function. This means that this trust region is suitable for use with least-squares algorithms (which make the Gauss--Newton approximation Hessian ≈ J^T J, where J is the Jacobian of the residuals) and with quasi-Newton minimisation algorithms like optimistix.BFGS. (An error will be raised if you use this with an incompatible solver.)
__init__(self, high_cutoff: Union[Array, ndarray, numpy.bool, numpy.number, bool, int, float, complex] = 0.99, low_cutoff: Union[Array, ndarray, numpy.bool, numpy.number, bool, int, float, complex] = 0.01, high_constant: Union[Array, ndarray, numpy.bool, numpy.number, bool, int, float, complex] = 3.5, low_constant: Union[Array, ndarray, numpy.bool, numpy.number, bool, int, float, complex] = 0.25)

In the following, ratio refers to the ratio true_reduction / predicted_reduction.

Arguments:

- high_cutoff: the cutoff such that ratio > high_cutoff will accept the step and increase the step-size on the next iteration.
- low_cutoff: the cutoff such that ratio < low_cutoff will reject the step and decrease the step-size on the next iteration.
- high_constant: when ratio > high_cutoff, multiply the previous step-size by high_constant.
- low_constant: when ratio < low_cutoff, multiply the previous step-size by low_constant.
optimistix.LinearTrustRegion (AbstractSearch)

The trust-region update algorithm which uses a linear approximation of the objective function to predict reduction.

Generally speaking you should prefer optimistix.ClassicalTrustRegion, unless you happen to be using a solver (e.g. a non-quasi-Newton minimiser) with which that is incompatible.
__init__(self, high_cutoff: Union[Array, ndarray, numpy.bool, numpy.number, bool, int, float, complex] = 0.99, low_cutoff: Union[Array, ndarray, numpy.bool, numpy.number, bool, int, float, complex] = 0.01, high_constant: Union[Array, ndarray, numpy.bool, numpy.number, bool, int, float, complex] = 3.5, low_constant: Union[Array, ndarray, numpy.bool, numpy.number, bool, int, float, complex] = 0.25)

In the following, ratio refers to the ratio true_reduction / predicted_reduction.

Arguments:

- high_cutoff: the cutoff such that ratio > high_cutoff will accept the step and increase the step-size on the next iteration.
- low_cutoff: the cutoff such that ratio < low_cutoff will reject the step and decrease the step-size on the next iteration.
- high_constant: when ratio > high_cutoff, multiply the previous step-size by high_constant.
- low_constant: when ratio < low_cutoff, multiply the previous step-size by low_constant.
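The only difference from the classical variant is how predicted_reduction is computed: from a linear model (gradient only) rather than a quadratic model (gradient plus Hessian approximation). A pure-Python sketch of the two predictions, assuming the standard model-decrease formulas (an illustration, not Optimistix's implementation):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def predicted_reduction_linear(g, delta):
    """Predicted reduction from the linear model m(delta) = f + g.delta."""
    return -dot(g, delta)

def predicted_reduction_quadratic(g, hess, delta):
    """Predicted reduction from the quadratic model
    m(delta) = f + g.delta + 0.5 * delta.H.delta."""
    h_delta = [dot(row, delta) for row in hess]
    return -(dot(g, delta) + 0.5 * dot(delta, h_delta))

g = [6.0, 8.0]                    # gradient of f(x) = x0^2 + x1^2 at (3, 4)
H = [[2.0, 0.0], [0.0, 2.0]]      # exact Hessian of that f
delta = [-3.0, -4.0]              # proposed step, landing on the minimum

lin = predicted_reduction_linear(g, delta)         # 50.0: over-predicts
quad = predicted_reduction_quadratic(g, H, delta)  # 25.0: exact for a quadratic f
```

Because the linear model ignores curvature, it systematically over-predicts the reduction for long steps, which is why optimistix.ClassicalTrustRegion is preferred whenever a Hessian approximation is available.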