Function information
optimistix.FunctionInfo
Different solvers (BFGS, Levenberg--Marquardt, ...) evaluate different quantities of the objective function. Some may compute gradient information, some may provide approximate Hessian information, etc.
This enumeration-ish object captures the different variants.
Available variants are
optimistix.FunctionInfo.{Eval, EvalGrad, EvalGradHessian, EvalGradHessianInv, Residual, ResidualJac}.
as_min() -> Shaped[Array, '']
For a minimisation problem, returns f(y). For a least-squares problem, returns 0.5 * residual^T residual (half the sum of squared residuals) -- i.e. its loss as a minimisation problem.
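The least-squares case of as_min can be sketched in plain Python (the helper name here is illustrative, not part of the optimistix API; Optimistix itself operates on JAX arrays):

```python
def as_min_for_least_squares(residuals):
    # Least-squares loss viewed as a minimisation objective:
    # 0.5 * residual^T residual, i.e. half the sum of squared residuals.
    return 0.5 * sum(r * r for r in residuals)

# Residuals [3, 4] give 0.5 * (9 + 16) = 12.5.
loss = as_min_for_least_squares([3.0, 4.0])
```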
optimistix.FunctionInfo.Eval(optimistix.FunctionInfo)
Has a .f attribute describing fn(y). Used when no gradient information is
available.
__init__(f: Shaped[Array, ''])
Arguments:

- f: the scalar output of a function evaluation fn(y).
optimistix.FunctionInfo.EvalGrad(optimistix.FunctionInfo)
Has a .f attribute as with optimistix.FunctionInfo.Eval. Also has a
.grad attribute describing d(fn)/dy. Used with first-order solvers for
minimisation problems. (E.g. gradient descent; nonlinear CG.)
__init__(f: Shaped[Array, ''], grad: ~Y)
Arguments:

- f: the scalar output of a function evaluation fn(y).
- grad: the output of a gradient evaluation grad(fn)(y).
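A first-order solver only needs f and grad. As a minimal plain-Python sketch of how such a solver consumes this information (the helper name, fixed step size, and list-based parameters are simplifying assumptions; Optimistix works with arbitrary pytrees):

```python
def gradient_descent_step(y, grad, learning_rate=0.1):
    # y_new = y - learning_rate * grad, applied elementwise.
    return [yi - learning_rate * gi for yi, gi in zip(y, grad)]

# One step from [1.0, 2.0] with gradient [0.5, -0.5]: approximately [0.95, 2.05].
y_new = gradient_descent_step([1.0, 2.0], [0.5, -0.5])
```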
optimistix.FunctionInfo.EvalGradHessian(optimistix.FunctionInfo)
Has .f and .grad attributes as with optimistix.FunctionInfo.EvalGrad.
Also has a .hessian attribute describing (an approximation to) the Hessian of
fn at y. Used with quasi-Newton minimisation algorithms, like BFGS.
__init__(f: Shaped[Array, ''], grad: ~Y, hessian: lineax.AbstractLinearOperator)
Arguments:

- f: the scalar output of a function evaluation fn(y).
- grad: the output of a gradient evaluation grad(fn)(y).
- hessian: the output of a Hessian evaluation hessian(fn)(y).
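A (quasi-)Newton update solves the linear system hessian @ delta = grad and then steps against delta. A dense 2x2 sketch of that idea (in Optimistix the Hessian is a lineax.AbstractLinearOperator and the solve goes through lineax; the explicit matrix and helper name here are illustrative assumptions):

```python
def newton_step(y, grad, hessian):
    # Solve H @ delta = grad for a 2x2 system via the explicit inverse formula,
    # then take y_new = y - delta.
    (a, b), (c, d) = hessian
    det = a * d - b * c
    delta = [
        (d * grad[0] - b * grad[1]) / det,
        (a * grad[1] - c * grad[0]) / det,
    ]
    return [yi - di for yi, di in zip(y, delta)]

# For f(y) = y0^2 + y1^2, H = 2I and grad at [1, 1] is [2, 2]:
# a single Newton step lands exactly on the minimum [0, 0].
y_new = newton_step([1.0, 1.0], [2.0, 2.0], [[2.0, 0.0], [0.0, 2.0]])
```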
optimistix.FunctionInfo.EvalGradHessianInv(optimistix.FunctionInfo)
As optimistix.FunctionInfo.EvalGradHessian, but records the (approximate)
inverse-Hessian instead. Has .f and .grad and .hessian_inv attributes.
__init__(f: Shaped[Array, ''], grad: ~Y, hessian_inv: lineax.AbstractLinearOperator)
Arguments:

- f: the scalar output of a function evaluation fn(y).
- grad: the output of a gradient evaluation grad(fn)(y).
- hessian_inv: the matrix inverse of a Hessian evaluation, (hessian(fn)(y))^{-1}.
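Recording the inverse turns the Newton step into a single matrix-vector product, delta = H^{-1} @ grad, rather than a linear solve; this is what BFGS-style inverse updates exploit. A dense sketch (explicit matrices and the helper name are illustrative assumptions):

```python
def newton_step_with_inverse(y, grad, hessian_inv):
    # delta = H^{-1} @ grad: a matrix-vector product, no solve required.
    delta = [sum(h * g for h, g in zip(row, grad)) for row in hessian_inv]
    return [yi - di for yi, di in zip(y, delta)]

# With H = 2I, H^{-1} = 0.5 * I, so the step is half the gradient:
# from [1, 1] with grad [2, 2] this again lands on [0, 0].
y_new = newton_step_with_inverse([1.0, 1.0], [2.0, 2.0], [[0.5, 0.0], [0.0, 0.5]])
```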
optimistix.FunctionInfo.Residual(optimistix.FunctionInfo)
Has a .residual attribute describing fn(y). Used with least squares problems,
for which fn returns residuals.
__init__(residual: ~Out)
Arguments:

- residual: the vector output of a function evaluation fn(y). When thought of as a minimisation problem, the scalar value to minimise is 0.5 * residual^T residual.
optimistix.FunctionInfo.ResidualJac(optimistix.FunctionInfo)
Records the Jacobian d(fn)/dy as a linear operator. Used for least squares
problems, for which fn returns residuals. Has .residual and .jac attributes,
where residual = fn(y), jac = d(fn)/dy.
__init__(residual: ~Out, jac: lineax.AbstractLinearOperator)
Arguments:

- residual: the vector output of a function evaluation fn(y). When thought of as a minimisation problem, the scalar value to minimise is 0.5 * residual^T residual.
- jac: the Jacobian jac(fn)(y).
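Given residuals and their Jacobian, a Gauss--Newton step solves the normal equations (J^T J) delta = J^T r and moves against delta. A dense 2-dimensional sketch of that computation (explicit matrices and the helper name are illustrative assumptions; Optimistix represents jac as a lineax operator and solves it with lineax):

```python
def gauss_newton_step(y, residual, jac):
    # Form the normal equations (J^T J) delta = J^T r for a 2-dimensional y.
    jtj = [
        [sum(jac[k][i] * jac[k][j] for k in range(len(jac))) for j in range(2)]
        for i in range(2)
    ]
    jtr = [sum(jac[k][i] * residual[k] for k in range(len(jac))) for i in range(2)]
    # Solve the 2x2 system via the explicit inverse formula.
    (a, b), (c, d) = jtj
    det = a * d - b * c
    delta = [
        (d * jtr[0] - b * jtr[1]) / det,
        (a * jtr[1] - c * jtr[0]) / det,
    ]
    return [yi - di for yi, di in zip(y, delta)]

# For the linear residual fn(y) = y (so J = I), one Gauss-Newton step
# zeroes the residual exactly: from [3, 4] straight to [0, 0].
y_new = gauss_newton_step([3.0, 4.0], [3.0, 4.0], [[1.0, 0.0], [0.0, 1.0]])
```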