Class IpoptInterface
Defined in File ipopt-iface.hpp
Inheritance Relationships
Base Type
public Ipopt::TNLP
Class Documentation
-
class IpoptInterface : public Ipopt::TNLP
Class for interfacing a crocoddyl::ShootingProblem with IPOPT.
This class implements the pure virtual functions from Ipopt::TNLP to solve the optimal control problem in problem_ using a multiple shooting approach.
Ipopt considers its decision variables x to belong to the Euclidean space. However, Crocoddyl states could lie on a manifold. To ensure that the solution of Ipopt lies on the manifold of the state, we perform the optimization in the tangent space of a given initial state. Finally, we retract the Ipopt solution to the manifold. That is:
\[ \begin{aligned} \mathbf{x}^* = \mathbf{x}^0 \oplus \mathbf{\Delta x}^* \end{aligned} \]
where \(\mathbf{x}^*\) is the final solution, \(\mathbf{x}^0\) is the initial guess and \(\mathbf{\Delta x}^*\) is the Ipopt solution in the tangent space of \(\mathbf{x}^0\). Due to this procedure, the computation of the cost function and the dynamic constraints, as well as their corresponding derivatives, must be modified accordingly.
The Ipopt decision vector is built as follows: \(x = [ \mathbf{\Delta x}_0^\top, \mathbf{u}_0^\top, \mathbf{\Delta x}_1^\top, \mathbf{u}_1^\top, \dots, \mathbf{\Delta x}_N^\top ]\)
Dynamic constraints are posed as: \((\mathbf{x}^0_{k+1} \oplus \mathbf{\Delta x}_{k+1}) \ominus \mathbf{f}(\mathbf{x}_{k}^0 \oplus \mathbf{\Delta x}_{k}, \mathbf{u}_k) = \mathbf{0}\)
Initial condition: \( \mathbf{x}(0) \ominus (\mathbf{x}_{0}^0 \oplus \mathbf{\Delta x}_{0}) = \mathbf{0}\)
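The following is a minimal usage sketch, not part of the reference itself. It assumes a crocoddyl::ShootingProblem has already been built (the factory make_my_problem() is hypothetical), that IpoptInterface lives in the crocoddyl namespace, and that the header path matches your project layout; it drives the interface with the standard Ipopt::IpoptApplication.

```cpp
#include <memory>
#include <vector>
#include <Eigen/Core>
#include <IpIpoptApplication.hpp>

#include "ipopt-iface.hpp"  // file named above; adjust the include path to your layout

// Hypothetical factory, defined elsewhere: builds the shooting problem.
extern std::shared_ptr<crocoddyl::ShootingProblem> make_my_problem();

int main() {
  std::shared_ptr<crocoddyl::ShootingProblem> problem = make_my_problem();

  // The TNLP that maps the shooting problem onto Ipopt's Euclidean decision vector.
  Ipopt::SmartPtr<crocoddyl::IpoptInterface> nlp = new crocoddyl::IpoptInterface(problem);

  Ipopt::SmartPtr<Ipopt::IpoptApplication> app = IpoptApplicationFactory();
  app->Options()->SetNumericValue("tol", 1e-8);                                // standard Ipopt options
  app->Options()->SetStringValue("hessian_approximation", "limited-memory");

  if (app->Initialize() == Ipopt::Solve_Succeeded) {
    app->OptimizeTNLP(Ipopt::GetRawPtr(nlp));  // raw pointer converts to SmartPtr<Ipopt::TNLP>
  }

  // Trajectories stored by the interface (see get_xs()/get_us() below).
  const std::vector<Eigen::VectorXd>& xs = nlp->get_xs();
  const std::vector<Eigen::VectorXd>& us = nlp->get_us();
  (void)xs;
  (void)us;
  return 0;
}
```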
Documentation of the methods has been extracted from the Ipopt::TNLP.hpp file.
See also
get_nlp_info(), get_bounds_info(), eval_f(), eval_g(), eval_grad_f(), eval_jac_g(), eval_h()
Public Functions
IpoptInterface(const std::shared_ptr<crocoddyl::ShootingProblem> &problem)
Initialize the Ipopt interface.
- Parameters:
problem – [in] Crocoddyl shooting problem
-
virtual ~IpoptInterface()
-
virtual bool get_nlp_info(Ipopt::Index &n, Ipopt::Index &m, Ipopt::Index &nnz_jac_g, Ipopt::Index &nnz_h_lag, IndexStyleEnum &index_style)
Methods to gather information about the NLP.
Ipopt uses this information when allocating the arrays that it will later ask you to fill with values. Be careful in this method since incorrect values will cause memory bugs which may be very difficult to find.
- Parameters:
n – [out] Storage for the number of variables \(x\)
m – [out] Storage for the number of constraints \(g(x)\)
nnz_jac_g – [out] Storage for the number of nonzero entries in the Jacobian
nnz_h_lag – [out] Storage for the number of nonzero entries in the Hessian
index_style – [out] Storage for the index style, i.e., the numbering style used for row/col entries in the sparse matrix format
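For intuition only, the sketch below shows how a multiple-shooting NLP with the decision-vector layout and constraints described in the class overview could report its sizes. The horizon N and dimensions ndx, nu are hypothetical, and the nonzero counts assume dense per-node blocks; they are not this class's actual counts.

```cpp
#include <IpTNLP.hpp>

// Illustrative sizes for an NLP with layout x = [dx_0, u_0, dx_1, u_1, ..., dx_N].
bool get_nlp_info(Ipopt::Index& n, Ipopt::Index& m, Ipopt::Index& nnz_jac_g,
                  Ipopt::Index& nnz_h_lag, Ipopt::TNLP::IndexStyleEnum& index_style) {
  const Ipopt::Index N = 20, ndx = 6, nu = 3;  // hypothetical horizon and dimensions

  n = (N + 1) * ndx + N * nu;  // tangent-space state increments plus controls
  m = (N + 1) * ndx;           // N dynamics constraints + 1 initial-condition constraint

  // Assuming dense blocks: each dynamics row block couples (dx_k, u_k, dx_{k+1});
  // the initial condition couples only dx_0.
  nnz_jac_g = N * ndx * (2 * ndx + nu) + ndx * ndx;

  // Lower triangle of one dense (ndx+nu) block per running node plus a terminal
  // ndx block for the Lagrangian Hessian (a conservative, illustrative count).
  const Ipopt::Index nb = ndx + nu;
  nnz_h_lag = N * nb * (nb + 1) / 2 + ndx * (ndx + 1) / 2;

  index_style = Ipopt::TNLP::C_STYLE;  // 0-based row/column indices
  return true;
}
```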
-
virtual bool get_bounds_info(Ipopt::Index n, Ipopt::Number *x_l, Ipopt::Number *x_u, Ipopt::Index m, Ipopt::Number *g_l, Ipopt::Number *g_u)
Method to request bounds on the variables and constraints.
The values of n and m that were specified in IpoptInterface::get_nlp_info are passed here for debug checking. Setting a lower bound to a value less than or equal to the value of the option nlp_lower_bound_inf will cause Ipopt to assume no lower bound. Likewise, specifying an upper bound greater than or equal to the value of the option nlp_upper_bound_inf will cause Ipopt to assume no upper bound. These options are set to \(-10^{19}\) and \(10^{19}\), respectively, by default, but may be modified by changing these options.
- Parameters:
n – [in] Number of variables \(x\) in the problem
x_l – [out] Lower bounds \(x^L\) for the variables \(x\)
x_u – [out] Upper bounds \(x^U\) for the variables \(x\)
m – [in] Number of constraints \(g(x)\) in the problem
g_l – [out] Lower bounds \(g^L\) for the constraints \(g(x)\)
g_u – [out] Upper bounds \(g^U\) for the constraints \(g(x)\)
- Returns:
true if success, false otherwise.
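As an illustration of the bound conventions described above (a generic sketch, not this class's implementation), a TNLP could declare unbounded variables and equality constraints as follows; the ±1e19 sentinels match Ipopt's default nlp_lower_bound_inf/nlp_upper_bound_inf values.

```cpp
#include <IpTNLP.hpp>

bool get_bounds_info(Ipopt::Index n, Ipopt::Number* x_l, Ipopt::Number* x_u,
                     Ipopt::Index m, Ipopt::Number* g_l, Ipopt::Number* g_u) {
  // Decision variables are free: bounds beyond nlp_lower/upper_bound_inf
  // are treated by Ipopt as "no bound".
  for (Ipopt::Index i = 0; i < n; ++i) {
    x_l[i] = -1e19;
    x_u[i] = 1e19;
  }
  // Dynamics and initial-condition constraints are equalities g(x) = 0,
  // expressed by identical lower and upper bounds.
  for (Ipopt::Index j = 0; j < m; ++j) {
    g_l[j] = 0.0;
    g_u[j] = 0.0;
  }
  return true;
}
```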
-
virtual bool get_starting_point(Ipopt::Index n, bool init_x, Ipopt::Number *x, bool init_z, Ipopt::Number *z_L, Ipopt::Number *z_U, Ipopt::Index m, bool init_lambda, Ipopt::Number *lambda)
Method to request the starting point before iterating.
The boolean variables indicate whether the algorithm requires x, z_L/z_U, and lambda to be initialized, respectively. If, for some reason, the algorithm requires initializations that cannot be provided, false should be returned and Ipopt will stop. The default options only require initial values for the primal variables \(x\).
Note that the initial values for bound multiplier components for absent bounds ( \(x^L_i=-\infty\) or \(x^U_i=\infty\)) are ignored.
- Parameters:
n – [in] Number of variables \(x\) in the problem; it will have the same value that was specified in IpoptInterface::get_nlp_info
init_x – [in] If true, this method must provide an initial value for \(x\)
x – [out] Initial values for the primal variables \(x\)
init_z – [in] If true, this method must provide an initial value for the bound multipliers \(z^L\) and \(z^U\)
z_L – [out] Initial values for the bound multipliers \(z^L\)
z_U – [out] Initial values for the bound multipliers \(z^U\)
m – [in] Number of constraints \(g(x)\) in the problem; it will have the same value that was specified in IpoptInterface::get_nlp_info
init_lambda – [in] If true, this method must provide an initial value for the constraint multipliers \(\lambda\)
lambda – [out] Initial values for the constraint multipliers, \(\lambda\)
- Returns:
true if success, false otherwise.
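A hedged sketch of a typical starting point for this formulation: since the optimization is performed in the tangent space of the initial guess, the primal start is simply Δx = 0 (controls set illustratively to zero; a warm-start trajectory could be used instead), while multiplier initialization is declined.

```cpp
#include <IpTNLP.hpp>

bool get_starting_point(Ipopt::Index n, bool init_x, Ipopt::Number* x,
                        bool init_z, Ipopt::Number* z_L, Ipopt::Number* z_U,
                        Ipopt::Index m, bool init_lambda, Ipopt::Number* lambda) {
  // Only primal initialization is supported in this sketch.
  if (init_z || init_lambda) {
    return false;  // tell Ipopt we cannot provide multiplier initial values
  }
  if (init_x) {
    // Start in the tangent space of the initial guess: all increments are zero.
    // (Control segments could instead be filled from a warm-start trajectory.)
    for (Ipopt::Index i = 0; i < n; ++i) {
      x[i] = 0.0;
    }
  }
  (void)z_L; (void)z_U; (void)lambda; (void)m;  // unused when multipliers are not initialized
  return true;
}
```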
-
virtual bool eval_f(Ipopt::Index n, const Ipopt::Number *x, bool new_x, Ipopt::Number &obj_value)
Method to request the value of the objective function.
- Parameters:
n – [in] Number of variables \(x\) in the problem; it will have the same value that was specified in IpoptInterface::get_nlp_info
x – [in] Values for the primal variables \(x\) at which the objective function \(f(x)\) is to be evaluated
new_x – [in] False if any evaluation method (eval_*) was previously called with the same values in x, true otherwise. This can be helpful when users have efficient implementations that calculate multiple outputs at once. Ipopt internally caches results from the TNLP and, generally, this flag can be ignored.
obj_value – [out] Storage for the value of the objective function \(f(x)\)
- Returns:
true if success, false otherwise.
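The decision-vector layout from the class overview means eval_f must first split x into per-node segments before retracting the states and evaluating the cost. The sketch below only illustrates that unpacking: N, ndx and nu are hypothetical dimensions, and the quadratic cost is a placeholder for the retraction plus crocoddyl cost evaluation; it is not this class's actual implementation.

```cpp
#include <Eigen/Core>
#include <IpTNLP.hpp>

bool eval_f(Ipopt::Index n, const Ipopt::Number* x, bool new_x, Ipopt::Number& obj_value) {
  const Ipopt::Index N = 20, ndx = 6, nu = 3;  // hypothetical dimensions
  obj_value = 0.0;
  for (Ipopt::Index k = 0; k < N; ++k) {
    // Segment layout: x = [dx_0, u_0, dx_1, u_1, ..., dx_N]
    Eigen::Map<const Eigen::VectorXd> dx(x + k * (ndx + nu), ndx);      // tangent increment
    Eigen::Map<const Eigen::VectorXd> u(x + k * (ndx + nu) + ndx, nu);  // control
    // Here the state would be retracted, x_k = x_k^0 (+) dx_k, and the running
    // cost l_k(x_k, u_k) accumulated; a placeholder quadratic is used instead.
    obj_value += 0.5 * (dx.squaredNorm() + u.squaredNorm());
  }
  Eigen::Map<const Eigen::VectorXd> dxN(x + N * (ndx + nu), ndx);
  obj_value += 0.5 * dxN.squaredNorm();  // terminal-cost placeholder
  (void)n;
  (void)new_x;
  return true;
}
```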
-
virtual bool eval_grad_f(Ipopt::Index n, const Ipopt::Number *x, bool new_x, Ipopt::Number *grad_f)
Method to request the gradient of the objective function.
- Parameters:
n – [in] Number of variables \(x\) in the problem; it will have the same value that was specified in IpoptInterface::get_nlp_info
x – [in] Values for the primal variables \(x\) at which the gradient \(\nabla f(x)\) is to be evaluated
new_x – [in] False if any evaluation method (eval_*) was previously called with the same values in x, true otherwise; see also IpoptInterface::eval_f
grad_f – [out] Array to store the values of the gradient of the objective function \(\nabla f(x)\). The gradient array is in the same order as the \(x\) variables (i.e., the gradient of the objective with respect to x[2] should be put in grad_f[2]).
- Returns:
true if success, false otherwise.
-
virtual bool eval_g(Ipopt::Index n, const Ipopt::Number *x, bool new_x, Ipopt::Index m, Ipopt::Number *g)
Method to request the constraint values.
- Parameters:
n – [in] Number of variables \(x\) in the problem; it will have the same value that was specified in IpoptInterface::get_nlp_info
x – [in] Values for the primal variables \(x\) at which the constraint functions \(g(x)\) are to be evaluated
new_x – [in] False if any evaluation method (eval_*) was previously called with the same values in x, true otherwise; see also IpoptInterface::eval_f
m – [in] Number of constraints \(g(x)\) in the problem; it will have the same value that was specified in IpoptInterface::get_nlp_info
g – [out] Array to store constraint function values \(g(x)\); do not add or subtract the bound values \(g^L\) or \(g^U\).
- Returns:
true if success, false otherwise.
-
virtual bool eval_jac_g(Ipopt::Index n, const Ipopt::Number *x, bool new_x, Ipopt::Index m, Ipopt::Index nele_jac, Ipopt::Index *iRow, Ipopt::Index *jCol, Ipopt::Number *values)
Method to request either the sparsity structure or the values of the Jacobian of the constraints.
The Jacobian is the matrix of derivatives where the derivative of constraint function \(g_i\) with respect to variable \(x_j\) is placed in row \(i\) and column \(j\). See TRIPLET for a discussion of the sparse matrix format used in this method.
Note
The arrays iRow and jCol only need to be filled once. If the iRow and jCol arguments are not NULL (first call to this function), then Ipopt expects that the sparsity structure of the Jacobian (the row and column indices only) is written into iRow and jCol. At this call, the arguments x and values will be NULL. If the arguments x and values are not NULL, then Ipopt expects that the values of the Jacobian as calculated from the array x are stored in the array values (using the same order as used when specifying the sparsity structure). At this call, the arguments iRow and jCol will be NULL.
- Parameters:
n – [in] Number of variables \(x\) in the problem; it will have the same value that was specified in IpoptInterface::get_nlp_info
x – [in] First call: NULL; later calls: the values for the primal variables \(x\) at which the constraint Jacobian \(\nabla g(x)^T\) is to be evaluated
new_x – [in] False if any evaluation method (eval_*) was previously called with the same values in x, true otherwise; see also IpoptInterface::eval_f
m – [in] Number of constraints \(g(x)\) in the problem; it will have the same value that was specified in IpoptInterface::get_nlp_info
nele_jac – [in] Number of nonzero elements in the Jacobian; it will have the same value that was specified in IpoptInterface::get_nlp_info
iRow – [out] First call: array of length nele_jac to store the row indices of entries in the Jacobian of the constraints; later calls: NULL
jCol – [out] First call: array of length nele_jac to store the column indices of entries in the Jacobian of the constraints; later calls: NULL
values – [out] First call: NULL; later calls: array of length nele_jac to store the values of the entries in the Jacobian of the constraints
- Returns:
true if success, false otherwise.
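The note above describes Ipopt's two-phase triplet protocol. The following generic sketch (not this class's actual sparsity pattern) shows the structure/values split for a hypothetical dense Jacobian; the derivative expressions are placeholders.

```cpp
#include <cassert>
#include <IpTNLP.hpp>

bool eval_jac_g(Ipopt::Index n, const Ipopt::Number* x, bool new_x,
                Ipopt::Index m, Ipopt::Index nele_jac,
                Ipopt::Index* iRow, Ipopt::Index* jCol, Ipopt::Number* values) {
  if (values == NULL) {
    // First call: only the sparsity pattern (0-based row/column indices).
    Ipopt::Index idx = 0;
    for (Ipopt::Index i = 0; i < m; ++i) {
      for (Ipopt::Index j = 0; j < n; ++j) {
        iRow[idx] = i;
        jCol[idx] = j;
        ++idx;
      }
    }
    assert(idx == nele_jac);
  } else {
    // Later calls: fill the values in exactly the same order as the pattern.
    Ipopt::Index idx = 0;
    for (Ipopt::Index i = 0; i < m; ++i) {
      for (Ipopt::Index j = 0; j < n; ++j) {
        values[idx++] = /* dg_i/dx_j evaluated at x */ 0.0;  // placeholder
      }
    }
  }
  (void)x; (void)new_x;
  return true;
}
```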
-
virtual bool eval_h(Ipopt::Index n, const Ipopt::Number *x, bool new_x, Ipopt::Number obj_factor, Ipopt::Index m, const Ipopt::Number *lambda, bool new_lambda, Ipopt::Index nele_hess, Ipopt::Index *iRow, Ipopt::Index *jCol, Ipopt::Number *values)
Method to request either the sparsity structure or the values of the Hessian of the Lagrangian.
The Hessian matrix that Ipopt uses is
\[ \sigma_f \nabla^2 f(x_k) + \sum_{i=1}^m \lambda_i \nabla^2 g_i(x_k) \]
for the given values of \(x\), \(\sigma_f\), and \(\lambda\). See TRIPLET for a discussion of the sparse matrix format used in this method.
A default implementation is provided, in case the user wants to use quasi-Newton approximations to estimate the second derivatives and does not need to implement this method.
- Attention
As this matrix is symmetric, Ipopt expects that only the lower diagonal entries are specified.
Note
The arrays iRow and jCol only need to be filled once. If the iRow and jCol arguments are not NULL (first call to this function), then Ipopt expects that the sparsity structure of the Hessian (the row and column indices only) is written into iRow and jCol. At this call, the arguments x, lambda, and values will be NULL. If the arguments x, lambda, and values are not NULL, then Ipopt expects that the values of the Hessian as calculated from the arrays x and lambda are stored in the array values (using the same order as used when specifying the sparsity structure). At this call, the arguments iRow and jCol will be NULL.
- Parameters:
n – [in] Number of variables \(x\) in the problem; it will have the same value that was specified in IpoptInterface::get_nlp_info
x – [in] First call: NULL; later calls: the values for the primal variables \(x\) at which the Hessian is to be evaluated
new_x – [in] False if any evaluation method (eval_*) was previously called with the same values in x, true otherwise; see also IpoptInterface::eval_f
obj_factor – [in] Factor \(\sigma_f\) in front of the objective term in the Hessian
m – [in] Number of constraints \(g(x)\) in the problem; it will have the same value that was specified in IpoptInterface::get_nlp_info
lambda – [in] Values for the constraint multipliers \(\lambda\) at which the Hessian is to be evaluated
new_lambda – [in] False if any evaluation method was previously called with the same values in lambda, true otherwise
nele_hess – [in] Number of nonzero elements in the Hessian; it will have the same value that was specified in IpoptInterface::get_nlp_info
iRow – [out] First call: array of length nele_hess to store the row indices of entries in the Hessian; later calls: NULL
jCol – [out] First call: array of length nele_hess to store the column indices of entries in the Hessian; later calls: NULL
values – [out] First call: NULL; later calls: array of length nele_hess to store the values of the entries in the Hessian
- Returns:
true if success, false otherwise.
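To illustrate the lower-triangular convention and the obj_factor/lambda weighting described above, here is a generic dense sketch (not this class's implementation); the second-derivative expressions are placeholders.

```cpp
#include <IpTNLP.hpp>

// H = obj_factor * Hess(f) + sum_i lambda[i] * Hess(g_i), lower triangle only.
bool eval_h(Ipopt::Index n, const Ipopt::Number* x, bool new_x,
            Ipopt::Number obj_factor, Ipopt::Index m, const Ipopt::Number* lambda,
            bool new_lambda, Ipopt::Index nele_hess,
            Ipopt::Index* iRow, Ipopt::Index* jCol, Ipopt::Number* values) {
  if (values == NULL) {
    // First call: lower-triangular sparsity pattern only.
    Ipopt::Index idx = 0;
    for (Ipopt::Index row = 0; row < n; ++row) {
      for (Ipopt::Index col = 0; col <= row; ++col) {
        iRow[idx] = row;
        jCol[idx] = col;
        ++idx;
      }
    }
  } else {
    // Later calls: same ordering; weight the objective block by obj_factor and
    // add each constraint Hessian scaled by its multiplier.
    Ipopt::Index idx = 0;
    for (Ipopt::Index row = 0; row < n; ++row) {
      for (Ipopt::Index col = 0; col <= row; ++col) {
        Ipopt::Number h = obj_factor * /* d2f/(dx_row dx_col) */ 0.0;  // placeholder
        for (Ipopt::Index i = 0; i < m; ++i) {
          h += lambda[i] * /* d2g_i/(dx_row dx_col) */ 0.0;            // placeholder
        }
        values[idx++] = h;
      }
    }
  }
  (void)x; (void)new_x; (void)new_lambda; (void)nele_hess;
  return true;
}
```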
-
virtual void finalize_solution(Ipopt::SolverReturn status, Ipopt::Index n, const Ipopt::Number *x, const Ipopt::Number *z_L, const Ipopt::Number *z_U, Ipopt::Index m, const Ipopt::Number *g, const Ipopt::Number *lambda, Ipopt::Number obj_value, const Ipopt::IpoptData *ip_data, Ipopt::IpoptCalculatedQuantities *ip_cq)
This method is called when the algorithm has finished (successfully or not) so the TNLP can digest the outcome, e.g., store/write the solution, if any.
- Parameters:
status – [in] Gives the status of the algorithm:
SUCCESS: Algorithm terminated successfully at a locally optimal point, satisfying the convergence tolerances (can be specified by options).
MAXITER_EXCEEDED: Maximum number of iterations exceeded (can be specified by an option).
CPUTIME_EXCEEDED: Maximum number of CPU seconds exceeded (can be specified by an option).
STOP_AT_TINY_STEP: Algorithm proceeds with very little progress.
STOP_AT_ACCEPTABLE_POINT: Algorithm stopped at a point that was converged, not to “desired” tolerances, but to “acceptable” tolerances (see the acceptable-… options).
LOCAL_INFEASIBILITY: Algorithm converged to a point of local infeasibility. Problem may be infeasible.
USER_REQUESTED_STOP: The user call-back function IpoptInterface::intermediate_callback returned false, i.e., the user code requested a premature termination of the optimization.
DIVERGING_ITERATES: It seems that the iterates diverge.
RESTORATION_FAILURE: Restoration phase failed, algorithm doesn’t know how to proceed.
ERROR_IN_STEP_COMPUTATION: An unrecoverable error occurred while Ipopt tried to compute the search direction.
INVALID_NUMBER_DETECTED: Algorithm received an invalid number (such as NaN or Inf) from the NLP; see also option check_derivatives_for_nan_inf.
INTERNAL_ERROR: An unknown internal error occurred.
n – [in] Number of variables \(x\) in the problem; it will have the same value that was specified in IpoptInterface::get_nlp_info
x – [in] Final values for the primal variables
z_L – [in] Final values for the lower bound multipliers
z_U – [in] Final values for the upper bound multipliers
m – [in] Number of constraints \(g(x)\) in the problem; it will have the same value that was specified in IpoptInterface::get_nlp_info
g – [in] Final values of the constraint functions
lambda – [in] Final values of the constraint multipliers
obj_value – [in] Final value of the objective function
ip_data – [in] Provided for expert users
ip_cq – [in] Provided for expert users
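A small, hedged sketch of digesting the outcome (not this class's implementation): report non-success statuses and the final objective, before copying the final primal values into user storage.

```cpp
#include <iostream>
#include <IpTNLP.hpp>

void finalize_solution(Ipopt::SolverReturn status, Ipopt::Index n,
                       const Ipopt::Number* x, const Ipopt::Number* z_L,
                       const Ipopt::Number* z_U, Ipopt::Index m,
                       const Ipopt::Number* g, const Ipopt::Number* lambda,
                       Ipopt::Number obj_value, const Ipopt::IpoptData* ip_data,
                       Ipopt::IpoptCalculatedQuantities* ip_cq) {
  if (status != Ipopt::SUCCESS && status != Ipopt::STOP_AT_ACCEPTABLE_POINT) {
    std::cerr << "Ipopt did not converge (status " << status << ")" << std::endl;
  }
  // Typically the final primal values x would now be copied into the user's own
  // storage (e.g., retracted back onto the state manifold).
  std::cout << "final cost: " << obj_value << std::endl;
  (void)n; (void)x; (void)z_L; (void)z_U; (void)m; (void)g; (void)lambda;
  (void)ip_data; (void)ip_cq;
}
```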
-
bool intermediate_callback(Ipopt::AlgorithmMode mode, Ipopt::Index iter, Ipopt::Number obj_value, Ipopt::Number inf_pr, Ipopt::Number inf_du, Ipopt::Number mu, Ipopt::Number d_norm, Ipopt::Number regularization_size, Ipopt::Number alpha_du, Ipopt::Number alpha_pr, Ipopt::Index ls_trials, const Ipopt::IpoptData *ip_data, Ipopt::IpoptCalculatedQuantities *ip_cq)
Intermediate Callback method for the user.
This method is called once per iteration (during the convergence check), and can be used to obtain information about the optimization status while Ipopt solves the problem, and also to request a premature termination.
The information provided by the entities in the argument list corresponds to what Ipopt prints in the iteration summary (see also OUTPUT). Further information can be obtained from the ip_data and ip_cq objects. The current iterate and violations of feasibility and optimality can be accessed via the methods IpoptInterface::get_curr_iterate() and IpoptInterface::get_curr_violations(). These methods translate values for the internal representation of the problem from the ip_data and ip_cq objects into the TNLP representation.
It is not required to implement (overload) this method. The default implementation always returns true.
- Returns:
If this method returns false, Ipopt will terminate with the User_Requested_Stop status.
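A hedged sketch of using this hook to request early termination, for example once both infeasibility measures are small enough or after a hard iteration cap; the thresholds are arbitrary and the argument names follow the signature above.

```cpp
#include <IpTNLP.hpp>

bool intermediate_callback(Ipopt::AlgorithmMode mode, Ipopt::Index iter,
                           Ipopt::Number obj_value, Ipopt::Number inf_pr,
                           Ipopt::Number inf_du, Ipopt::Number mu,
                           Ipopt::Number d_norm, Ipopt::Number regularization_size,
                           Ipopt::Number alpha_du, Ipopt::Number alpha_pr,
                           Ipopt::Index ls_trials, const Ipopt::IpoptData* ip_data,
                           Ipopt::IpoptCalculatedQuantities* ip_cq) {
  // Returning false triggers a USER_REQUESTED_STOP termination.
  const bool converged_enough = (inf_pr < 1e-6 && inf_du < 1e-4);  // arbitrary thresholds
  const bool too_many_iters = (iter > 500);                        // arbitrary cap
  (void)mode; (void)obj_value; (void)mu; (void)d_norm; (void)regularization_size;
  (void)alpha_du; (void)alpha_pr; (void)ls_trials; (void)ip_data; (void)ip_cq;
  return !(converged_enough || too_many_iters);
}
```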
-
std::shared_ptr<IpoptInterfaceData> createData(const std::size_t nx, const std::size_t ndx, const std::size_t nu)
Create the data structure to store temporary computations.
- Returns:
the IpoptInterface Data
-
void resizeData()
-
std::size_t get_nvar() const
Return the total number of optimization variables (states and controls).
-
std::size_t get_nconst() const
Return the total number of constraints in the NLP.
-
const std::vector<Eigen::VectorXd> &get_xs() const
Return the state vector.
-
const std::vector<Eigen::VectorXd> &get_us() const
Return the control vector.
-
const std::shared_ptr<crocoddyl::ShootingProblem> &get_problem() const
Return the crocoddyl::ShootingProblem to be solved.
-
double get_cost() const
-
void set_xs(const std::vector<Eigen::VectorXd> &xs)
Modify the state vector.
-
void set_us(const std::vector<Eigen::VectorXd> &us)
Modify the control vector.