Class SolverKKT

Inheritance Relationships

Base Type

  • public crocoddyl::SolverAbstract

Class Documentation

class SolverKKT : public crocoddyl::SolverAbstract

Public Functions

EIGEN_MAKE_ALIGNED_OPERATOR_NEW
explicit SolverKKT(std::shared_ptr<ShootingProblem> problem)
virtual ~SolverKKT()
virtual bool solve(const std::vector<Eigen::VectorXd> &init_xs = DEFAULT_VECTOR, const std::vector<Eigen::VectorXd> &init_us = DEFAULT_VECTOR, const std::size_t maxiter = 100, const bool is_feasible = false, const double regInit = 1e-9)

Compute the optimal trajectory \(\mathbf{x}^*_s,\mathbf{u}^*_s\) as lists of \(T+1\) and \(T\) terms.

From an initial guess init_xs, init_us (feasible or not), the solver iterates over computeDirection() and tryStep() until stoppingCriteria() falls below a given threshold. This routine also implements the globalization strategy (i.e., the step-length selection) used during the numerical optimization.

Parameters:
  • init_xs[in] initial guess for state trajectory with \(T+1\) elements (default [])

  • init_us[in] initial guess for control trajectory with \(T\) elements (default [])

  • maxiter[in] maximum allowed number of iterations (default 100)

  • is_feasible[in] true if the init_xs are obtained from integrating the init_us (rollout) (default false)

  • regInit[in] initial guess for the regularization value. Very low values are typically used with very good guesses (default 1e-9).

Returns:

True if convergence was reached, false otherwise.
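
A minimal usage sketch follows; it assumes the shooting problem (action models, initial state, horizon \(T\)) has already been built elsewhere, and the include path follows Crocoddyl's usual layout.

    #include <memory>
    #include <vector>

    #include <Eigen/Core>
    #include <crocoddyl/core/solvers/kkt.hpp>

    // `problem` is assumed to be an already-built shooting problem;
    // constructing one is outside the scope of this sketch.
    bool runSolverKKT(std::shared_ptr<crocoddyl::ShootingProblem> problem) {
      crocoddyl::SolverKKT solver(problem);

      // Empty warm starts fall back to the solver's default initial guess.
      std::vector<Eigen::VectorXd> init_xs, init_us;

      const bool converged = solver.solve(init_xs, init_us, 100, false, 1e-9);
      // On success, the optimal trajectory is available through
      // solver.get_xs() and solver.get_us().
      return converged;
    }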

virtual void computeDirection(const bool recalc = true)

Compute the search direction \((\delta\mathbf{x}^k,\delta\mathbf{u}^k)\) for the current guess \((\mathbf{x}^k_s,\mathbf{u}^k_s)\).

You must call setCandidate() first in order to define the current guess. A current guess defines a state and control trajectory \((\mathbf{x}^k_s,\mathbf{u}^k_s)\) of \(T+1\) and \(T\) elements, respectively.

Parameters:

recalc[in] true for recalculating the derivatives at the current state and control

Returns:

The search direction \((\delta\mathbf{x},\delta\mathbf{u})\) and the dual variables \(\lambda\), stored internally as lists of \(T+1\), \(T\) and \(T+1\) elements, respectively; they can be retrieved via get_dxs(), get_dus() and get_lambdas().
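
A hedged sketch of computing a single search direction for a user-defined guess; `solver`, `xs` (\(T+1\) states) and `us` (\(T\) controls) are assumed to exist.

    // Define the current guess, then compute the KKT search direction.
    solver.setCandidate(xs, us, false);  // is_feasible = false
    solver.computeDirection(true);       // recalc = true: refresh the derivatives

    // The results are stored internally:
    const std::vector<Eigen::VectorXd>& dxs = solver.get_dxs();          // T+1 terms
    const std::vector<Eigen::VectorXd>& dus = solver.get_dus();          // T terms
    const std::vector<Eigen::VectorXd>& lambdas = solver.get_lambdas();  // T+1 terms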

virtual double tryStep(const double steplength = 1)

Try a predefined step length \(\alpha\) and compute its cost improvement \(dV\).

It uses the search direction found by computeDirection() to evaluate a given step length \(\alpha\); therefore, computeDirection() must be run first. The function returns the cost improvement \(dV\) obtained along this step length.

Parameters:

steplength[in] applied step length (\(0\leq\alpha\leq1\))

Returns:

the cost improvement
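
To illustrate how tryStep() can support a globalization strategy, here is a hedged backtracking line-search sketch; the acceptance rule (plain cost decrease) and the shrink factor 0.5 are illustrative assumptions, not the solver's internal logic.

    // Assumes computeDirection() has already been called on `solver`.
    double alpha = 1.0;
    while (alpha >= 1e-6) {
      const double dV = solver.tryStep(alpha);
      if (dV > 0.) {
        break;       // the cost improved: accept this step length
      }
      alpha *= 0.5;  // otherwise shrink the step and retry
    }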

virtual double stoppingCriteria()

Return a positive value that quantifies the algorithm termination.

This value typically represents the gradient norm, which tells us how close the current iterate is to a local minimum. Strictly speaking, the stopping criterion depends on the search direction (computed by computeDirection()), but it could also depend on the chosen step length, tested by tryStep().
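
A small sketch of using stoppingCriteria() as a convergence test; the threshold 1e-9 is an assumed value, not a library default.

    // stoppingCriteria() is only meaningful after computeDirection().
    solver.computeDirection();
    if (solver.stoppingCriteria() < 1e-9) {
      // The gradient norm is numerically zero: a local minimum was reached.
    }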

virtual const Eigen::Vector2d &expectedImprovement()

Return the expected improvement \(dV_{exp}\) from the current search direction \((\delta\mathbf{x}^k,\delta\mathbf{u}^k)\).

For computing the expected improvement, you need to compute the search direction first via computeDirection().
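
The sketch below compares the expected and the actual improvement for a full step. The quadratic model \(dV_{exp} = \alpha\,(d_0 + \frac{\alpha}{2} d_1)\) follows the convention used by Crocoddyl's DDP-style solvers and is assumed to apply here as well.

    solver.computeDirection();
    const Eigen::Vector2d& d = solver.expectedImprovement();

    const double alpha = 1.0;
    const double dV_exp = alpha * (d[0] + 0.5 * alpha * d[1]);  // model prediction
    const double dV = solver.tryStep(alpha);                    // actual improvement
    // A ratio dV / dV_exp close to 1 means the local quadratic model is accurate.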

const Eigen::MatrixXd &get_kkt() const
const Eigen::VectorXd &get_kktref() const
const Eigen::VectorXd &get_primaldual() const
const std::vector<Eigen::VectorXd> &get_dxs() const
const std::vector<Eigen::VectorXd> &get_dus() const
const std::vector<Eigen::VectorXd> &get_lambdas() const
std::size_t get_nx() const
std::size_t get_ndx() const
std::size_t get_nu() const
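
For inspecting the assembled KKT system, the hedged sketch below assumes that get_ndx() and get_nu() return the state and control dimensions stacked over the whole horizon, so that the primal block has size ndx + nu and the dual block size ndx.

    const std::size_t ndx = solver.get_ndx();
    const std::size_t nu = solver.get_nu();
    const Eigen::MatrixXd& kkt = solver.get_kkt();

    // Under the assumption above, the KKT matrix is square with
    // 2*ndx + nu rows: primal variables (ndx + nu) plus multipliers (ndx).
    assert(kkt.rows() == static_cast<Eigen::Index>(2 * ndx + nu));
    assert(kkt.cols() == kkt.rows());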

Protected Attributes

double reg_incfactor_
double reg_decfactor_
double reg_min_
double reg_max_
double cost_try_
std::vector<Eigen::VectorXd> xs_try_
std::vector<Eigen::VectorXd> us_try_