Template Class ActionModelLQRTpl

Inheritance Relationships

Base Type

Class Documentation

template<typename _Scalar>
class ActionModelLQRTpl : public crocoddyl::ActionModelAbstractTpl<_Scalar>

Linear-quadratic regulator (LQR) action model.

A linear-quadratic regulator (LQR) action has a transition model of the form

\[ \begin{equation} \mathbf{x}' = \mathbf{A x + B u + f}. \end{equation} \]
Its cost function has the quadratic form:
\[\begin{split} \begin{equation} \ell(\mathbf{x},\mathbf{u}) = \begin{bmatrix}1 \\ \mathbf{x} \\ \mathbf{u}\end{bmatrix}^T \begin{bmatrix}0 & \mathbf{q}^T & \mathbf{r}^T \\ \mathbf{q} & \mathbf{Q} & \mathbf{N}^T \\ \mathbf{r} & \mathbf{N} & \mathbf{R}\end{bmatrix} \begin{bmatrix}1 \\ \mathbf{x} \\ \mathbf{u}\end{bmatrix} \end{equation} \end{split}\]
and its linear inequality and equality constraints have the form:
\[\begin{split} \begin{aligned} \mathbf{g(x,u)} = \mathbf{G}\begin{bmatrix} \mathbf{x} \\ \mathbf{u} \end{bmatrix} + \mathbf{g} \leq \mathbf{0}, \qquad \mathbf{h(x,u)} = \mathbf{H}\begin{bmatrix} \mathbf{x} \\ \mathbf{u} \end{bmatrix} + \mathbf{h} = \mathbf{0}. \end{aligned} \end{split}\]

Public Types

typedef ActionDataAbstractTpl<Scalar> ActionDataAbstract
typedef ActionModelAbstractTpl<Scalar> Base
typedef ActionDataLQRTpl<Scalar> Data
typedef StateVectorTpl<Scalar> StateVector
typedef MathBaseTpl<Scalar> MathBase
typedef MathBase::VectorXs VectorXs
typedef MathBase::MatrixXs MatrixXs

Public Functions

ActionModelLQRTpl(const MatrixXs &A, const MatrixXs &B, const MatrixXs &Q, const MatrixXs &R, const MatrixXs &N)

Initialize the LQR action model.

Parameters:
  • A[in] State matrix

  • B[in] Input matrix

  • Q[in] State weight matrix

  • R[in] Input weight matrix

  • N[in] State-input weight matrix

ActionModelLQRTpl(const MatrixXs &A, const MatrixXs &B, const MatrixXs &Q, const MatrixXs &R, const MatrixXs &N, const VectorXs &f, const VectorXs &q, const VectorXs &r)

Initialize the LQR action model.

Parameters:
  • A[in] State matrix

  • B[in] Input matrix

  • Q[in] State weight matrix

  • R[in] Input weight matrix

  • N[in] State-input weight matrix

  • f[in] Dynamics drift

  • q[in] State weight vector

  • r[in] Input weight vector

ActionModelLQRTpl(const MatrixXs &A, const MatrixXs &B, const MatrixXs &Q, const MatrixXs &R, const MatrixXs &N, const MatrixXs &G, const MatrixXs &H, const VectorXs &f, const VectorXs &q, const VectorXs &r, const VectorXs &g, const VectorXs &h)

Initialize the LQR action model.

Parameters:
  • A[in] State matrix

  • B[in] Input matrix

  • Q[in] State weight matrix

  • R[in] Input weight matrix

  • N[in] State-input weight matrix

  • G[in] State-input inequality constraint matrix

  • H[in] State-input equality constraint matrix

  • f[in] Dynamics drift

  • q[in] State weight vector

  • r[in] Input weight vector

  • g[in] State-input inequality constraint bias

  • h[in] State-input equality constraint bias

ActionModelLQRTpl(const std::size_t nx, const std::size_t nu, const bool drift_free = true)

Initialize the LQR action model.

Parameters:
  • nx[in] Dimension of state vector

  • nu[in] Dimension of control vector

  • drift_free[in] Enable/disable the bias term of the linear dynamics (default true)

ActionModelLQRTpl(const ActionModelLQRTpl &copy)

Copy constructor.

virtual ~ActionModelLQRTpl() = default
virtual void calc(const std::shared_ptr<ActionDataAbstract> &data, const Eigen::Ref<const VectorXs> &x, const Eigen::Ref<const VectorXs> &u) override

Compute the next state and cost value.

Parameters:
  • data[in] Action data

  • x[in] State point \(\mathbf{x}\in\mathbb{R}^{ndx}\)

  • u[in] Control input \(\mathbf{u}\in\mathbb{R}^{nu}\)

virtual void calc(const std::shared_ptr<ActionDataAbstract> &data, const Eigen::Ref<const VectorXs> &x) override

Compute the total cost value for nodes that depend only on the state.

It updates the total cost; the next state is not computed, as it is not expected to change. This function is used in the terminal nodes of an optimal control problem.

Parameters:
  • data[in] Action data

  • x[in] State point \(\mathbf{x}\in\mathbb{R}^{ndx}\)

virtual void calcDiff(const std::shared_ptr<ActionDataAbstract> &data, const Eigen::Ref<const VectorXs> &x, const Eigen::Ref<const VectorXs> &u) override

Compute the derivatives of the dynamics and cost functions.

It computes the partial derivatives of the dynamical system and the cost function. It assumes that calc() has been run first. This function builds a linear-quadratic approximation of the action model (i.e. dynamical system and cost function).

Parameters:
  • data[in] Action data

  • x[in] State point \(\mathbf{x}\in\mathbb{R}^{ndx}\)

  • u[in] Control input \(\mathbf{u}\in\mathbb{R}^{nu}\)
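
For the LQR model these derivatives are available in closed form. Assuming the customary \(\tfrac{1}{2}\) factor in the quadratic cost (consistent with the deprecated get_Lxx and get_Luu accessors, which return \(\mathbf{Q}\) and \(\mathbf{R}\) directly), they are:

\[\begin{split} \begin{aligned} \mathbf{f_x} &= \mathbf{A}, \quad \mathbf{f_u} = \mathbf{B}, \quad \ell_{\mathbf{xx}} = \mathbf{Q}, \quad \ell_{\mathbf{uu}} = \mathbf{R}, \quad \ell_{\mathbf{ux}} = \mathbf{N}, \\ \ell_{\mathbf{x}} &= \mathbf{Q}\mathbf{x} + \mathbf{N}^T\mathbf{u} + \mathbf{q}, \quad \ell_{\mathbf{u}} = \mathbf{R}\mathbf{u} + \mathbf{N}\mathbf{x} + \mathbf{r}. \end{aligned} \end{split}\]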

virtual void calcDiff(const std::shared_ptr<ActionDataAbstract> &data, const Eigen::Ref<const VectorXs> &x) override

Compute the derivatives of the cost functions with respect to the state only.

It updates the derivatives of the cost function with respect to the state only. This function is used in the terminal nodes of an optimal control problem.

Parameters:
  • data[in] Action data

  • x[in] State point \(\mathbf{x}\in\mathbb{R}^{ndx}\)

virtual std::shared_ptr<ActionDataAbstract> createData() override

Create the action data.

Returns:

the action data

template<typename NewScalar>
ActionModelLQRTpl<NewScalar> cast() const

Cast the LQR model to a different scalar type.

It is useful for operations requiring different precision or scalar types.

Template Parameters:

NewScalar – The new scalar type to cast to.

Returns:

ActionModelLQRTpl<NewScalar> An action model with the new scalar type.

virtual bool checkData(const std::shared_ptr<ActionDataAbstract> &data) override

Check whether the given action data belongs to this model.

const MatrixXs &get_A() const

Return the state matrix.

const MatrixXs &get_B() const

Return the input matrix.

const VectorXs &get_f() const

Return the dynamics drift.

const MatrixXs &get_Q() const

Return the state weight matrix.

const MatrixXs &get_R() const

Return the input weight matrix.

const MatrixXs &get_N() const

Return the state-input weight matrix.

const MatrixXs &get_G() const

Return the state-input inequality constraint matrix.

const MatrixXs &get_H() const

Return the state-input equality constraint matrix.

const VectorXs &get_q() const

Return the state weight vector.

const VectorXs &get_r() const

Return the input weight vector.

const VectorXs &get_g() const

Return the state-input inequality constraint bias.

const VectorXs &get_h() const

Return the state-input equality constraint bias.

void set_LQR(const MatrixXs &A, const MatrixXs &B, const MatrixXs &Q, const MatrixXs &R, const MatrixXs &N, const MatrixXs &G, const MatrixXs &H, const VectorXs &f, const VectorXs &q, const VectorXs &r, const VectorXs &g, const VectorXs &h)

Modify the LQR action model.

Parameters:
  • A[in] State matrix

  • B[in] Input matrix

  • Q[in] State weight matrix

  • R[in] Input weight matrix

  • N[in] State-input weight matrix

  • G[in] State-input inequality constraint matrix

  • H[in] State-input equality constraint matrix

  • f[in] Dynamics drift

  • q[in] State weight vector

  • r[in] Input weight vector

  • g[in] State-input inequality constraint bias

  • h[in] State-input equality constraint bias

The following accessors are deprecated; use the indicated replacements:

  • get_Fx(): use get_A()

  • get_Fu(): use get_B()

  • get_f0(): use get_f()

  • get_lx(): use get_q()

  • get_lu(): use get_r()

  • get_Lxx(): use get_Q()

  • get_Luu(): use get_R()

  • get_Lxu(): use get_N()

  • set_Fx(), set_Fu(), set_f0(), set_lx(), set_lu(), set_Lxx(), set_Luu(), set_Lxu(): use set_LQR()

virtual void print(std::ostream &os) const override

Print relevant information of the LQR model.

Parameters:

os[out] Output stream object

Public Members

EIGEN_MAKE_ALIGNED_OPERATOR_NEW

typedef _Scalar Scalar

Public Static Functions

static ActionModelLQRTpl Random(const std::size_t nx, const std::size_t nu, const std::size_t ng = 0, const std::size_t nh = 0)

Create a random LQR model.

Parameters:
  • nx[in] State dimension

  • nu[in] Control dimension

  • ng[in] Inequality constraint dimension (default 0)

  • nh[in] Equality constraint dimension (default 0)

Protected Attributes

std::size_t ng_

Inequality constraint dimension.

std::size_t nh_

Equality constraint dimension.

std::size_t nu_

Control dimension.

std::shared_ptr<StateAbstract> state_

Model of the state.