Template Class ActionModelLQRTpl
Defined in File lqr.hpp
Inheritance Relationships
Base Type
public crocoddyl::ActionModelAbstractTpl<_Scalar> (Template Class ActionModelAbstractTpl)
Class Documentation
-
template<typename _Scalar>
class ActionModelLQRTpl : public crocoddyl::ActionModelAbstractTpl<_Scalar>

Linear-quadratic regulator (LQR) action model.
A linear-quadratic regulator (LQR) action has a transition model of the form

\[ \mathbf{x}' = \mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u} + \mathbf{f}. \]

Its cost function is quadratic, of the form

\[ \ell(\mathbf{x},\mathbf{u}) = \begin{bmatrix} 1 \\ \mathbf{x} \\ \mathbf{u} \end{bmatrix}^T \begin{bmatrix} 0 & \mathbf{q}^T & \mathbf{r}^T \\ \mathbf{q} & \mathbf{Q} & \mathbf{N}^T \\ \mathbf{r} & \mathbf{N} & \mathbf{R} \end{bmatrix} \begin{bmatrix} 1 \\ \mathbf{x} \\ \mathbf{u} \end{bmatrix}, \]

and its linear inequality and equality constraints have the form

\[ \mathbf{g}(\mathbf{x},\mathbf{u}) = \mathbf{G} \begin{bmatrix} \mathbf{x} \\ \mathbf{u} \end{bmatrix} + \mathbf{g} \leq \mathbf{0}, \qquad \mathbf{h}(\mathbf{x},\mathbf{u}) = \mathbf{H} \begin{bmatrix} \mathbf{x} \\ \mathbf{u} \end{bmatrix} + \mathbf{h} \]

Public Types
-
typedef ActionDataAbstractTpl<Scalar> ActionDataAbstract
-
typedef ActionModelAbstractTpl<Scalar> Base
-
typedef ActionDataLQRTpl<Scalar> Data
-
typedef StateVectorTpl<Scalar> StateVector
-
typedef MathBaseTpl<Scalar> MathBase
Public Functions
-
ActionModelLQRTpl(const MatrixXs &A, const MatrixXs &B, const MatrixXs &Q, const MatrixXs &R, const MatrixXs &N)
Initialize the LQR action model.
- Parameters:
A – [in] State matrix
B – [in] Input matrix
Q – [in] State weight matrix
R – [in] Input weight matrix
N – [in] State-input weight matrix
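A minimal construction sketch follows. It assumes the double-precision alias crocoddyl::ActionModelLQR for ActionModelLQRTpl<double>, and the matrix values are illustrative placeholders, not values prescribed by this API:

    #include <Eigen/Dense>

    #include "crocoddyl/core/actions/lqr.hpp"

    int main() {
      const std::size_t nx = 4;  // state dimension
      const std::size_t nu = 2;  // control dimension
      // Dynamics x' = A x + B u and quadratic weights; placeholder values only.
      Eigen::MatrixXd A = Eigen::MatrixXd::Identity(nx, nx);
      Eigen::MatrixXd B = Eigen::MatrixXd::Ones(nx, nu);
      Eigen::MatrixXd Q = Eigen::MatrixXd::Identity(nx, nx);  // state weight
      Eigen::MatrixXd R = Eigen::MatrixXd::Identity(nu, nu);  // input weight
      Eigen::MatrixXd N = Eigen::MatrixXd::Zero(nx, nu);      // state-input weight
      crocoddyl::ActionModelLQR lqr(A, B, Q, R, N);
      return 0;
    }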
-
ActionModelLQRTpl(const MatrixXs &A, const MatrixXs &B, const MatrixXs &Q, const MatrixXs &R, const MatrixXs &N, const VectorXs &f, const VectorXs &q, const VectorXs &r)
Initialize the LQR action model.
- Parameters:
A – [in] State matrix
B – [in] Input matrix
Q – [in] State weight matrix
R – [in] Input weight matrix
N – [in] State-input weight matrix
f – [in] Dynamics drift
q – [in] State weight vector
r – [in] Input weight vector
-
ActionModelLQRTpl(const MatrixXs &A, const MatrixXs &B, const MatrixXs &Q, const MatrixXs &R, const MatrixXs &N, const MatrixXs &G, const MatrixXs &H, const VectorXs &f, const VectorXs &q, const VectorXs &r, const VectorXs &g, const VectorXs &h)
Initialize the LQR action model.
- Parameters:
A – [in] State matrix
B – [in] Input matrix
Q – [in] State weight matrix
R – [in] Input weight matrix
N – [in] State-input weight matrix
G – [in] State-input inequality constraint matrix
H – [in] State-input equality constraint matrix
f – [in] Dynamics drift
q – [in] State weight vector
r – [in] Input weight vector
g – [in] State-input inequality constraint bias
h – [in] State-input equality constraint bias
-
ActionModelLQRTpl(const std::size_t nx, const std::size_t nu, const bool drift_free = true)
Initialize the LQR action model.
- Parameters:
nx – [in] Dimension of state vector
nu – [in] Dimension of control vector
drift_free – [in] Enable / disable the bias term of the linear dynamics (default true)
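For instance, as a sketch (the internally generated matrices are implementation defaults; crocoddyl::ActionModelLQR is assumed to alias the double-precision instantiation):

    // 4-dimensional state, 2-dimensional control, bias term disabled
    crocoddyl::ActionModelLQR simple_lqr(4, 2, true);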
-
ActionModelLQRTpl(const ActionModelLQRTpl &copy)
Copy constructor.
-
virtual ~ActionModelLQRTpl() = default
-
virtual void calc(const std::shared_ptr<ActionDataAbstract> &data, const Eigen::Ref<const VectorXs> &x, const Eigen::Ref<const VectorXs> &u) override
Compute the next state and cost value.
- Parameters:
data – [in] Action data
x – [in] State point \(\mathbf{x}\in\mathbb{R}^{ndx}\)
u – [in] Control input \(\mathbf{u}\in\mathbb{R}^{nu}\)
-
virtual void calc(const std::shared_ptr<ActionDataAbstract> &data, const Eigen::Ref<const VectorXs> &x) override
Compute the total cost value for nodes that depend only on the state.
It updates the total cost; the next state is not computed, as it is not expected to change. This function is used in the terminal nodes of an optimal control problem.
- Parameters:
data – [in] Action data
x – [in] State point \(\mathbf{x}\in\mathbb{R}^{ndx}\)
-
virtual void calcDiff(const std::shared_ptr<ActionDataAbstract> &data, const Eigen::Ref<const VectorXs> &x, const Eigen::Ref<const VectorXs> &u) override
Compute the derivatives of the dynamics and cost functions.
It computes the partial derivatives of the dynamical system and the cost function. It assumes that calc() has been run first. This function builds a linear-quadratic approximation of the action model (i.e. dynamical system and cost function).
- Parameters:
data – [in] Action data
x – [in] State point \(\mathbf{x}\in\mathbb{R}^{ndx}\)
u – [in] Control input \(\mathbf{u}\in\mathbb{R}^{nu}\)
-
virtual void calcDiff(const std::shared_ptr<ActionDataAbstract> &data, const Eigen::Ref<const VectorXs> &x) override
Compute the derivatives of the cost functions with respect to the state only.
It updates the derivatives of the cost function with respect to the state only. This function is used in the terminal nodes of an optimal control problem.
- Parameters:
data – [in] Action data
x – [in] State point \(\mathbf{x}\in\mathbb{R}^{ndx}\)
-
virtual std::shared_ptr<ActionDataAbstract> createData() override
Create the action data.
- Returns:
the action data
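Continuing the construction sketch above, a hedged example of running the model through createData(), calc(), and calcDiff(); the accessors get_state() and get_nu() and the data members (xnext, cost, Fx, Fu, Lx, Lu) are inherited from the abstract action model and data classes:

    std::shared_ptr<crocoddyl::ActionDataAbstract> data = lqr.createData();
    Eigen::VectorXd x = lqr.get_state()->zero();              // zero state point
    Eigen::VectorXd u = Eigen::VectorXd::Zero(lqr.get_nu());  // zero control input
    lqr.calc(data, x, u);      // fills data->xnext and data->cost
    lqr.calcDiff(data, x, u);  // fills data->Fx, data->Fu, data->Lx, data->Lu, ...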
-
template<typename NewScalar>
ActionModelLQRTpl<NewScalar> cast() const
Cast the LQR model to a different scalar type.
It is useful for operations requiring different precision or scalar types.
- Template Parameters:
NewScalar – The new scalar type to cast to.
- Returns:
ActionModelLQRTpl<NewScalar> An action model with the new scalar type.
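For example, a sketch producing a single-precision copy of the double-precision model built above:

    // Useful, e.g., when running parts of a solver in float for speed
    crocoddyl::ActionModelLQRTpl<float> lqr_f = lqr.cast<float>();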
-
virtual bool checkData(const std::shared_ptr<ActionDataAbstract> &data) override
Checks that a specific data belongs to this model.
-
void set_LQR(const MatrixXs &A, const MatrixXs &B, const MatrixXs &Q, const MatrixXs &R, const MatrixXs &N, const MatrixXs &G, const MatrixXs &H, const VectorXs &f, const VectorXs &q, const VectorXs &r, const VectorXs &g, const VectorXs &h)
Modify the LQR action model.
- Parameters:
A – [in] State matrix
B – [in] Input matrix
Q – [in] State weight matrix
R – [in] Input weight matrix
N – [in] State-input weight matrix
G – [in] State-input inequality constraint matrix
H – [in] State-input equality constraint matrix
f – [in] Dynamics drift
q – [in] State weight vector
r – [in] Input weight vector
g – [in] State-input inequality constraint bias
h – [in] State-input equality constraint bias
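As a sketch of updating the model built in the earlier example in place (reusing A, B, Q, R, N, nx, and nu from there; zero-row G and H keep the constraint dimensions at zero, matching a model constructed without constraints):

    Eigen::MatrixXd G = Eigen::MatrixXd::Zero(0, nx + nu);  // no inequality constraints
    Eigen::MatrixXd H = Eigen::MatrixXd::Zero(0, nx + nu);  // no equality constraints
    Eigen::VectorXd f = Eigen::VectorXd::Zero(nx);          // dynamics drift
    Eigen::VectorXd q = Eigen::VectorXd::Zero(nx);          // state weight vector
    Eigen::VectorXd r = Eigen::VectorXd::Zero(nu);          // input weight vector
    Eigen::VectorXd g = Eigen::VectorXd::Zero(0);           // inequality bias (empty)
    Eigen::VectorXd h = Eigen::VectorXd::Zero(0);           // equality bias (empty)
    lqr.set_LQR(A, B, Q, R, N, G, H, f, q, r, g, h);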
- DEPRECATED("Use get_A", const MatrixXs &get_Fx() const { return get_A(); })
- DEPRECATED("Use get_B", const MatrixXs &get_Fu() const { return get_B(); })
- DEPRECATED("Use get_f", const VectorXs &get_f0() const { return get_f(); })
- DEPRECATED("Use get_q", const VectorXs &get_lx() const { return get_q(); })
- DEPRECATED("Use get_r", const VectorXs &get_lu() const { return get_r(); })
- DEPRECATED("Use get_Q", const MatrixXs &get_Lxx() const { return get_Q(); })
- DEPRECATED("Use get_R", const MatrixXs &get_Luu() const { return get_R(); })
- DEPRECATED("Use get_N", const MatrixXs &get_Lxu() const { return get_N(); })
- DEPRECATED("Use set_LQR", void set_Fx(const MatrixXs &A) { set_LQR(A, B_, Q_, R_, N_, G_, H_, f_, q_, r_, g_, h_); })
- DEPRECATED("Use set_LQR", void set_Fu(const MatrixXs &B) { set_LQR(A_, B, Q_, R_, N_, G_, H_, f_, q_, r_, g_, h_); })
- DEPRECATED("Use set_LQR", void set_f0(const VectorXs &f) { set_LQR(A_, B_, Q_, R_, N_, G_, H_, f, q_, r_, g_, h_); })
- DEPRECATED("Use set_LQR", void set_lx(const VectorXs &q) { set_LQR(A_, B_, Q_, R_, N_, G_, H_, f_, q, r_, g_, h_); })
- DEPRECATED("Use set_LQR", void set_lu(const VectorXs &r) { set_LQR(A_, B_, Q_, R_, N_, G_, H_, f_, q_, r, g_, h_); })
- DEPRECATED("Use set_LQR", void set_Lxx(const MatrixXs &Q) { set_LQR(A_, B_, Q, R_, N_, G_, H_, f_, q_, r_, g_, h_); })
- DEPRECATED("Use set_LQR", void set_Luu(const MatrixXs &R) { set_LQR(A_, B_, Q_, R, N_, G_, H_, f_, q_, r_, g_, h_); })
- DEPRECATED("Use set_LQR", void set_Lxu(const MatrixXs &N) { set_LQR(A_, B_, Q_, R_, N, G_, H_, f_, q_, r_, g_, h_); })
-
virtual void print(std::ostream &os) const override
Print relevant information of the LQR model.
- Parameters:
os – [out] Output stream object
Public Members
-
EIGEN_MAKE_ALIGNED_OPERATOR_NEW
-
typedef _Scalar Scalar
Public Static Functions
-
static ActionModelLQRTpl Random(const std::size_t nx, const std::size_t nu, const std::size_t ng = 0, const std::size_t nh = 0)
Create a random LQR model.
- Parameters:
nx – [in] State dimension
nu – [in] Control dimension
ng – [in] Inequality constraint dimension (default 0)
nh – [in] Equality constraint dimension (default 0)
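For example, a sketch drawing a random constrained LQR problem (again using the assumed crocoddyl::ActionModelLQR alias):

    // 8 states, 3 controls, 2 inequality and 1 equality constraint
    crocoddyl::ActionModelLQR rnd = crocoddyl::ActionModelLQR::Random(8, 3, 2, 1);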
Protected Attributes
-
std::size_t ng_
Number of inequality constraints.
-
std::size_t nh_
Number of equality constraints.
-
std::size_t nu_
Control dimension.
-
std::shared_ptr<StateAbstract> state_
Model of the state.