The Induced Dimension Reduction method (IDR(s)) is a short-recurrences Krylov method for sparse square problems. More...
#include <IDRS.h>
Public Types | |
typedef _MatrixType | MatrixType |
typedef _Preconditioner | Preconditioner |
typedef MatrixType::RealScalar | RealScalar |
typedef MatrixType::Scalar | Scalar |
Public Member Functions | |
template<typename Rhs , typename Dest > | |
void | _solve_vector_with_guess_impl (const Rhs &b, Dest &x) const |
IDRS () | |
template<typename MatrixDerived > | |
IDRS (const EigenBase< MatrixDerived > &A) | |
void | setAngle (RealScalar angle) |
void | setResidualUpdate (bool update) |
void | setS (Index S) |
void | setSmoothing (bool smoothing) |
Private Types | |
typedef IterativeSolverBase< IDRS > | Base |
Private Member Functions | |
const ActualMatrixType & | matrix () const |
Private Attributes | |
RealScalar | m_angle |
RealScalar | m_error |
ComputationInfo | m_info |
bool | m_isInitialized |
Index | m_iterations |
bool | m_residual |
Index | m_S |
bool | m_smoothing |
The Induced Dimension Reduction method (IDR(s)) is a short-recurrences Krylov method for sparse square problems.
This class allows solving sparse linear problems of the form A.x = b. The vectors x and b can be either dense or sparse. The Induced Dimension Reduction method, IDR(s), is a robust and efficient short-recurrence Krylov subspace method for solving large nonsymmetric systems of linear equations.
For indefinite systems IDR(s) outperforms both BiCGStab and BiCGStab(L). Additionally, IDR(s) can handle matrices with complex eigenvalues more efficiently than BiCGStab.
Many problems that do not converge with BiCGSTAB converge with IDR(s) (for larger values of s), and when both methods converge, IDR(s) typically converges much faster on difficult systems (for example, indefinite problems).
IDR(s) is a limited-memory, finite-termination method. In exact arithmetic it converges in at most N + N/s iterations, with N the system size. It uses a fixed number of 4 + 3s vectors. In comparison, BiCGSTAB terminates in 2N iterations and uses 7 vectors, while GMRES terminates in at most N iterations and uses I + 3 vectors, with I the number of iterations. Restarting GMRES limits the memory consumption but destroys the finite-termination property.
Template Parameters
_MatrixType | the type of the matrix A; it can be a dense or a sparse matrix |
_Preconditioner | the type of the preconditioner. Default is DiagonalPreconditioner |
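For instance, a different preconditioner can be selected via the second template argument. The following sketch is illustrative only; the include of the unsupported IterativeSolvers module is an assumption about where the IDRS header lives, and the alias name is hypothetical:

#include <Eigen/Sparse>
#include <unsupported/Eigen/IterativeSolvers>  // assumed location of the IDRS header

// Illustrative alias: IDR(s) combined with an incomplete-LU preconditioner
// instead of the default DiagonalPreconditioner.
typedef Eigen::IDRS<Eigen::SparseMatrix<double>, Eigen::IncompleteLUT<double> > IDRSWithILUT;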
This class follows the sparse solver concept.
The maximal number of iterations and tolerance value can be controlled via the setMaxIterations() and setTolerance() methods. The defaults are the size of the problem for the maximal number of iterations and NumTraits<Scalar>::epsilon() for the tolerance.
The tolerance corresponds to the relative residual error: |Ax-b|/|b|
Performance: when using sparse matrices, best performance is achieved for a row-major sparse matrix format. Moreover, in this case multi-threading can be exploited if the user code is compiled with OpenMP enabled. See Eigen and multi-threading for details.
By default the iterations start with x=0 as an initial guess of the solution. One can control the start using the solveWithGuess() method.
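A minimal usage sketch is shown below; the include path, the toy problem (A = I), and the chosen parameter values are assumptions made for illustration, not defaults of this class:

#include <iostream>
#include <Eigen/Sparse>
#include <unsupported/Eigen/IterativeSolvers>  // assumed location of the IDRS header

int main()
{
  const int n = 1000;
  // Row-major storage gives the best sparse matrix-vector performance (see above).
  Eigen::SparseMatrix<double, Eigen::RowMajor> A(n, n);
  A.setIdentity();                                   // placeholder problem: A = I
  Eigen::VectorXd b = Eigen::VectorXd::Ones(n);

  Eigen::IDRS<Eigen::SparseMatrix<double, Eigen::RowMajor> > idrs;
  idrs.setMaxIterations(500);                        // default would be the problem size
  idrs.setTolerance(1e-10);                          // relative residual |Ax-b|/|b|
  idrs.compute(A);

  Eigen::VectorXd x  = idrs.solve(b);                // iterations start from x = 0
  Eigen::VectorXd x2 = idrs.solveWithGuess(b, x);    // or from a user-provided guess
  std::cout << "#iterations: " << idrs.iterations()
            << ", estimated error: " << idrs.error() << std::endl;
  return 0;
}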
IDR(s) can also be used in a matrix-free context, see the following example.
typedef IterativeSolverBase< IDRS > Eigen::IDRS< _MatrixType, _Preconditioner >::Base [private]
typedef _MatrixType Eigen::IDRS< _MatrixType, _Preconditioner >::MatrixType
typedef _Preconditioner Eigen::IDRS< _MatrixType, _Preconditioner >::Preconditioner
typedef MatrixType::RealScalar Eigen::IDRS< _MatrixType, _Preconditioner >::RealScalar
typedef MatrixType::Scalar Eigen::IDRS< _MatrixType, _Preconditioner >::Scalar
Eigen::IDRS< _MatrixType, _Preconditioner >::IDRS ( ) [inline]
Default constructor.
template<typename MatrixDerived >
Eigen::IDRS< _MatrixType, _Preconditioner >::IDRS ( const EigenBase< MatrixDerived > & A ) [inline, explicit]
Initialize the solver with matrix A for further Ax=b solving.
This constructor is a shortcut for the default constructor followed by a call to compute().
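For illustration, the following two ways of setting up the solver are equivalent (the function name, the sparse double matrix type and the include path are assumptions made for this sketch):

#include <Eigen/Sparse>
#include <unsupported/Eigen/IterativeSolvers>  // assumed location of the IDRS header

Eigen::VectorXd solve_with_idrs(const Eigen::SparseMatrix<double>& A,
                                const Eigen::VectorXd& b)
{
  // Shortcut: construct the solver and call compute(A) in one step.
  Eigen::IDRS<Eigen::SparseMatrix<double> > solver(A);
  // Equivalent long form:
  //   Eigen::IDRS<Eigen::SparseMatrix<double> > solver;
  //   solver.compute(A);
  return solver.solve(b);
}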
template<typename Rhs , typename Dest >
void Eigen::IDRS< _MatrixType, _Preconditioner >::_solve_vector_with_guess_impl ( const Rhs & b, Dest & x ) const [inline]
const ActualMatrixType & Eigen::IDRS< _MatrixType, _Preconditioner >::matrix ( ) const [inline, private]
Definition at line 419 of file IterativeSolverBase.h.
void Eigen::IDRS< _MatrixType, _Preconditioner >::setAngle ( RealScalar angle ) [inline]
The angle must be a real scalar. In IDR(s), a value for the iteration parameter omega must be chosen in every (s+1)-th step. The most natural choice is to select a value that minimizes the norm of the next residual; this corresponds to the parameter angle = 0. In practice, this may lead to values of omega that are so small that the other iteration parameters cannot be computed with sufficient accuracy. In such cases it is better to increase the value of omega sufficiently such that a compromise is reached between accurate computations and reduction of the residual norm. The parameter angle = 0.7 ("maintaining the convergence strategy") results in such a compromise.
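An illustrative sketch (the function name, matrix type and include path are assumptions, not part of this interface):

#include <Eigen/Sparse>
#include <unsupported/Eigen/IterativeSolvers>  // assumed location of the IDRS header

Eigen::VectorXd solve_with_angle(const Eigen::SparseMatrix<double>& A,
                                 const Eigen::VectorXd& b)
{
  Eigen::IDRS<Eigen::SparseMatrix<double> > solver(A);
  solver.setAngle(0.7);    // the "maintaining the convergence" compromise
  // solver.setAngle(0.0); // always select the residual-minimizing omega
  return solver.solve(b);
}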
void Eigen::IDRS< _MatrixType, _Preconditioner >::setResidualUpdate ( bool update ) [inline]
void Eigen::IDRS< _MatrixType, _Preconditioner >::setS ( Index S ) [inline]
void Eigen::IDRS< _MatrixType, _Preconditioner >::setSmoothing ( bool smoothing ) [inline]
Switches residual smoothing on or off. Residual smoothing results in monotonically decreasing residual norms at the expense of two extra vectors of storage and a few extra vector operations. Although monotonic decrease of the residual norms is a desirable property, the rate of convergence of the unsmoothed and the smoothed process is basically the same. Default is off.
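A usage sketch (the function name, matrix type and include path are assumptions made for illustration):

#include <Eigen/Sparse>
#include <unsupported/Eigen/IterativeSolvers>  // assumed location of the IDRS header

Eigen::VectorXd solve_smoothed(const Eigen::SparseMatrix<double>& A,
                               const Eigen::VectorXd& b)
{
  Eigen::IDRS<Eigen::SparseMatrix<double> > solver(A);
  solver.setSmoothing(true);  // monotonically decreasing residual norms,
                              // at the cost of two extra vectors of storage
  return solver.solve(b);
}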
RealScalar Eigen::IDRS< _MatrixType, _Preconditioner >::m_angle [private]
RealScalar Eigen::IDRS< _MatrixType, _Preconditioner >::m_error [mutable, private]
Definition at line 436 of file IterativeSolverBase.h.
ComputationInfo Eigen::IDRS< _MatrixType, _Preconditioner >::m_info [mutable, private]
Definition at line 438 of file IterativeSolverBase.h.
bool Eigen::IDRS< _MatrixType, _Preconditioner >::m_isInitialized [mutable, private]
Definition at line 119 of file SparseSolverBase.h.
Index Eigen::IDRS< _MatrixType, _Preconditioner >::m_iterations [mutable, private]
Definition at line 437 of file IterativeSolverBase.h.
bool Eigen::IDRS< _MatrixType, _Preconditioner >::m_residual [private]
Index Eigen::IDRS< _MatrixType, _Preconditioner >::m_S [private]
bool Eigen::IDRS< _MatrixType, _Preconditioner >::m_smoothing [private]