Revision as of 15:27, 2 August 2013
The SolvingStrategy class has been conceived as an abstraction of the outermost structure of the numerical algorithm's operations in stationary problems or, in the context of transient problems, of those involved in the complete evolution of the system to the next time step. These operations are typically a sequence of build-and-solve steps, each consisting of:
- Building a system
- Solving it approximately within a certain tolerance
A SolvingStrategy instance combines simpler structures that in turn are abstractions of component (sub)algorithms. These structures can belong to any of the following classes: Scheme, LinearSolver, BuilderAndSolver, ConvergenceCriteria and even SolvingStrategy itself. The role of each of these is clarified in the following sections. Nonetheless, it is important to understand each of these simpler structures in order to fully grasp the more complex SolvingStrategy, so further reading is recommended (see the 'HOW TOs' list). With a reasonable degree of generality, the sequence of operations that are (sometimes trivially) performed by a SolvingStrategy instance can be summarized as follows:
Nonlinear problems
They are employed in problems in which applying the particular time discretization (which is implemented in the Scheme class instance) and introducing the prescribed boundary conditions produces systems of nonlinear equations of the form:
K(u)u = f(u)
where u is the vector of unknowns, K the LHS matrix and f(u) the RHS vector, both possibly depending on the solution u. In Kratos this problem is reformulated as:
K(u)(u0 + Δu) = f(u)
and rearranged as:
K(u)Δu = f(u) − K(u)u0
where u0 is an initial value or guess of the solution and Δu its correction. Because u is unknown, this system must in general be replaced by another one:
K'Δu' = f' − K'u'0
such that its solution Δu' is an approximation of Δu. This system is built according to the design of the specific Element and Scheme classes that have been chosen. An iterative strategy (e.g. Newton-Raphson) is in general applied to create a succession of approximate systems
K'_n Δu'_n = f'_n − K'_n u'_n = R_n
u'_n = u'_{n−1} + Δu'_{n−1}
and a convergence criterion is placed on the norm of Δu'_n, since it must tend to 0 as the residual R_n tends to 0 (so that the original system is recovered 'in the limit'). Each approximate linear system can then be solved by means of a LinearSolver instance.
Linear problems
They are used for problems that produce systems of linear equations of the form:
Ku = f
where neither K nor f depends on the unknown. In Kratos these problems are formulated as nonlinear problems, of which they are simply a particular case. The reason for this lies in the code design: it allows for a natural generalization of the SolvingStrategy. The same type of reformulation yields:
KΔu = f − Ku0
Taking u0 = 0, so that K'0 = K and f'0 = f, the first approximate system coincides with the original system and the solution is reached in one iteration.
In this section we interpret the SolvingStrategy in a more concrete way by referring to its actual implementation in the code. Therefore, here we will discuss the C++ SolvingStrategy class and some of its children, to explain how to effectively use them and to facilitate the task of programming a new one in Kratos.
The strategy pattern is designed to allow users to implement a new SolvingStrategy and add it to Kratos easily, which increases the extensibility of Kratos. It also allows them to easily select a particular strategy and use it instead of another in order to change the solving algorithm in a straightforward way, which increases the flexibility of the code.
On the other hand, a composite pattern is used to let users combine different strategies in one. For example, a fractional step strategy can be implemented by combining the different strategies used for each step in one composite strategy. As in the case of the Process class (see General Structure), the interface for changing the children of the composite strategy is considered to be too sophisticated and is removed from the SolvingStrategy. Therefore, a composite structure can be constructed by giving all its components at construction time, after which it can be used without changing its sub-algorithms. In the same spirit, all the system matrices and vectors of the systems to be solved are stored in the strategy. This permits dealing with multiple LHS and RHS.
Structure of the base class
Let us look at the class template and the constructor's definition:
template<class TSparseSpace,
         class TDenseSpace,
         class TLinearSolver //= LinearSolver<TSparseSpace,TDenseSpace>
         >
class SolvingStrategy;

SolvingStrategy(ModelPart& model_part, bool MoveMeshFlag = false)
TSparseSpace, TDenseSpace and TLinearSolver are classes that define particular sparse matrix container types, dense matrix container types and the associated LinearSolver. This allows different linear system solving algorithms to be used without changing the strategy. By looking at the constructor's parameters it can be seen that any SolvingStrategy instance will take:
- A ModelPart instance, containing the mesh data and the boundary conditions information. It contains a set of elements that discretize a domain corresponding to a certain part of the whole model, on which a finite element discretization is to be performed (e.g. there could be more than one ModelPart passed as a parameter, such as a structure_model_part and a fluid_model_part in an FSI application)
- The flag MoveMeshFlag, which indicates whether the mesh nodes are to be moved with the calculated solution (e.g. if nodal displacements are computed), modifying them from inside the SolvingStrategy. Note that both parameters are stored as member variables, thus linking a SolvingStrategy to a particular ModelPart instance.
These methods are typically accessed through the Python strategy interface or from inside a larger, containing SolvingStrategy instance. The most important ones are listed below. They are meant to be overridden in derived strategies:
virtual void Predict()
It is empty by default. It is used to produce a guess for the solution. If it is not called, a trivial predictor is used, in which the values of the solution step of interest are assumed equal to the old values.
virtual double Solve()
It only returns the value of the norm of the solution correction (0.0 by default). This method typically encapsulates the bulk of the computations of the SolvingStrategy. It contains the iterative loop that implements the succession of approximate solutions: building the system by assembling local components (by means of a BuilderAndSolver instance, and possibly not at each step), solving it, and updating the nodal values.
virtual void Clear()
It is empty by default. It can be used to clear internal storage.
virtual bool IsConverged()
It only returns true by default. It should be considered a "post solution" convergence check, which is useful for coupled analysis. The convergence criterion used is the one applied inside the Solve() step.
virtual void CalculateOutputData()
This method is used when nontrivial results (e.g. stresses) need to be calculated from the solution. This method should be called only when needed (e.g. before printing), as it can involve a non-negligible cost.
void MoveMesh()
This method is not virtual, so it is not meant to be rewritten in derived classes. It simply updates the mesh coordinates with the calculated DISPLACEMENT values (raising an error if the DISPLACEMENT variable is not being solved for) whenever MoveMeshFlag is set to true.
virtual int Check()
This method is meant to perform expensive checks. It is designed to be called once, to verify that the input is correct. By default, it checks whether the DISPLACEMENT variable is needed (raising an error in case it is but has not been added to the nodes) and loops over the elements and conditions of the model part, calling their respective Check methods.
The return integer is to be interpreted as a flag used to inform the user. It is 0 by default.
In this section the ResidualBasedNewtonRaphsonStrategy class is analysed in some detail as an example of a SolvingStrategy derived class, and some comments are made on possible variations for alternative strategies.
ResidualBasedNewtonRaphsonStrategy(
    ModelPart& model_part,
    typename TSchemeType::Pointer pScheme,
    typename TLinearSolver::Pointer pNewLinearSolver,
    typename TConvergenceCriteriaType::Pointer pNewConvergenceCriteria,
    int MaxIterations = 30,
    bool CalculateReactions = false,
    bool ReformDofSetAtEachStep = false,
    bool MoveMeshFlag = false
    )
    : SolvingStrategy<TSparseSpace, TDenseSpace, TLinearSolver>(model_part, MoveMeshFlag)
Let us look at the different arguments:
- The first argument is the model_part, used as explained in the previous section.
- The second argument is a pointer to a Scheme instance. It defines the time integration scheme (e.g. Newmark).
- The next argument is a pointer to a LinearSolver instance, which defines the linear system solver (e.g. a Conjugate Gradient solver). In this particular case it is used for the solution of the linear system arising at every iteration of Newton-Raphson.
- The next argument is a pointer to a ConvergenceCriteria instance. It defines the convergence criterion for the Newton-Raphson procedure. It can be the norm of the residual or something else (e.g. the energy norm)
- The next argument is MaxIterations. It is the cut-off criterion for the iterative procedure. If convergence is not achieved within the allowed number of iterations, the solution terminates and the value of the variable of interest achieved at the last iteration is taken as the result, though a message is printed warning that the solution did not converge.
- The next parameter is CalculateReactions, which activates the calculation of reactions after the solution is obtained when set to true.
- ReformDofSetAtEachStep should be set to true if nodes or elements are erased or added during the solution of the problem.
- MoveMeshFlag should be set to true if a non-Eulerian approach is used (i.e. the mesh is moved).
The last two flags are therefore important when choosing between Eulerian and Lagrangian frameworks.
Let us look at the member variables of the ResidualBasedNewtonRaphsonStrategy class:
typename TSchemeType::Pointer mpScheme;
typename TLinearSolver::Pointer mpLinearSolver;
typename TBuilderAndSolverType::Pointer mpBuilderAndSolver;
typename TConvergenceCriteriaType::Pointer mpConvergenceCriteria;
TSystemMatrixPointerType mpA;
TSystemVectorPointerType mpDx;
TSystemVectorPointerType mpb;
bool mSolutionStepIsInitialized;
bool mInitializeWasPerformed;
bool mCalculateReactionsFlag;
bool mReformDofSetAtEachStep;
bool mKeepSystemConstantDuringIterations;
unsigned int mMaxIterationNumber;
The first four variables are pointers to structures that carry out a great part of the computations. These are instances of classes Scheme, LinearSolver, BuilderAndSolver and ConvergenceCriteria, the role of which has been briefly outlined in the previous section.
The next three variables are pointers to the system matrix K (mpA) and the vectors Δu (mpDx) and f (mpb). Their respective types are defined in the base class template arguments TSparseSpace and TDenseSpace, described along with the base class' constructor, which provides the desired flexibility in the selection of a corresponding LinearSolver.
The next three variables are flags indicating the status of the resolution process. They are used to control the internal workflow.
The rest of the variables are customization flags:
- mKeepSystemConstantDuringIterations: indicates whether the system matrices are to be kept unchanged across iterations, instead of being rebuilt at each one as in the complete Newton-Raphson method. Setting it to true will lower the convergence rate but can result in a more efficient method in some applications.
- mReformDofSetAtEachStep: it is set to true only when the connectivity changes at each time step (e.g. there is remeshing at each step). This operation involves requesting the DOF set from each element and rebuilding the system matrices at each time step, which is expensive. Therefore, it should be used only when strictly necessary; otherwise this is only done at the beginning of the calculation.
- mMaxIterationNumber: its meaning has already been explained in the description of the class' constructor.
Here we discuss in some detail the specific implementation of this derived class' public methods.
It calls the scheme's 'Predict' method, moving the mesh if needed:
GetScheme()->Predict(BaseType::GetModelPart(), rDofSet, mA, mDx, mb);
if (this->MoveMeshFlag() == true) BaseType::MoveMesh();
It contains the iterative loop of the Newton-Raphson method. The needed elemental matrices are calculated by a Scheme instance, and the system matrices are assembled by the BuilderAndSolver, which takes the scheme as a parameter and can deal with the particular container structures and linear solver of the SolvingStrategy because it too takes them as template arguments. The flow of operations is as follows:
A first iteration is initiated by checking whether convergence has already been achieved by the current state:
is_converged = mpConvergenceCriteria->PreCriteria(BaseType::GetModelPart(), rDofSet, mA, mDx, mb);
If the base type member variable mRebuildLevel is greater than 1, or the stiffness matrix has not been built yet, the whole system is rebuilt and solved:
if (BaseType::mRebuildLevel > 1 || BaseType::mStiffnessMatrixIsBuilt == false)
pBuilderAndSolver->BuildAndSolve(pScheme, BaseType::GetModelPart(), mA, mDx, mb);
which performs one iteration, that is, it builds the complete system and solves for mDx.
Otherwise (i.e. at lower rebuild levels, once the stiffness matrix has been built), only the RHS is rebuilt before solving, and the following method is called instead:
pBuilderAndSolver->BuildRHSAndSolve(pScheme, BaseType::GetModelPart(), mA, mDx, mb);
Next the problem variables are updated with the obtained results. This is performed by the scheme:
pScheme->FinalizeNonLinIteration(BaseType::GetModelPart(), mA, mDx, mb);
Additionally, the mesh is moved if needed:
if (BaseType::MoveMeshFlag() == true) BaseType::MoveMesh();
Now the 'PostCriteria' convergence check is performed only if the 'PreCriteria' method in step 1 had returned 'true'. Otherwise the algorithm simply continues. This method may require updating the RHS:
if (mpConvergenceCriteria->GetActualizeRHSflag() == true)
pBuilderAndSolver->BuildRHS(pScheme, BaseType::GetModelPart(), mb);
is_converged = mpConvergenceCriteria->PostCriteria(BaseType::GetModelPart(), rDofSet, mA, mDx, mb);
The iterative loop is then initiated:
while (is_converged == false && iteration_number++ < mMaxIterationNumber)
Just like in Step 1, the 'pre' convergence criteria are assessed.
Only if needed, Step 2 is repeated:
if (BaseType::mRebuildLevel > 1 || BaseType::mStiffnessMatrixIsBuilt == false)
//Step 2 is performed
Step 3 is repeated
Step 4 is repeated
Once the loop is finished, reactions are calculated if required:
if (mCalculateReactionsFlag == true)
pBuilderAndSolver->CalculateReactions(pScheme, BaseType::GetModelPart(), mA, mDx, mb);
Finally, the scheme's and the builder and solver's 'FinalizeSolutionStep' methods are called, along with some other clearing methods as required.
It calls special methods defined in the base class template argument classes TSparseSpace and TDenseSpace to clear and resize to 0 the system matrix and vectors (mpA, mpDx and mpb). It also calls the builder and solver's and the scheme's respective 'Clear' methods, since they in turn also contain matrices. In order to make sure that the DOFs are recalculated, DofSetIsInitializedFlag is set to false.
It calls the builder and solver's 'BuildRHS' method if an updated RHS vector is needed by the particular ConvergenceCriteria class that is used:
if (mpConvergenceCriteria->mActualizeRHSIsNeeded == true)
GetBuilderAndSolver()->BuildRHS(GetScheme(), BaseType::GetModelPart(), mb);
Then it calls ConvergenceCriteria's 'PostCriteria' method, which applies the particular criteria to its input and returns its output (true or false):
return mpConvergenceCriteria->PostCriteria(BaseType::GetModelPart(), rDofSet, mA, mDx, mb);
It calls the corresponding scheme's method:
GetScheme()->CalculateOutputData(BaseType::GetModelPart(), rDofSet, mA, mDx, mb);
Kratos applications are usually designed to be used through a Python interface. Therefore, the objects described in this page are usually created in a Python script that we refer to as the strategy Python script (see How to construct the "solving strategies"). Similarly, the public methods of SolvingStrategy will typically be called from the main script, which is usually also Python-based (see Python Script Tutorial: Using Kratos Solvers).