How to Use Solving Strategy

From KratosWiki
Revision as of 17:19, 1 August 2013 by Gcasas (Talk | contribs)



The SolvingStrategy class has been conceived as an abstraction of the outermost structure of the numerical algorithm's operations: in stationary problems, all of them; in transient problems, those involved in the complete evolution of the system to the next time step. These operations are typically a sequence of build-and-solve steps, each consisting of:

  1. Building a system
  2. Solving it approximately with a certain tolerance

A SolvingStrategy instance combines simpler structures that are in turn abstractions of component (sub)algorithms. These structures can belong to any of the following classes: Scheme, LinearSolver, BuilderAndSolver, ConvergenceCriteria and even SolvingStrategy itself. The role of each of these is clarified through examples in the following sections; understanding them is necessary to fully grasp the more complex SolvingStrategy. With a reasonable level of generality, the sequence of operations that are (sometimes trivially) performed by a SolvingStrategy instance can be summarized as a predict, build, solve, update and convergence-check loop, detailed below.

Nonlinear Strategies

They are used for problems that, after applying the time discretization implemented in the Scheme class and introducing the boundary conditions into the system, produce systems of nonlinear equations of the form:

K(u)u = f(u)

where u is the vector of unknowns, K(u) the LHS matrix and f(u) the RHS vector, both possibly depending on the solution u. In Kratos this problem is reformulated as:

K(u)(u0 + Δu) = f(u)

and rearranged as:

K(u)Δu = f(u) − K(u)u0

where u0 is an initial value or guess of the solution and Δu its correction. Because u is unknown, this system has in general to be replaced by another:

K'Δu' = f' − K'u'0

such that its solution Δu' is an approximation of Δu. This system is built according to the design of the specific Element and Scheme classes that have been chosen. An iterative strategy (e.g. Newton-Raphson) is in general applied to create a succession of approximate systems

K'nΔu'n = f'n − K'nu'n = Rn

u'n = u'n − 1 + Δu'n − 1

and a convergence criterion is placed on the norm of Δu'n, since it must tend to 0 as the residual Rn tends to 0 (so that the original system is recovered 'in the limit'). Each approximate (linear) system, therefore, has to be solved by means of a LinearSolver.

Linear Strategies

They are used for problems that produce systems of linear equations of the form:

Ku = f

where neither K nor f depends on the unknown. In Kratos these problems are formulated in the same way as nonlinear problems, of which they are just a particular case. The reason lies in the code implementation: it allows for a natural generalization of the SolvingStrategy. That is:

KΔu = f − Ku0

Taking u0 = 0, K'0 = K and f'0 = f, the approximate system coincides with the original system and the solution is reached in one iteration.

Object Description

In this section we describe the SolvingStrategy more concretely by referring to the actual code that implements it. We discuss the C++ SolvingStrategy class and some of its derived classes to explain how to use them effectively, and perhaps to facilitate the task of programming a new one in Kratos.

The strategy pattern lets users implement a new SolvingStrategy and add it to Kratos easily, which increases the extensibility of Kratos. It also allows them to select a particular strategy and use it in place of another in order to change the solving algorithm, which increases the flexibility of Kratos. In addition, all the system matrices and vectors of the systems to be solved are stored in the strategy, which makes it possible to deal with multiple LHSs and RHSs.

A composite pattern is used to let users combine different strategies into one. For example, a fractional step strategy can be implemented by combining the strategies used for each step into one composite strategy. As for Process, the interface for changing the children of a composite strategy is considered too sophisticated and is removed from SolvingStrategy. A composite structure can therefore be constructed by passing all its components at construction time, after which it can be used, but without changing its subalgorithms.

Structure of the base class


Let us look at the constructor's definition:

   template<class TSparseSpace,
            class TDenseSpace,
            class TLinearSolver //= LinearSolver<TSparseSpace,TDenseSpace>
            >
   class SolvingStrategy
   {
   public:
       SolvingStrategy(ModelPart& model_part, bool MoveMeshFlag = false)
           : mr_model_part(model_part)
       {
           SetMoveMeshFlag(MoveMeshFlag);
       }
       // ...
   };

TSparseSpace, TDenseSpace and TLinearSolver are classes that define particular sparse matrix container types, dense matrix container types and the associated LinearSolver. This allows for different linear system solving algorithms to be used without changing the strategy. By looking at the constructor's parameters it is seen that any SolvingStrategy instance will take:

  1. A ModelPart instance, containing the mesh data and the boundary condition information. It holds a set of elements that discretize a domain corresponding to a certain part of the whole model, on which a finite element discretization is to be performed (e.g. there could be more than one ModelPart passed as a parameter, like a structure_model_part and a fluid_model_part in an FSI application).
  2. The flag MoveMeshFlag, which indicates whether the mesh nodes are to be moved with the calculated solution (e.g. if nodal displacements are computed) from inside the SolvingStrategy or not. Note that both parameters are stored as member variables, thus linking a SolvingStrategy to a particular ModelPart instance.

Public Methods

These methods are typically accessed through the Python strategy interface or from inside a larger containing SolvingStrategy instance. The most important ones are listed below. They are meant to be overridden in derived strategies:

   virtual void Predict()

It is empty by default. It is used to produce a guess for the solution. If it is not overridden, a trivial predictor is used, in which the values of the solution step of interest are assumed equal to the old ones.

   virtual double Solve()

It only returns the value of the norm of the solution correction (0.0 by default). This method typically encapsulates the bulk of the computations in the SolvingStrategy. It contains the iterative loop that produces the sequence of approximate solutions: building each system by assembling its local components (by means of a BuilderAndSolver instance, though not necessarily at each step), solving it, and updating the nodal values.

   virtual void Clear()

It is empty by default. It can be used to clear internal storage.

   virtual bool IsConverged()

It only returns true by default. It should be considered a "post-solution" convergence check, useful for coupled analyses. The convergence criterion used is the one applied inside the Solve() step.

   virtual void CalculateOutputData()

This method is used when nontrivial results (e.g. stresses) need to be calculated from the solution. This method should be called only when needed (e.g. before printing), as it can involve a non-negligible cost.

   void MoveMesh()

This method is not a virtual function, so it is not meant to be overridden in derived classes. If MoveMeshFlag is set to true, it simply updates the mesh coordinates with the calculated DISPLACEMENT values (raising an error if the variable DISPLACEMENT is not being solved for).

   virtual int Check()

This method is meant to perform expensive checks. It is designed to be called once to verify that the input is correct.

Example: ResidualBasedNewtonRaphsonStrategy

In this section the ResidualBasedNewtonRaphsonStrategy class is analysed in some detail as an example of a SolvingStrategy derived class, and some comments are made on possible variations for alternative strategies.


     ResidualBasedNewtonRaphsonStrategy(
         ModelPart& model_part,
         typename TSchemeType::Pointer pScheme,
         typename TLinearSolver::Pointer pNewLinearSolver,
         typename TConvergenceCriteriaType::Pointer pNewConvergenceCriteria,
         int MaxIterations = 30,
         bool CalculateReactions = false,
         bool ReformDofSetAtEachStep = false,
         bool MoveMeshFlag = false
         )
         : SolvingStrategy<TSparseSpace, TDenseSpace, TLinearSolver>(model_part, MoveMeshFlag)

Let us look at the different arguments:

  1. The first argument is the model_part, used as explained in the previous section.
  2. The second argument is a pointer to a Scheme instance. It defines the time integration scheme (e.g. Newmark).
  3. The next argument is a pointer to a LinearSolver instance, which defines the linear system solver (e.g. a Conjugate Gradient solver). In this particular case it is used for the solution of the linear system arising at every iteration of Newton-Raphson.
  4. The next argument is a pointer to a ConvergenceCriteria instance. It defines the convergence criterion for the Newton-Raphson procedure. It can be the norm of the residual or something else (e.g. the energy norm).
  5. The next argument is MaxIterations, the cut-off criterion for the iterative procedure. If convergence is not achieved within the allowed number of iterations, the solution loop terminates and the values of the variables of interest at the last iteration are taken as the result, though a message warns that the solution did not converge.
  6. The next parameter is CalculateReactions, which activates the CalculateOutputData method when set to true.
  7. ReformDofSetAtEachStep should be set to true if nodes or elements are erased or added during the solution of the problem.
  8. MoveMeshFlag should be set to true if a non-Eulerian approach is used (i.e. the mesh is moved).

The last two flags are therefore important when choosing between Eulerian and Lagrangian frameworks.
