# How to Use Solving Strategy


# Introduction

The SolvingStrategy class has been conceived as an abstraction of the outermost structure of the numerical algorithm: the operations of a stationary problem or, in the context of transient problems, those involved in the complete evolution of the system to the next time step. These operations are typically a sequence of build-and-solve steps, each consisting of:

- Building a system
- Solving it approximately with a certain tolerance

A SolvingStrategy instance combines simpler structures that are in turn abstractions of component (sub)algorithms. These structures can belong to any of the following classes: Scheme, LinearSolver, BuilderAndSolver, ConvergenceCriteria and even SolvingStrategy itself. The role of each of these is clarified through examples in the following sections; understanding these components is necessary to fully grasp the more complex SolvingStrategy. With a reasonable level of generality, the sequence of operations that are (sometimes trivially) performed by a SolvingStrategy instance is outlined in the sections below.

## Nonlinear Strategies

They are used for problems that, after applying the time discretization implemented in the Scheme class and introducing the boundary conditions into the system, produce systems of nonlinear equations of the form:

*K*(*u*)*u* = *f*(*u*)

where *u* is the vector of unknowns, *K*(*u*) the LHS matrix and *f*(*u*) the RHS vector, both possibly depending on the solution *u*. In Kratos this problem is reformulated as:

*K*(*u*)(*u*_{0} + Δ*u*) = *f*(*u*)

and rearranged as:

*K*(*u*)Δ*u* = *f*(*u*) − *K*(*u*)*u*_{0}

where *u*_{0} is an initial value or guess of the solution and Δ*u* its correction. Because *u* is unknown, this system in general has to be replaced by another:

*K*'Δ*u*' = *f*' − *K*'*u*'_{0}

such that its solution Δ*u*' is an approximation of Δ*u*. This system is built according to the design of the specific Element and Scheme classes that have been chosen. An iterative strategy (e.g. Newton-Raphson) is in general applied to create a succession of approximate systems:

*K*'_{n}Δ*u*'_{n} = *f*'_{n} − *K*'_{n}*u*'_{n} = *R*_{n}

*u*'_{n} = *u*'_{n − 1} + Δ*u*'_{n − 1}

and a convergence criterion is placed on the norm of Δ*u*'_{n}, since it must tend to 0 as the residual *R*_{n} tends to 0 (the original system being recovered 'in the limit'). Each approximate (linear) system therefore has to be solved by means of a LinearSolver.
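The iteration above can be illustrated on a small example. The following sketch (plain Python with NumPy, not Kratos code; the matrix *K*(*u*) and the vector *f* are arbitrary illustrative choices) applies the succession of linear systems written above, using *K*(*u*_{n}) itself as the iteration matrix, and stops when the norm of the correction Δ*u*'_{n} falls below a tolerance:

```python
import numpy as np

def K(u):
    # Solution-dependent LHS matrix (illustrative choice, diagonally dominant)
    return np.array([[4.0 + u[0]**2, 1.0],
                     [1.0, 3.0 + u[1]**2]])

f = np.array([5.0, 4.0])  # RHS, kept constant for simplicity

u = np.zeros(2)           # initial guess u_0
for n in range(50):
    # Residual R_n = f_n - K_n u_n
    R = f - K(u) @ u
    # Approximate linear system K_n du_n = R_n, solved by a "LinearSolver"
    du = np.linalg.solve(K(u), R)
    u = u + du            # u_{n+1} = u_n + du_n
    # Convergence criterion on the norm of the correction
    if np.linalg.norm(du) < 1e-10:
        break
```

Note that reusing *K*(*u*_{n}) as the iteration matrix, exactly as in the formulas above, amounts to a Picard-type linearization; a true Newton-Raphson strategy would instead build the consistent tangent matrix in each iteration.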

## Linear Strategies

They are used for problems that produce systems of linear equations of the form:

*K**u* = *f*

where neither *K* nor *f* depends on the unknown. In Kratos these problems are formulated exactly as the nonlinear ones, of which they are a particular case. This choice is made in the code implementation to allow a natural generalization of the SolvingStrategy. That is:

*K*Δ*u* = *f* − *K**u*_{0}

Taking *u*_{0} = 0, *K*'_{0} = *K* and *f*'_{0} = *f*, the approximate system coincides with the original one and the solution is reached in a single iteration.
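In the same notation, the linear case reduces to a single build-and-solve step. A minimal sketch (plain NumPy, not Kratos code; the matrix and vector values are illustrative):

```python
import numpy as np

K = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # constant LHS matrix
f = np.array([1.0, 2.0])     # constant RHS vector

u0 = np.zeros(2)             # trivial initial guess u_0 = 0
# One iteration of the general scheme: K du = f - K u_0
du = np.linalg.solve(K, f - K @ u0)
u = u0 + du                  # already the exact solution
```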

# Object Description

In this section we describe the SolvingStrategy more concretely by referring to the actual code that implements it. We discuss the C++ SolvingStrategy base class and some of its children to explain how to use them effectively and, possibly, to facilitate the task of programming a new one in Kratos.

The strategy pattern lets users implement a new SolvingStrategy and add it to Kratos easily, which increases the extensibility of Kratos. It also allows them to select one particular strategy over another in order to change the solving algorithm, which increases the flexibility of Kratos. In addition, all the matrices and vectors of the systems to be solved are stored in the strategy, which makes it possible to deal with multiple LHSs and RHSs.

A composite pattern is used to let users combine different strategies into one. For example, a fractional step strategy can be implemented by combining the strategies used for each step into one composite strategy. As for Process, the interface for changing the children of a composite strategy is considered too sophisticated and is not part of SolvingStrategy. A composite structure is therefore constructed by passing all its components at construction time, and can then be used, but without changing its sub-algorithms.
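The two patterns can be sketched together as follows. This is a hypothetical illustration in plain Python, not actual Kratos code: all class names below are invented. The composite strategy receives its sub-strategies at construction time, exposes the same Solve() interface as any other strategy, and offers no way to change its children afterwards:

```python
class BaseStrategy:
    """Common interface shared by all strategies (strategy pattern)."""
    def Solve(self):
        raise NotImplementedError

class VelocityStrategy(BaseStrategy):   # hypothetical sub-algorithm
    def Solve(self):
        return "velocity step"

class PressureStrategy(BaseStrategy):   # hypothetical sub-algorithm
    def Solve(self):
        return "pressure step"

class FractionalStepStrategy(BaseStrategy):
    """Composite pattern: children are fixed at construction time."""
    def __init__(self, sub_strategies):
        self._sub_strategies = tuple(sub_strategies)  # immutable after construction
    def Solve(self):
        # Run each fractional step in sequence
        return [s.Solve() for s in self._sub_strategies]

strategy = FractionalStepStrategy([VelocityStrategy(), PressureStrategy()])
result = strategy.Solve()
```

Because FractionalStepStrategy implements the same interface as its children, it can itself be used as a component of a still larger strategy.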

## Structure of the base class

### Constructor

Let us look at the constructor's definition:

```cpp
SolvingStrategy(
    ModelPart& model_part,
    bool MoveMeshFlag = false
) : mr_model_part(model_part)
{
    SetMoveMeshFlag(MoveMeshFlag);
}
```

Therefore, any SolvingStrategy instance takes a ModelPart instance, containing the mesh data and the boundary condition information, and the flag MoveMeshFlag, which indicates whether the mesh nodes are to be moved with the calculated solution (e.g. if nodal displacements are computed) from inside the SolvingStrategy. Note that both parameters are stored as member variables, thus linking a SolvingStrategy to a particular ModelPart instance.

### Public Methods

These methods are typically accessed through the Python strategy interface or from inside a larger, containing SolvingStrategy instance. The most important ones are listed below; they are meant to be overridden in derived strategies:

```cpp
virtual void Predict()
```

It is empty by default. It is used to produce a guess for the solution. If it is not overridden, a trivial prediction is used in which the values of the solution step of interest are assumed equal to the old values.

```cpp
virtual double Solve()
```

It returns the norm of the solution correction (0.0 by default). This method typically encapsulates the bulk of the computations performed by the SolvingStrategy. It contains the iterative loop that implements the succession of approximate solutions: building the system by assembling local components (by means of a BuilderAndSolver instance, possibly not at every iteration), solving it with a LinearSolver, and updating the nodal values.

```cpp
virtual void Clear()
```

It is empty by default. It can be used to clear internal storage.

```cpp
virtual bool IsConverged()
```

It returns true by default. It should be considered a "post-solution" convergence check, which is useful for coupled analyses. The convergence criterion used is the one applied inside the Solve() step.

```cpp
virtual void CalculateOutputData()
```

This method is used when nontrivial results (e.g. stresses) need to be calculated from the solution. It should be called only when needed (e.g. before printing), as it can involve a non-negligible cost.

```cpp
void MoveMesh()
```

This method is not virtual, so it is not meant to be overridden in derived classes. If MoveMeshFlag is set to true, it simply updates the mesh coordinates with the calculated DISPLACEMENT values (raising an error if the variable DISPLACEMENT is not being solved for).
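The effect of MoveMesh() can be sketched as follows, using a hypothetical minimal node object (plain Python, not the Kratos API): each node's current coordinates are set to its initial position plus the computed displacement.

```python
class Node:
    """Minimal stand-in for a mesh node (illustrative only)."""
    def __init__(self, x0, y0):
        self.X0, self.Y0 = x0, y0       # initial (reference) coordinates
        self.X, self.Y = x0, y0         # current coordinates
        self.displacement = (0.0, 0.0)  # computed solution at this node

def move_mesh(nodes):
    # Current position = initial position + computed displacement
    for node in nodes:
        dx, dy = node.displacement
        node.X = node.X0 + dx
        node.Y = node.Y0 + dy

nodes = [Node(0.0, 0.0), Node(1.0, 0.0)]
nodes[1].displacement = (0.1, -0.05)
move_mesh(nodes)
```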

```cpp
virtual int Check()
```

This method is meant to perform expensive checks. It is designed to be called once to verify that the input is correct.

## Example: ResidualBasedNewtonRaphsonStrategy

### Constructor

```cpp
ResidualBasedNewtonRaphsonStrategy(
    ModelPart& model_part,
    typename TSchemeType::Pointer pScheme,
    typename TLinearSolver::Pointer pNewLinearSolver,
    typename TConvergenceCriteriaType::Pointer pNewConvergenceCriteria,
    int MaxIterations = 30,
    bool CalculateReactions = false,
    bool ReformDofSetAtEachStep = false,
    bool MoveMeshFlag = false
)
```

The first argument is the model_part: the union of elements thought of as belonging to one entity, which we call a model part. For example, this could be structure_model_part and fluid_model_part in an FSI application.

- pScheme defines the time integration scheme, e.g. Newmark.
- pNewLinearSolver is the solver used (in this case) for the solution of the linear system arising at every iteration of Newton-Raphson. It could be, e.g., a Conjugate Gradient solver.
- pNewConvergenceCriteria is the criterion for declaring the Newton-Raphson procedure (in this case) converged. It could be a norm of the residual or something else, like the energy norm.
- MaxIterations is a cut-off criterion for the Newton-Raphson loop (in this case): if convergence is not achieved within the allowed number of iterations, the solution terminates and the values of the variables of interest at the last iteration are taken as the result, although a message appears stating that the solution did not converge.

The last two flags are important when choosing between Eulerian and Lagrangian frameworks: if nodes or elements are erased or added during the solution of the problem, ReformDofSetAtEachStep must be set to true, and if a non-Eulerian (Lagrangian) approach is used, the mesh is also moved, so MoveMeshFlag must be set to true in this case.