# How to Use Solving Strategy


## Latest revision as of 16:05, 29 December 2014


# Introduction

The SolvingStrategy class has been conceived as an abstraction of the outermost structure of the numerical algorithm's operations in stationary problems or, in the context of transient problems, those involved in the complete evolution of the system to the next time step. These operations are typically a sequence of build-and-solve steps, each consisting of:

- Building a system
- Solving it approximately within a certain tolerance

A SolvingStrategy instance combines simpler structures that are in turn abstractions of component (sub)algorithms. These structures can belong to any of the following classes: Scheme, LinearSolver, BuilderAndSolver, ConvergenceCriteria and even SolvingStrategy itself. The role of each of these is clarified in the following sections. Understanding these component structures is nonetheless important to fully grasp the more complex SolvingStrategy, so further reading is recommended (see the 'HOW TOs' list). With a reasonable degree of generality, the sequence of operations that a SolvingStrategy performs (sometimes trivially) is described in the following sections:

## Nonlinear Strategies

They are employed in problems in which applying the particular time discretization (implemented in the Scheme class instance) and introducing the prescribed boundary conditions produces systems of nonlinear equations of the form:

K(u)u = f(u)

where u represents the vector of unknowns, K the 'stiffness matrix' and f the force vector, both K and f possibly depending on the solution u.

Because u is unknown, this system is then approximately solved by some iterative algorithm that, in Kratos, takes the following form:

A_{n} Δu_{n} = b_{n}

u_{n} = u_{n-1} + Δu_{n}

Here u_{n} is an approximation of u, and A_{n} and b_{n} can be calculated from previously computed solutions. In Kratos each system is built according to the design of the specific Element and Scheme classes that have been chosen, and then solved by means of a LinearSolver instance. A convergence criterion is often placed on the norm of Δu_{n} (which should tend to 0), although other criteria are possible (e.g. an energy norm). These criteria are implemented in a particular instance of the ConvergenceCriteria class.

A fixed-point strategy is generally applied to create the sequence of systems. For example, in the Newton-Raphson method, b_{n} is equal to minus the residual (f − Ku) and A_{n} to the tangent matrix (d(f − Ku)/du), both evaluated at u_{n-1}. In the context of Kratos, A_{n} is regarded as the LHS and b_{n} as the RHS.
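The fixed-point iteration above can be sketched on a one-unknown toy problem. This is plain Python, not Kratos API; the problem u³ = 8 and all function names are illustrative:

```python
# Newton-Raphson sketch for K(u)u = f with one unknown (plain Python).
# Toy problem: u^3 = 8, i.e. K(u) = u^2 and f = 8; exact solution u = 2.

def residual(u):
    return 8.0 - u**3        # r = f - K(u)u

def tangent(u):
    return -3.0 * u**2       # d(f - Ku)/du

u = 1.0                      # initial guess u_0
for n in range(50):
    # A_n Δu_n = b_n with A_n = d(f - Ku)/du and b_n = -r, both at u_{n-1}
    du = -residual(u) / tangent(u)
    u += du                  # u_n = u_{n-1} + Δu_n
    if abs(du) < 1e-12:      # convergence criterion on the correction norm
        break
```

Note that the signs of A_{n} and b_{n} can be flipped together without changing Δu_{n}; the convention above follows the text.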

## Linear Strategies

They are used for problems that produce systems of linear equations of the form:

Ku = f

where neither K nor f depends on the unknown. In Kratos these problems are formulated as nonlinear problems, of which they are then only a particular case. The reason for this lies in code design: in this way it is easier to obtain a natural generalization of the SolvingStrategy class. Therefore, also in this context, the increment Δu is solved for. Taking u_{0} = 0, A_{0} = K and b_{0} = f, the approximate system coincides with the original system and the solution is reached in one iteration.
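A minimal sketch of this "linear problem as one Newton step" idea, with an illustrative 2x2 system (plain Python, not Kratos code):

```python
# A linear problem K u = f treated as a single Newton step: with u_0 = 0,
# A_0 = K and b_0 = f, the first increment Δu_0 already solves the system.
K = [[4.0, 1.0],
     [1.0, 3.0]]
f = [1.0, 2.0]

def solve_2x2(A, b):
    # Cramer's rule, enough for this 2x2 illustration
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

u = [0.0, 0.0]                            # u_0 = 0
du = solve_2x2(K, f)                      # A_0 Δu_0 = b_0
u = [ui + dui for ui, dui in zip(u, du)]  # u_1 = u_0 + Δu_0

# the residual f - K u vanishes after this single iteration
r = [f[i] - sum(K[i][j] * u[j] for j in range(2)) for i in range(2)]
```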

# Object Description

In this section we look at the SolvingStrategy in a more concrete way by referring to its actual implementation in the code. Therefore, here we will discuss the SolvingStrategy C++ class and some of its children to explain how to use them effectively and perhaps facilitate the task of programming a new one in Kratos.

The strategy pattern is designed to allow users to implement a new SolvingStrategy and add it to Kratos easily, which increases the extendibility of Kratos. It also allows them to easily select a particular strategy and use it instead of another in order to change the solving algorithm in a straightforward way, which increases the flexibility of the code.

On the other hand, a composite pattern is used to let users combine different strategies in one. For example, a fractional step strategy can be implemented by combining different strategies used for each step in one composite strategy. As in the case of the Process class (see General Structure), the interface for changing the children of the composite strategy is considered too sophisticated and is removed from the SolverStrategy. Therefore, a composite structure can be constructed by providing all its components at construction time, and can then be used, but without changing its sub-algorithms. In the same spirit, all the system matrices and vectors of the systems to be solved are stored in the strategy. This permits dealing with multiple LHSs and RHSs.

## Structure of the base class

### Constructor

Let us look at the constructor's definition:

    template<class TSparseSpace,
             class TDenseSpace,
             class TLinearSolver //= LinearSolver<TSparseSpace,TDenseSpace>
             >
    SolvingStrategy(
        ModelPart& model_part,
        bool MoveMeshFlag = false
    ) : mr_model_part(model_part)
    {
        SetMoveMeshFlag(MoveMeshFlag);
    }

TSparseSpace, TDenseSpace and TLinearSolver are classes that define particular sparse matrix container types, dense matrix container types and the associated LinearSolver. This allows different linear system solving algorithms to be used without changing the strategy. By looking at the constructor's parameters it is seen that any SolvingStrategy instance will take at least:

- A ModelPart instance, containing the mesh data and the boundary conditions information. It contains a set of elements that discretize a domain which corresponds to a certain part of the whole model and in which a finite element discretization is to be performed (e.g. there could be more than one model part as a parameter, such as a structure_model_part and a fluid_model_part in an FSI application)
- The flag 'MoveMeshFlag', which indicates whether the mesh nodes are to be moved from inside the SolverStrategy with the calculated solution (e.g. if nodal displacements are computed). Note that both parameters are stored as member variables, thus linking a SolverStrategy to a particular ModelPart instance.

### Public Methods

These methods are typically accessed through the Python strategy interface or from inside a bigger containing SolverStrategy instance. The most important ones are listed below. They are meant to be overridden in a derived strategy:

virtual void Predict()

It is empty by default. It is used to produce a guess for the solution. If it is not called, a trivial predictor is used in which the values of the solution step of interest are assumed equal to the old values.

virtual double Solve()

It returns the value of the norm of the solution correction (0.0 by default). This method typically encapsulates the largest amount of computation of all the methods in the SolverStrategy. It contains the iterative loop that implements the sequence of approximate solutions: building the system by assembling local components (by means of a BuilderAndSolver instance, though possibly not at each step), solving it and updating the nodal values.
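The orchestration that Solve() performs can be pictured with stand-in callables playing the roles of BuilderAndSolver, LinearSolver and ConvergenceCriteria. This is illustrative Python only, not the real Kratos classes or signatures:

```python
# Schematic of the Solve() loop, with stand-in callables playing the roles
# of BuilderAndSolver, LinearSolver and ConvergenceCriteria.
class ToySolvingStrategy:
    def __init__(self, build_system, linear_solve, is_converged, max_iter=30):
        self.build_system = build_system    # BuilderAndSolver role
        self.linear_solve = linear_solve    # LinearSolver role
        self.is_converged = is_converged    # ConvergenceCriteria role
        self.max_iter = max_iter

    def Solve(self, u):
        correction_norm = 0.0
        for _ in range(self.max_iter):
            A, b = self.build_system(u)     # build the system at the iterate
            du = self.linear_solve(A, b)    # solve A du = b
            u += du                         # update the 'nodal values'
            correction_norm = abs(du)
            if self.is_converged(du):
                break
        return u, correction_norm           # norm of the last correction

# scalar demo: u^2 = 4 solved with A = 2u (tangent) and b = 4 - u^2 (residual)
strategy = ToySolvingStrategy(
    build_system=lambda u: (2.0 * u, 4.0 - u * u),
    linear_solve=lambda A, b: b / A,
    is_converged=lambda du: abs(du) < 1e-12,
)
u, norm = strategy.Solve(1.0)
```

The strategy itself only orchestrates; all the heavy lifting is delegated to its components, which mirrors the composite design described above.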

virtual void Clear()

It is empty by default. It can be used to clear internal storage.

virtual bool IsConverged()

It returns true by default. It should be considered a "post solution" convergence check, which is useful for coupled analysis. The convergence criterion used is the one applied inside the 'Solve()' step.

virtual void CalculateOutputData()

This method is used when nontrivial results (e.g. stresses) need to be calculated from the solution. This method should be called only when needed (e.g. before printing), as it can involve a non-negligible cost.

void MoveMesh()

This method is not a virtual function, so it is not meant to be overridden in derived classes. If MoveMeshFlag is set to true, it simply updates the mesh coordinates with the calculated DISPLACEMENT values (raising an error if the variable DISPLACEMENT is not being solved for).
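The idea can be pictured with a minimal stand-in; the Node class and its attribute names here are illustrative, not the Kratos ones:

```python
# Sketch of the MoveMesh idea: new coordinates = initial coordinates +
# the DISPLACEMENT computed for each node.
class Node:
    def __init__(self, x0, y0):
        self.X0, self.Y0 = x0, y0           # initial position
        self.displacement = (0.0, 0.0)      # solved DISPLACEMENT value
        self.X, self.Y = x0, y0             # current position

def move_mesh(nodes):
    for node in nodes:
        dx, dy = node.displacement
        node.X = node.X0 + dx               # move node with its displacement
        node.Y = node.Y0 + dy

nodes = [Node(0.0, 0.0), Node(1.0, 0.0)]
nodes[1].displacement = (0.1, -0.05)
move_mesh(nodes)
```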

virtual int Check()

This method is meant to perform expensive checks. It is designed to be called once, to verify that the input is correct. By default it checks whether the DISPLACEMENT variable is needed, raising an error in case it is but has not been added to the nodes, and it loops over the elements and conditions of the model part, calling their respective Check methods:

it->Check(GetModelPart().GetProcessInfo());

The return integer is to be interpreted as a flag used to inform the user. It is 0 by default.
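The delegation pattern behind Check can be sketched with stand-in objects (illustrative names, not Kratos classes):

```python
# Schematic of the Check pattern: expensive input validation performed once,
# delegated to each element's own Check method.
class ToyElement:
    def __init__(self, node_ids):
        self.node_ids = node_ids

    def Check(self, process_info):
        # each element validates its own input
        if len(self.node_ids) < 2:
            raise ValueError("element needs at least two nodes")
        return 0

def strategy_check(elements, process_info):
    for elem in elements:
        elem.Check(process_info)    # raises on invalid input
    return 0                        # 0 signals 'no problems found'

flag = strategy_check([ToyElement([1, 2]), ToyElement([2, 3])],
                      process_info={})
```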

## Example: ResidualBasedNewtonRaphsonStrategy

In this section the ResidualBasedNewtonRaphsonStrategy class is analysed in some detail as an example of a SolverStrategy derived class.

### Constructor

    ResidualBasedNewtonRaphsonStrategy(
        ModelPart& model_part,
        typename TSchemeType::Pointer pScheme,
        typename TLinearSolver::Pointer pNewLinearSolver,
        typename TConvergenceCriteriaType::Pointer pNewConvergenceCriteria,
        int MaxIterations = 30,
        bool CalculateReactions = false,
        bool ReformDofSetAtEachStep = false,
        bool MoveMeshFlag = false
    ) : SolvingStrategy<TSparseSpace, TDenseSpace, TLinearSolver>(model_part, MoveMeshFlag)

Let us look at the different arguments:

- The first argument is the model_part, used as explained in the previous section.
- The second argument is a pointer to a Scheme instance. It defines the time integration scheme (e.g. Newmark).
- The next argument is a pointer to a LinearSolver instance, which defines the linear system solver (e.g. a Conjugate Gradient solver). In this particular case it is used for the solution of the linear system arising at every iteration of Newton-Raphson.
- The next argument is a pointer to a ConvergenceCriteria instance. It defines the convergence criterion for the Newton-Raphson procedure. It can be the norm of the residual or something else (e.g. the energy norm).
- The next argument is MaxIterations. It is the cut-off criterion for the iterative procedure. If convergence is not achieved within the allowed number of iterations, the solution terminates and the value of the variable of interest at the last iteration is taken as the result, though a message appears noting that the solution did not converge.
- The next parameter is CalculateReactions, which activates the CalculateOutputData method when set to true.
- ReformDofSetAtEachStep should be set to true if nodes or elements are erased or added during the solution of the problem.
- MoveMeshFlag should be set to true if a non-Eulerian approach is used (the mesh is moved).

The last two flags are therefore important when choosing between Eulerian and Lagrangian frameworks.

### Member Variables

Let us look at the member variables of the ResidualBasedNewtonRaphsonStrategy class:

    typename TSchemeType::Pointer mpScheme;
    typename TLinearSolver::Pointer mpLinearSolver;
    typename TBuilderAndSolverType::Pointer mpBuilderAndSolver;
    typename TConvergenceCriteriaType::Pointer mpConvergenceCriteria;
    TSystemVectorPointerType mpDx;
    TSystemVectorPointerType mpb;
    TSystemMatrixPointerType mpA;
    bool mSolutionStepIsInitialized;
    bool mInitializeWasPerformed;
    bool mCalculateReactionsFlag;
    bool mKeepSystemConstantDuringIterations;
    bool mReformDofSetAtEachStep;
    unsigned int mMaxIterationNumber;

The first four variables are pointers to structures that carry out a great part of the computations. These are instances of classes Scheme, LinearSolver, BuilderAndSolver and ConvergenceCriteria, the role of which has been briefly outlined in the previous section.

The next three variables are pointers to the system matrix K (mpA) and the vectors Δu (mpDx) and f (mpb). Their respective types are defined by the base class template arguments TSparseSpace and TDenseSpace, described with the base class' constructor, providing the desired flexibility in the selection of a corresponding LinearSolver.

The next three variables are flags indicating the status of the resolution process. They are used to control the internal workflow.

The rest of the variables are customization flags:

- mKeepSystemConstantDuringIterations indicates whether or not the system matrices are to be modified at each iteration (as they are in the full Newton-Raphson method). Setting it to true lowers the convergence rate but can result in an efficient method in some applications.
- mReformDofSetAtEachStep is set to true only when the connectivity changes at each time step (e.g. there is remeshing at each step). This operation involves requesting the DOF set from each element and rebuilding the system matrices at each time step, which is expensive. Therefore it should be used only when strictly necessary; otherwise this is only done at the beginning of the calculation.
- mMaxIterationNumber Its meaning has already been explained in the description of the class' constructor.
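The trade-off controlled by mKeepSystemConstantDuringIterations can be illustrated on a scalar toy problem. This is a sketch in plain Python, not the Kratos implementation; reusing the first tangent (modified Newton) typically needs more, but cheaper, iterations than rebuilding it every time (full Newton):

```python
# Effect of keeping the system constant during iterations, sketched on the
# scalar toy problem u^3 = 8 (exact solution u = 2).
def iterate(keep_system_constant, u0=1.5, tol=1e-10, max_iter=200):
    u = u0
    A = 3.0 * u0**2                  # tangent built once, at the start
    for n in range(1, max_iter + 1):
        if not keep_system_constant:
            A = 3.0 * u**2           # full Newton rebuilds the tangent
        du = (8.0 - u**3) / A        # solve A du = residual
        u += du
        if abs(du) < tol:
            return n                 # iterations needed to converge
    return max_iter

full = iterate(keep_system_constant=False)      # few, expensive iterations
modified = iterate(keep_system_constant=True)   # many, cheap iterations
```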

### Public Methods

Here we discuss in some detail the specific implementation of this derived class' public methods.

#### Predict

void Predict()

It calls the scheme's 'Predict' method (see How to Use Scheme), moving the mesh if needed:

    GetScheme()->Predict(BaseType::GetModelPart(), rDofSet, mA, mDx, mb);
    if (this->MoveMeshFlag() == true)
        BaseType::MoveMesh();

#### Solve

double Solve()

It contains the iterative loop of the Newton-Raphson method. The needed elemental matrices are calculated by a Scheme instance, and the system matrices are assembled by the BuilderAndSolver, which takes the Scheme as a parameter and can deal with the particular container structures and linear solver of the SolverStrategy because it too takes them as template arguments. The flow of operations is as follows:

##### Step 1

A first iteration is initiated by checking whether convergence is already achieved by the current state:

is_converged = mpConvergenceCriteria->PreCriteria(BaseType::GetModelPart(), rDofSet, mA, mDx, mb);

##### Step 2

If the base type member variable mRebuildLevel is greater than 1, or the stiffness matrix has not been built yet, the whole system is rebuilt:

if (BaseType::mRebuildLevel > 1 || BaseType::mStiffnessMatrixIsBuilt == false) pBuilderAndSolver->BuildAndSolve(pScheme, BaseType::GetModelPart(), mA, mDx, mb);

which performs one iteration, that is, it builds the system and solves for mDx.

For lower rebuild levels the matrix is kept and only the RHS is rebuilt before solving:

else pBuilderAndSolver->BuildRHSAndSolve(pScheme, BaseType::GetModelPart(), mA, mDx, mb);

##### Step 3

Next the problem variables are updated with the obtained results. This is performed by the scheme:

pScheme->FinalizeNonLinIteration(BaseType::GetModelPart(), mA, mDx, mb);

Additionally, the mesh is moved if needed:

if (BaseType::MoveMeshFlag() == true) BaseType::MoveMesh();

##### Step 4

Now the 'PostCriteria' convergence check is performed only if the 'PreCriteria' method in step 1 had returned 'true'. Otherwise the algorithm simply continues. This method may require updating the RHS:

    if (mpConvergenceCriteria->GetActualizeRHSflag() == true)
    {
        TSparseSpace::SetToZero(mb);
        pBuilderAndSolver->BuildRHS(pScheme, BaseType::GetModelPart(), mb);
    }

is_converged = mpConvergenceCriteria->PostCriteria(BaseType::GetModelPart(), rDofSet, mA, mDx, mb);

##### Step 5

The iterative loop is initiated:

while (is_converged == false && iteration_number++ < mMaxIterationNumber)

###### Step 5.1

Just like in Step 1, the 'pre' convergence criteria are assessed.

###### Step 5.2

Only if needed, Step 2 is repeated:

    if (BaseType::mRebuildLevel > 1 || BaseType::mStiffnessMatrixIsBuilt == false) // Step 2 is performed

###### Step 5.3

Step 3 is repeated

###### Step 5.4

Step 4 is repeated

##### Step 6

Once the loop is finished, reactions are calculated if required:

    if (mCalculateReactionsFlag == true)
    {
        pBuilderAndSolver->CalculateReactions(pScheme, BaseType::GetModelPart(), mA, mDx, mb);
    }

##### Step 7

Finally, the scheme's and the builder and solver's 'FinalizeSolutionStep' methods are called, as well as some other clearing methods if required.

#### Clear

void Clear()

It calls special methods defined in the base class template argument classes TSparseSpace and TDenseSpace to clear and resize the system matrix and vectors (mpA, mpDx and mpb) to 0. It also calls the builder and solver's and the scheme's respective 'Clear' methods, since they in turn also contain matrices. To make sure that the DOFs are recalculated, DofSetIsInitializedFlag is set to false.

#### IsConverged

bool IsConverged()

It calls the builder and solver's method 'BuildRHS' (see How to use Builder And Solver) if an updated RHS vector is needed for the particular ConvergenceCriteria class that is used.

    if (mpConvergenceCriteria->mActualizeRHSIsNeeded == true)
    {
        GetBuilderAndSolver()->BuildRHS(GetScheme(), BaseType::GetModelPart(), mb);
    }

Then it calls ConvergenceCriteria's 'PostCriteria' method, which applies the particular criteria to its input and returns its output (true or false):

return mpConvergenceCriteria->PostCriteria(BaseType::GetModelPart(), rDofSet, mA, mDx, mb);

#### CalculateOutputData

void CalculateOutputData()

It calls the scheme's corresponding method:

GetScheme()->CalculateOutputData(BaseType::GetModelPart(), rDofSet, mA, mDx, mb);

# Python interface

Kratos applications are usually designed to be used through a Python interface. Therefore, the objects described in this page are often created in a Python script that we refer to as the Strategy python (see How to construct the "solving strategies"). Similarly, the public methods of SolverStrategy will typically be called from the main script, which is usually also Python based (see Python Script Tutorial: Using Kratos Solvers).
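The typical call sequence of such a main script can be sketched with a stand-in strategy object; a real script would construct Kratos objects instead, and all names below are illustrative:

```python
# Typical call sequence of a main script driving a strategy (stand-in class
# that only records which public methods were called, and in what order).
class ToyStrategy:
    def __init__(self):
        self.log = []

    def Check(self):
        self.log.append("Check")
        return 0                        # 0 signals 'input is valid'

    def Predict(self):
        self.log.append("Predict")

    def Solve(self):
        self.log.append("Solve")
        return 0.0                      # norm of the last correction

    def Clear(self):
        self.log.append("Clear")

strategy = ToyStrategy()
strategy.Check()                        # validate the input once
for step in range(3):                   # time-stepping loop
    strategy.Predict()                  # guess the new solution
    strategy.Solve()                    # advance to the next time step
strategy.Clear()                        # free internal storage at the end
```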