# How to Use Solving Strategy


## Revision as of 10:34, 1 August 2013


# Introduction

The SolvingStrategy class has been conceived as an abstraction of the outermost structure of the numerical algorithm's operations in stationary problems or, in the context of transient problems, of those involved in the complete evolution of the system to the next time step. These operations are typically a sequence of build-and-solve steps, each consisting of:

- Building a system
- Solving it approximately with a certain tolerance

A SolvingStrategy instance combines simpler structures that are in turn abstractions of component (sub)algorithms. These structures can belong to any of the following classes: Scheme, LinearSolver, BuilderAndSolver, ConvergenceCriteria and even SolvingStrategy itself. The role of each of these will be clarified through examples in the following sections; understanding them is necessary to fully grasp the more complex SolvingStrategy. With a reasonable level of generality, the sequence of operations that is (sometimes trivially) performed by a SolvingStrategy instance is outlined in the sections below.

## Nonlinear Strategies

They are used for problems that produce systems of nonlinear equations of the form:

*K*(*u*)*u* = *f*(*u*)

where u is the vector of unknowns, K the LHS matrix and f(u) the RHS vector, both possibly depending on the solution u. In Kratos Multiphysics this problem is reformulated as:

*K*(*u*)(*u*_{0} + Δ*u*) = *f*(*u*)

and rearranged as:

*K*(*u*)Δ*u* = *f*(*u*) − *K*(*u*)*u*_{0}

where u_0 is an initial value or guess of the solution and Δu its correction. Because u is unknown, this system generally has to be replaced by another:

*K*'Δ*u*' = *f*' − *K*'*u*'_{0}

such that its solution Δu' is an approximation of Δu. An iterative strategy (e.g. Newton-Raphson) is generally applied to create a succession of approximate systems

*K*'_{n}Δ*u*'_{n} = *f*'_{n} − *K*'_{n}*u*'_{n} = *R*_{n}

*u*'_{n} = *u*'_{n − 1} + Δ*u*'_{n − 1}

and a convergence criterion is placed on the norm of Δu'_n, since it must tend to 0 as the residual R_n tends to 0 (so the original system is recovered in the limit).

## Linear Strategies

They are used in problems that are linear with respect to the unknowns.

# Object Description

SolvingStrategy is derived from Process and uses the same structure as shown in figure 8.9. All the system matrices and vectors of the systems to be solved are stored in the strategy, which makes it possible to deal with multiple LHSs and RHSs. Deriving SolvingStrategy from Process lets users combine strategies with other processes by composition in order to create a more complex Process. The strategy pattern used in this structure lets users implement a new strategy and add it to Kratos easily, which increases the extensibility of Kratos. It also lets them select one strategy in place of another in order to change the solving algorithm, which increases the flexibility of Kratos.

The composite pattern is used to let users combine different strategies into one. For example, a fractional step strategy can be implemented by combining the strategies used for each step into one composite strategy. As for Process, the interface for changing the children of the composite strategy is considered too sophisticated and has been removed from the strategy. A composite structure can therefore be constructed by providing all of its components at construction time and used thereafter, but without changing its sub-algorithms.

# Interface

The interface of SolvingStrategy reflects the general steps in usual finite element algorithms like prediction, solving, convergence control and calculating results. This design results in the following interface:

Predict: A method to predict the solution. If it is not called, a trivial predictor is used and the values of the solution step of interest are assumed equal to the old values.

Solve: This method implements the solving procedure: building the equation system by assembling local components, solving it using a given linear solver and updating the results.

IsConverged: A post-solution convergence check. It can be used, for example, in coupled problems to check whether the solution has converged.

CalculateOutputData: Calculates non-trivial results, like stresses in structural analysis.

# Hierarchical Position of the Solving Strategy in KRATOS

The solving strategy is the class that utilizes classes such as builder_and_solver and scheme as its slaves. These two terms will be described in the subsequent sections. Roughly speaking, the solving strategy provides the overall sequence of calls necessary for the solution of the problem. At the necessary point it calls the builder_and_solver (see the article on builder_and_solver for a detailed description), which assembles the global matrices (by gathering the elemental contributions) and (possibly) solves the resulting system. Flexibility at this step is provided in such a way that one can "plug in" any of the provided solvers, or write one's own, and then pass it as an argument to the strategy.

The scheme, on the other hand, is responsible for the time integration. It is also one of the arguments of the solving strategy. As we can see, the abstract and unified structure of the solving strategy enables one to truly customize the application by choosing and plugging in different building blocks.

Let us have a brief look at an example, residual_based_newton-raphson_strategy:

## Constructor

    ResidualBasedNewtonRaphsonStrategy(
        ModelPart& model_part,
        typename TSchemeType::Pointer pScheme,
        typename TLinearSolver::Pointer pNewLinearSolver,
        typename TConvergenceCriteriaType::Pointer pNewConvergenceCriteria,
        int MaxIterations = 30,
        bool CalculateReactions = false,
        bool ReformDofSetAtEachStep = false,
        bool MoveMeshFlag = false
    )

The first argument is the model_part, which is the union of elements thought of as belonging to one entity - what we call a model part. E.g. this could be structure_model_part and fluid_model_part in an FSI application.

pScheme - defines the time integration scheme, e.g. Newmark.

pNewLinearSolver - the solver that will (in this case) be used for the solution of the linear system arising at every iteration of Newton-Raphson. It could be, e.g., a Conjugate Gradient solver.

pNewConvergenceCriteria - the criterion for the Newton-Raphson procedure (in this case) to be considered converged. It could be a norm of the residual or something else, like the energy norm.

MaxIterations - a cut-off criterion for the Newton-Raphson procedure (in this case): if convergence is not achieved within the allowed number of iterations, the solution terminates, the value of the variable of interest at the last iteration is taken as the result, and a message appears stating that the solution did not converge.

The last two flags are important when choosing between Eulerian and Lagrangian frameworks: if we erase or add nodes or elements during the solution of the problem, we need to set ReformDofSetAtEachStep to true, and if a non-Eulerian approach is used, the mesh is also moved, so MoveMeshFlag should be set to true in that case.