New Implicit Solver interface with options to select Picard or Newton (JFNK) for the nonlinear solver. (ECP-WarpX#4736)

* created Source/FieldSolver/WarpXImplicitFieldsEM.cpp file to store functions used with the implicit solvers. Only has ComputeRHSE() and ComputeRHSB() functions thus far.

* OneStep_ImplicitPicard routine now uses ComputeRHSE and ComputeRHSB functions in place of EvolveE and EvolveB functions.

* created Source/Evolve/WarpXImplicitOps.cpp file to store common functions used by implicit solvers. Moved several functions from WarpXOneStepImplicitPicard.cpp here.

* added a NonlinearSolver enum. Options are picard and newton. Set from the input file using algo.nonlinear_solver = picard, for example. Default is picard. It is not used yet.
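
  As a rough sketch of the idea — assuming AMReX's ParmParse for input parsing; the function name and abort message below are illustrative, not WarpX's actual code:

  ```cpp
  // Illustrative only: one plausible shape for the enum and its input-file
  // parsing via AMReX's ParmParse. Names here are hypothetical.
  #include <AMReX.H>
  #include <AMReX_ParmParse.H>
  #include <string>

  enum class NonlinearSolverType { Picard, Newton };

  NonlinearSolverType ParseNonlinearSolver ()
  {
      amrex::ParmParse pp("algo");
      std::string solver_name = "picard";        // default is picard
      pp.query("nonlinear_solver", solver_name); // e.g. algo.nonlinear_solver = newton
      if (solver_name == "picard") { return NonlinearSolverType::Picard; }
      if (solver_name == "newton") { return NonlinearSolverType::Newton; }
      amrex::Abort("algo.nonlinear_solver must be picard or newton");
      return NonlinearSolverType::Picard; // unreachable
  }
  ```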

* changed EvolveScheme::ImplicitPicard and SemiImplicitPicard to ThetaImplicit and SemiImplicit, respectively. This affects parsing from input file. algo.evolve_scheme = implicit_picard ==> theta_implicit. Have not updated Docs yet.

* NonlinearSolver ==> NonlinearSolverType

* intermediate commit. Added an ImplicitSolversEM class. WarpX owns an instance of this class. OneStepImplicitEM is now done here. This class owns all things pertaining to the implicit integrator. Also added a NonlinearSolvers class, but it is not used yet.

* cleaning up ImplicitSolverEM::OneStep().

* more refactoring of ImplicitSolverEM class.

* WarpXFieldVec ==> WarpXSolverVec

* removed deprecated functions. WarpXSolverVec has zero ghost cells.

* ImplicitSolverEM::OneStep now looks exactly like Picard solver. Next step is to move it there.

* ImplicitSolverEM::OneStep() now uses the Picard solver object to solve the nonlinear equation.

* changed where implicit solver parameters are printed.

* refactoring of WarpXImplicitOps.cpp

* added NewtonSolver.H file. Doesn't work yet.

* adding more functionality to WarpXSolverVec class.

* added JacobianFunctionJFNK.H file in NonlinearSolvers/ folder. It contains all of the necessary functions required by the linear operator template parameter for AMReX_GMRES.

* dotMask object used for dot product from the linear function class now lives in the implicit solver class. This ensures that one and only one instance of this object will be defined. dotProduct and norm can now be called through the implicit solver class.

* moved temporary linear_function and GMRES testing lines out of Picard::Define() and into Newton::Define()

* intermediate commit. JFNK almost ready.

* small refactoring of PreRHSOp() and PostUpdateState() functions.

* cleaning things up.

* Newton solver runs. GMRES runs. Next step is to do Particle-suppressed (PS) part.

* minor clean up.

* fixed typo in convergence message for Newton and Picard solvers.

* changed how PostUpdateState() is used in implicit solver. Now parsing Picard particle solver parameters in ImplicitSolverEM class. Using a new formula for the epsilon used in the finite-difference Jacobian action calculation that is suitable for large absolute norms of the solution vector.
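
  The finite-difference Jacobian action at the heart of JFNK approximates J(U)v ≈ (F(U + εv) − F(U))/ε. The sketch below shows one common ε choice that stays well-scaled when the solution vector has a large absolute norm; it is illustrative and not necessarily the exact formula adopted in this commit:

  ```cpp
  // Matrix-free Jacobian action J(U)*v ~ (F(U + eps*v) - F(U)) / eps.
  // The eps below scales with (1 + |U|) so that large-amplitude solution
  // vectors (fields in physical units) do not drown out the perturbation.
  #include <cmath>
  #include <cstddef>
  #include <functional>
  #include <limits>
  #include <vector>

  using Vec = std::vector<double>;

  double norm2 (const Vec& v) {
      double s = 0.0;
      for (double x : v) { s += x*x; }
      return std::sqrt(s);
  }

  Vec jacobian_action (const std::function<Vec(const Vec&)>& F,
                       const Vec& U, const Vec& FU, const Vec& v)
  {
      const double sqrt_eps = std::sqrt(std::numeric_limits<double>::epsilon());
      const double vnorm = norm2(v);
      if (vnorm == 0.0) { return Vec(U.size(), 0.0); }
      const double eps = sqrt_eps * (1.0 + norm2(U)) / vnorm;
      Vec Up(U.size());
      for (std::size_t i = 0; i < U.size(); ++i) { Up[i] = U[i] + eps*v[i]; }
      const Vec FUp = F(Up);                      // one extra residual evaluation
      Vec Jv(U.size());
      for (std::size_t i = 0; i < U.size(); ++i) { Jv[i] = (FUp[i] - FU[i]) / eps; }
      return Jv;
  }
  ```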

* Picard method for particles is now being used. PS-JFNK works.

* moved WarpXImplicitOps.cpp from Source/Evolve/ to Source/FieldSolver/ImplicitSolvers/

* minor cleanup. PostUpdateState() ==> UpdateWarpXState().

* Moved the particle convergence check into its own function.

* added increment function to WarpXSolverVec class.

* removed some commented out lines in JacobianFunctionJFNK.H

* removed a_tol condition for print message when maximum iterations are reached in nonlinear solvers. Newton solver iteration count is zero-based now.

* cleaned up Picard method for the self-consistent particle update. Added an ablastr warning message for particles that don't converge after the maximum number of iterations.
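
  A minimal sketch of such a Picard particle iteration with a non-convergence warning — non-relativistic, one particle, with an assumed field function; WarpX's actual pusher and the ablastr warning machinery are not shown:

  ```cpp
  // Picard iteration for an implicit particle update: x^{n+1/2} depends on
  // v^{n+1/2}, which depends on the field gathered at x^{n+1/2}.
  #include <algorithm>
  #include <cmath>
  #include <cstdio>
  #include <functional>

  void picard_particle_update (double& x, double& v, double dt, double qm,
                               const std::function<double(double)>& Efield,
                               int max_iters, double rtol)
  {
      const double x_n = x, v_n = v;
      double x_half = x_n;
      bool converged = false;
      for (int k = 0; k < max_iters; ++k) {
          const double v_half = v_n + 0.5*dt*qm*Efield(x_half);
          const double x_new  = x_n + 0.5*dt*v_half;
          const double err    = std::abs(x_new - x_half);
          x_half = x_new;
          if (err <= rtol*std::max(std::abs(x_half), 1.0)) { converged = true; break; }
      }
      if (!converged) {
          std::printf("warning: particle did not converge in %d Picard iterations\n",
                      max_iters);
      }
      v = v_n + dt*qm*Efield(x_half); // v^{n+1} = 2 v^{n+1/2} - v^n
      x = 2.0*x_half - x_n;           // x^{n+1} = 2 x^{n+1/2} - x^n
  }
  ```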

* fixed small bug in PicardSolver related to absolute tolerance check for convergence. Default absolute tolerance values are set to zero.

* the mask used to compute the dot product of a WarpXSolverVec is now owned by the WarpXSolverVec class rather than the time solver class. It is a static member to avoid multiple definitions.

* defined accessors for Efield_fp and Bfield_fp Vectors owned by WarpX. ImplicitSolver is no longer a friend class of WarpX. Small tidy for PicardSolver.H.

* SemiImplicitEM and ThetaImplicitEM are now their own independent derived classes from the base ImplicitSolver class.

* added algorithm descriptions to the top of the SemiImplicitEM.cpp and ThetaImplicitEM.cpp files.

* updated appropriate files in Examples and Regression folders to reflect new changes.

* updating docs.

* JacobianFunctionJFNK.H ==> JacobianFunctionMF.H

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* need to call clear on the static vector m_dotMask owned by the WarpXSolverVec class to avoid `malloc_consolidate(): unaligned fastbin chunk detected` error messages at simulation finish.
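
  A hedged sketch of the fix, using the amrex::ExecOnFinalize hook referenced later in this list to clear the static member before AMReX tears down its allocators; class and member names are illustrative:

  ```cpp
  // A static member holding AMReX objects must be cleared before
  // amrex::Finalize() destroys the memory arenas, otherwise the frees
  // run against a dead allocator and glibc reports heap corruption.
  #include <AMReX.H>
  #include <AMReX_Vector.H>
  #include <AMReX_iMultiFab.H>

  class WarpXSolverVecSketch
  {
  public:
      static void clearDotMask () { m_dot_mask.clear(); }

      static void Define ()
      {
          static bool registered = false;
          if (!registered) {
              // Run the cleanup just before amrex::Finalize() completes.
              amrex::ExecOnFinalize(WarpXSolverVecSketch::clearDotMask);
              registered = true;
          }
      }

  private:
      static inline amrex::Vector<amrex::iMultiFab> m_dot_mask;
  };
  ```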

* moved WarpXSolverVec class from ../ImplicitSolvers/Utils/ to ../ImplicitSolvers/. Utils directory is deleted.

* ImplicitPushXP: GPU Support for Convergence Test

* cleaning up.

* Atomic: Fix type of `1`

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* bug fix.

* Update Source/FieldSolver/ImplicitSolvers/WarpXSolverVec.H

Co-authored-by: Revathi  Jambunathan <[email protected]>

* Remove Zero

* adding Copyright lines to top of files. Removed commented code.

* Prevent calling the implicit solver if not initialized

* More robust if conditions

* set implicit verbose to warpx verbose

* Update Source/FieldSolver/ImplicitSolvers/ThetaImplicitEM.cpp

* Simplify call to updates of E

* changed benchmarks_json file names as needed.

* using warpx.verbose

* clang-tidying

* changed header names.

* clang-tidying

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* more clang-tidying

* clang tidy again

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* UpdateElectricField ==> SetElectricFieldAndApplyBCs

* clang tidy

* more clang tidy

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* prohibiting copy and move constructors for solver classes.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fixed bug in move constructor in WarpXSolverVec.

* slight refactoring of ThetaImplicitEM class.

* reducing divisions in Picard method for particles.

* small cosmetic changes to implicit solvers.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* removed commented out code.

* updating Docs and adding briefs.

* Fix HIP compilation

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix const

* fixed indent. updated comments.

* New Python test: Particle-Boundary interaction  (ECP-WarpX#4729)

* enable the diagnostic of ParticleScraping in Python

* Update picmi.py

* Update picmi.py

* new test

* python update

* modification of the script

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update PICMI_inputs_rz.py

* json

* update

* Update PICMI_inputs_rz.py

* Update particle_boundary_interaction.json

* Update PICMI_inputs_rz.py

* Update PICMI_inputs_rz.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update PICMI_inputs_rz.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix hanging script in parallel

* Make the test executable

* Update analysis script

* Update particle_containers.py

* Update PICMI_inputs_rz.py

* Update analysis.py

* Update analysis.py

* Update particle_containers.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Remi Lehe <[email protected]>
Co-authored-by: Axel Huebl <[email protected]>

* Adding normal components to regular boundary buffer (ECP-WarpX#4742)

* first draft

* adding normal only

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update ParticleBoundaryBuffer.cpp

* Update ParticleBoundaryBuffer.cpp

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Add function to set domain boundary potentials from Python (ECP-WarpX#4740)

* add function to set domain boundary potentials from Python

* switch function arguments to form `potential_[lo/hi]_[x/y/z]` and add to docs

* clean up `ablastr/fields` (ECP-WarpX#4753)

* move PoissonInterpCPtoFP to Interpolate.H

* concatenate nested namespaces

* Split clang-tidy CI test into 4 to improve performance (ECP-WarpX#4747)

* split clang-tidy checks to improve performance

* rename folders and tests

* fix concurrency

* Simplify

---------

Co-authored-by: Axel Huebl <[email protected]>

* Replace links to learn git (ECP-WarpX#4758)

* Replace links to learn git

* Bugfix in `fields.py` for GPU run without `cupy` (ECP-WarpX#4750)

* Bugfix in `fields.py` for GPU run without `cupy`

* apply suggestion from code review

* Release 24.03 (ECP-WarpX#4759)

* AMReX: 24.03

* pyAMReX: 24.03

* WarpX: 24.03

* Implement stair-case Yee solver with EB in RZ geometry (ECP-WarpX#2707)

* Allow compilation with RZ EB

* Do not push cells for RZ Yee solver, when covered with EB

* Fix compilation errors

* Fix additional compilation errors

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix additional compilation errors

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add automated test

* Add automated test

* Fix path in tests

* Enable parser in RZ

* Update example script

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Clean-up PR

* Initialize EB quantities

* Modified EM field initialization in 2D with EB

* Typo fix

* Typo fix

* Ignoring unused variables correctly

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Correct condition for updating E

* Correct update of B

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update B push

* Update input script

* Revert "Update input script"

This reverts commit 5087485.

* Update initialization

* Updated test

* Move test to a different folder

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Add test for WarpX-test.ini

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix path for tests

* Update test description

* Update test metadata

* Add checksum file

* Revert changes

* Revert changes

* Change lx to lr

* Revert "Change lx to lr"

This reverts commit be3039a.

* Change lx to lr

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: lgiacome <[email protected]>

* AMReX/pyAMReX/PICSAR: Weekly Update (ECP-WarpX#4763)

* AMReX: Weekly Update

* pyAMReX: Weekly Update

* clean up (ECP-WarpX#4761)

* Fix compilation

* updating some function names to contain Implicit.

* fixed bug that caused segfault on GMRES restart.

* parsing GMRES restart length from input file.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update field accessor

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* removed +, -, and * operators from WarpXSolverVec class. These operators encourage inefficient vector arithmetic.

* fix/workaround to field accessor issue.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* changing implicit-scheme related header files in WarpX.H and WarpX.cpp

* WarpX::max_particle_iterations ==> WarpX::max_particle_its_in_implicit_scheme and WarpX::particle_tolerance ==> WarpX::particle_tol_in_implicit_scheme

* updating docs.

* updating docs again.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* adding comments about the template parameters Vec and Ops used in the NonlinearSolver class.

* adding comments. addressing comments from PR.

* Ensure that laser particles can converge in the implicit solver

* Add braces to make clang-tidy happy

* moving nonlinear solver parameters to base ImplicitSolver class

* mirrors not to be used with implicit schemes.

* moved more to base implicit solver class. adding comments.

* removed some WarpXSolverVec functions. Updated comments.

* clang tidy complains when removing default copy constructor in WarpXSolverVec.H

* amrex::ExecOnFinalize(WarpXSolverVec::clearDotMask)

* WarpXSolverVec (const WarpXSolverVec&) = delete

* updating briefs for nonlinear solvers.

* adding loop over levels.

* static cast amrex::Vector.size() to int

* updating docs for nonlinear solvers.

* adding gmres.restart_length to docs.

* fixed typos in docs.

* Removed PreRHSOp() call from nonlinear solvers.

* clang tidy.

* Prohibit = operator for WarpXSolverVec. Using Copy() instead.
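
  The pattern, as a minimal illustration with hypothetical names: deleting operator= makes every deep copy explicit through Copy(), so an accidental `a = b;` becomes a compile error.

  ```cpp
  // Forbid assignment so deep copies are always deliberate and visible.
  #include <vector>

  class SolverVecSketch
  {
  public:
      SolverVecSketch () = default;
      SolverVecSketch (const SolverVecSketch&) = delete;
      SolverVecSketch& operator= (const SolverVecSketch&) = delete;

      void Copy (const SolverVecSketch& src) { m_data = src.m_data; } // explicit deep copy

  private:
      std::vector<double> m_data;
  };
  ```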

* Document PICMI function `LoadInitialField`

* updating comments in WarpXImplicitOps.cpp

* moved static member m_dotMask definition to the header file with inline added to the declaration.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fixed indent.

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Axel Huebl <[email protected]>
Co-authored-by: Debojyoti Ghosh <[email protected]>
Co-authored-by: Revathi  Jambunathan <[email protected]>
Co-authored-by: Remi Lehe <[email protected]>
Co-authored-by: Weiqun Zhang <[email protected]>
Co-authored-by: Eya D <[email protected]>
Co-authored-by: Roelof Groenewald <[email protected]>
Co-authored-by: Arianna Formenti <[email protected]>
Co-authored-by: Luca Fedeli <[email protected]>
Co-authored-by: lgiacome <[email protected]>
12 people authored and Haavaan committed Jun 5, 2024
1 parent 4231680 commit 4143ea0
Showing 43 changed files with 2,717 additions and 582 deletions.
145 changes: 115 additions & 30 deletions Docs/source/usage/parameters.rst
@@ -86,37 +86,122 @@ Overall simulation parameters

* ``explicit``: Use an explicit solver, such as the standard FDTD or PSATD

* ``implicit_picard``: Use an implicit solver with exact energy conservation that uses a Picard iteration to solve the system.
Note that this method is for demonstration only. It is inefficient and does not work well when
:math:`\omega_{pe} \Delta t` is close to or greater than one.
The method is described in `Angus et al., On numerical energy conservation for an implicit particle-in-cell method coupled with a binary Monte-Carlo algorithm for Coulomb collisions <https://doi.org/10.1016/j.jcp.2022.111030>`__.
The version implemented is an updated version that is relativistically correct, including the relativistic gamma factor for the particles.
For exact energy conservation, ``algo.current_deposition = direct`` must be used with ``interpolation.galerkin_scheme = 0``,
and ``algo.current_deposition = Esirkepov`` must be used with ``interpolation.galerkin_scheme = 1`` (which is the default, in
which case charge will also be conserved).

* ``semi_implicit_picard``: Use an energy conserving semi-implicit solver that uses a Picard iteration to solve the system.
Note that this method has the CFL limitation :math:`\Delta t < 1/\left(c\sqrt{\sum_i 1/\Delta x_i^2}\right)`. It is inefficient and does not work well or at all when :math:`\omega_{pe} \Delta t` is close to or greater than one.
* ``theta_implicit_em``: Use a fully implicit electromagnetic solver with a time-biasing parameter theta bound between 0.5 and 1.0. Exact energy conservation is achieved using theta = 0.5. Maximal damping of high-k modes is obtained using theta = 1.0. Choices for the nonlinear solver include a Picard iteration scheme and particle-suppressed (PS) JFNK.
The algorithm itself is numerically stable for large time steps. That is, it does not require time steps that resolve the plasma period or the CFL condition for light waves. However, the practicality of using a large time step depends on the nonlinear solver. Note that the Picard solver is for demonstration only. It is inefficient and will most likely not converge when
:math:`\omega_{pe} \Delta t` is close to or greater than one or when the CFL condition for light waves is violated. The PS-JFNK method must be used in order to use large time steps. However, the current implementation of PS-JFNK is still inefficient because the JFNK solver is not preconditioned and there is no use of the mass matrices to minimize the cost of a linear iteration. The time step is limited by how many cells a particle can cross in a time step (MPI-related) and by the need to resolve the relevant physics.
The Picard method is described in `Angus et al., On numerical energy conservation for an implicit particle-in-cell method coupled with a binary Monte-Carlo algorithm for Coulomb collisions <https://doi.org/10.1016/j.jcp.2022.111030>`__.
The PS-JFNK method is described in `Angus et al., An implicit particle code with exact energy and charge conservation for electromagnetic studies of dense plasmas <https://doi.org/10.1016/j.jcp.2023.112383>`__ . (The version implemented in WarpX is an updated version that includes the relativistic gamma factor for the particles.) Also see `Chen et al., An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm. <https://doi.org/10.1016/j.jcp.2011.05.031>`__ .
Exact energy conservation requires that the interpolation stencil used for the field gather match that used for the current deposition. ``algo.current_deposition = direct`` must be used with ``interpolation.galerkin_scheme = 0``, and ``algo.current_deposition = Esirkepov`` must be used with ``interpolation.galerkin_scheme = 1``. If using ``algo.current_deposition = villasenor``, the corresponding field gather routine will automatically be selected and the ``interpolation.galerkin_scheme`` flag does not need to be specified. The Esirkepov and Villasenor deposition schemes are charge-conserving.

* ``semi_implicit_em``: Use an approximately energy conserving semi-implicit electromagnetic solver. Choices for the nonlinear solver include a Picard iteration scheme and particle-suppressed JFNK.
Note that this method has the CFL limitation :math:`\Delta t < 1/\left(c\sqrt{\sum_i 1/\Delta x_i^2}\right)`. The Picard solver for this method can only be expected to work well when :math:`\omega_{pe} \Delta t` is less than one.
The method is described in `Chen et al., A semi-implicit, energy- and charge-conserving particle-in-cell algorithm for the relativistic Vlasov-Maxwell equations <https://doi.org/10.1016/j.jcp.2020.109228>`__.
For energy conservation, ``algo.current_deposition = direct`` must be used with ``interpolation.galerkin_scheme = 0``,
and ``algo.current_deposition = Esirkepov`` must be used with ``interpolation.galerkin_scheme = 1`` (which is the default, in
which case charge will also be conserved).

* ``algo.max_picard_iterations`` (`integer`, default: 10)
When `algo.evolve_scheme` is either `implicit_picard` or `semi_implicit_picard`, this sets the maximum number of Picard
iterations that are done each time step.

* ``algo.picard_iteration_tolerance`` (`float`, default: 1.e-7)
When `algo.evolve_scheme` is either `implicit_picard` or `semi_implicit_picard`, this sets the convergence tolerance of
the iterations, the maximum of the relative change of the L2 norm of the field from one iteration to the next.
If this is set to zero, the maximum number of iterations will always be done with the change only calculated on the last
iteration (for a slight optimization).

* ``algo.require_picard_convergence`` (`bool`, default: 1)
When `algo.evolve_scheme` is either `implicit_picard` or `semi_implicit_picard`, this sets whether the iteration each step
is required to converge.
If it is required, an abort is raised if it does not converge and the code then exits.
If not, then a warning is issued and the calculation continues.
Exact energy conservation requires that the interpolation stencil used for the field gather match that used for the current deposition. ``algo.current_deposition = direct`` must be used with ``interpolation.galerkin_scheme = 0``, and ``algo.current_deposition = Esirkepov`` must be used with ``interpolation.galerkin_scheme = 1``. If using ``algo.current_deposition = villasenor``, the corresponding field gather routine will automatically be selected and the ``interpolation.galerkin_scheme`` flag does not need to be specified. The Esirkepov and Villasenor deposition schemes are charge-conserving.

* ``implicit_evolve.theta`` (`float`, default: 0.5)
When `algo.evolve_scheme = theta_implicit_em`, the fields used on the RHS of the equations for the implicit advance
are computed as (1-theta)*E_{n} + theta*E_{n+1}. theta is bound between 0.5 and 1. The default value of theta = 0.5
is needed for exact energy conservation. For theta > 0.5, high-k modes will be damped and the method will not be
exactly energy conserving, but the solver may perform better.
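
  In equation form, the description above corresponds to a standard theta scheme (illustrative notation, not WarpX's exact discretization):

  ```latex
  \[
    E^{\theta} = (1-\theta)\,E^{n} + \theta\,E^{n+1},
    \qquad
    \frac{E^{n+1} - E^{n}}{\Delta t} = F\!\left(E^{\theta}\right),
    \qquad
    \tfrac{1}{2} \le \theta \le 1 .
  \]
  % theta = 1/2 gives the energy-conserving (Crank--Nicolson-like) update;
  % theta > 1/2 damps high-k modes at the cost of exact energy conservation.
  ```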

* ``implicit_evolve.nonlinear_solver`` (`string`, default: None)
When `algo.evolve_scheme` is either `theta_implicit_em` or `semi_implicit_em`, this sets the nonlinear solver used
to advance the field-particle system in time. Options are `picard` or `newton`.

* ``implicit_evolve.max_particle_iterations`` (`integer`, default: 21)
When `algo.evolve_scheme` is either `theta_implicit_em` or `semi_implicit_em` and `implicit_evolve.nonlinear_solver = newton`,
this sets the maximum number of iterations for the method used to obtain a self-consistent update of the particles at
each iteration in the JFNK process.

* ``implicit_evolve.particle_tolerance`` (`float`, default: 1.e-10)
When `algo.evolve_scheme` is either `theta_implicit_em` or `semi_implicit_em` and `implicit_evolve.nonlinear_solver = newton`,
this sets the relative tolerance for the iterative method used to obtain a self-consistent update of the particles at
each iteration in the JFNK process.

* ``picard.verbose`` (`bool`, default: 1)
When `implicit_evolve.nonlinear_solver = picard`, this sets the verbosity of the Picard solver. If true, then information
on the nonlinear error is printed to screen at each nonlinear iteration.

* ``picard.require_convergence`` (`bool`, default: 1)
When `implicit_evolve.nonlinear_solver = picard`, this sets whether the Picard method is required to converge at each
time step. If it is required, an abort is raised if it does not converge and the code then exits. If not, then a warning
is issued and the calculation continues.

* ``picard.max_iterations`` (`int`, default: 100)
When `implicit_evolve.nonlinear_solver = picard`, this sets the maximum iterations used by the Picard method. If
`picard.require_convergence = false`, the solution is considered converged if the iteration count reaches this value,
but a warning is issued. If `picard.require_convergence = true`, then an abort is raised if the iteration count reaches
this value.

* ``picard.relative_tolerance`` (`float`, default: 1.0e-6)
When `implicit_evolve.nonlinear_solver = picard`, this sets the relative tolerance used by the Picard method for determining
convergence. The absolute error for the Picard method is the L2 norm of the difference of the solution vector between
two successive iterations. The relative error is the absolute error after iteration k > 1 divided by the absolute error
after the first iteration. The Picard method is considered converged when the relative error is below the relative tolerance.
This is the preferred means of determining convergence.
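
  A minimal sketch of this convergence test (illustrative helper, not WarpX's code):

  ```cpp
  // Picard convergence test as described above: abs_err_k = ||U_k - U_{k-1}||_2,
  // and the relative error divides by the absolute error of the first iteration.
  bool picard_converged (double abs_err_k, double abs_err_1,
                         double rtol, double atol)
  {
      const double rel_err = (abs_err_1 > 0.0) ? (abs_err_k / abs_err_1) : 0.0;
      return (rel_err < rtol) || (abs_err_k < atol);
  }
  ```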

* ``picard.absolute_tolerance`` (`float`, default: 0.0)
When `implicit_evolve.nonlinear_solver = picard`, this sets the absolute tolerance used by the Picard method for determining
convergence. The default value is 0.0, which means that the absolute tolerance is not used to determine convergence. The
solution vectors in the nonlinear solvers are in physical units rather than normalized ones. Thus, the absolute scale
of the problem can vary over many orders of magnitude depending on the problem. The relative tolerance is the preferred
means of determining convergence.

* ``newton.verbose`` (`bool`, default: 1)
When `implicit_evolve.nonlinear_solver = newton`, this sets the verbosity of the Newton solver. If true, then information
on the nonlinear error is printed to screen at each nonlinear iteration.

* ``newton.require_convergence`` (`bool`, default: 1)
When `implicit_evolve.nonlinear_solver = newton`, this sets whether the Newton method is required to converge at each
time step. If it is required, an abort is raised if it does not converge and the code then exits. If not, then a warning
is issued and the calculation continues.

* ``newton.max_iterations`` (`int`, default: 100)
When `implicit_evolve.nonlinear_solver = newton`, this sets the maximum iterations used by the Newton method. If
`newton.require_convergence = false`, the solution is considered converged if the iteration count reaches this value,
but a warning is issued. If `newton.require_convergence = true`, then an abort is raised if the iteration count reaches
this value.

* ``newton.relative_tolerance`` (`float`, default: 1.0e-6)
When `implicit_evolve.nonlinear_solver = newton`, this sets the relative tolerance used by the Newton method for determining
convergence. The absolute error for the Newton method is the L2 norm of the residual vector. The relative error is the
absolute error divided by the L2 norm of the initial residual associated with the initial guess. The Newton method is
considered converged when the relative error is below the relative tolerance. This is the preferred means of determining
convergence.
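
  In equation form (illustrative notation), the Newton iteration is declared converged when

  ```latex
  \[
    \frac{\lVert F(U^{(k)}) \rVert_2}{\lVert F(U^{(0)}) \rVert_2} < \varepsilon_{\mathrm{rel}}
    \quad\text{or}\quad
    \lVert F(U^{(k)}) \rVert_2 < \varepsilon_{\mathrm{abs}},
  \]
  % where F is the nonlinear residual and U^{(0)} is the initial guess.
  ```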

* ``newton.absolute_tolerance`` (`float`, default: 0.0)
When `implicit_evolve.nonlinear_solver = newton`, this sets the absolute tolerance used by the Newton method for determining
convergence. The default value is 0.0, which means that the absolute tolerance is not used to determine convergence. The
residual vectors in the nonlinear solvers are in physical units rather than normalized ones. Thus, the absolute scale
of the problem can vary over many orders of magnitude depending on the problem. The relative tolerance is the preferred
means of determining convergence.

* ``gmres.verbose_int`` (`int`, default: 2)
When `implicit_evolve.nonlinear_solver = newton`, this sets the verbosity of the AMReX::GMRES linear solver. The default
value of 2 gives maximal verbosity, and information about the residual is printed to the screen at each GMRES iteration.

* ``gmres.restart_length`` (`int`, default: 30)
When `implicit_evolve.nonlinear_solver = newton`, this sets the iteration number at which to do a restart in AMReX::GMRES.
This parameter is used to save memory on building the Krylov subspace basis vectors for linear systems that are ill-conditioned
and require many iterations to converge.

* ``gmres.relative_tolerance`` (`float`, default: 1.0e-4)
When `implicit_evolve.nonlinear_solver = newton`, this sets the relative tolerance used to determine convergence of the
AMReX::GMRES linear solver used to compute the Newton step in the JFNK process. The absolute error is the L2 norm of the
residual vector. The relative error is the absolute error divided by the L2 norm of the initial residual (typically equal
to the norm of the nonlinear residual from the end of the previous Newton iteration). The linear solver is considered
converged when the relative error is below the relative tolerance. This is the preferred means of determining convergence.

* ``gmres.absolute_tolerance`` (`float`, default: 0.0)
When `implicit_evolve.nonlinear_solver = newton`, this sets the absolute tolerance used to determine convergence of the
GMRES linear solver used to compute the Newton step in the JFNK process. The default value is 0.0, which means that the
absolute tolerance is not used to determine convergence. The residual vectors in the nonlinear/linear solvers are in physical
units rather than normalized ones. Thus, the absolute scale of the problem can vary over many orders of magnitude depending
on the problem. The relative tolerance is the preferred means of determining convergence.

* ``gmres.max_iterations`` (`int`, default: 1000)
When `implicit_evolve.nonlinear_solver = newton`, this sets the maximum iterations used by the GMRES linear solver. The
solution to the linear system is considered converged if the iteration count reaches this value.

* ``warpx.do_electrostatic`` (`string`) optional (default `none`)
Specifies the electrostatic mode. When turned on, instead of updating
2 changes: 1 addition & 1 deletion Examples/Tests/Implicit/analysis_1d.py
@@ -31,7 +31,7 @@

if re.match('SemiImplicitPicard_1d', fn):
tolerance_rel = 2.5e-5
elif re.match('ImplicitPicard_1d', fn):
elif re.match('ThetaImplicitPicard_1d', fn):
# This case should have near machine precision conservation of energy
tolerance_rel = 1.e-14

14 changes: 11 additions & 3 deletions Examples/Tests/Implicit/inputs_1d
@@ -31,10 +31,18 @@ boundary.particle_hi = periodic
############ NUMERICS ###########
#################################

warpx.verbose = 1
warpx.const_dt = dt
algo.evolve_scheme = implicit_picard
algo.max_picard_iterations = 31
algo.picard_iteration_tolerance = 0.
algo.evolve_scheme = theta_implicit_em

implicit_evolve.nonlinear_solver = "picard"

picard.verbose = true
picard.max_iterations = 31
picard.relative_tolerance = 0.0
picard.absolute_tolerance = 0.0
picard.require_convergence = false

algo.current_deposition = esirkepov
algo.field_gathering = energy-conserving
algo.particle_shape = 2
14 changes: 11 additions & 3 deletions Examples/Tests/Implicit/inputs_1d_semiimplicit
@@ -31,10 +31,18 @@ boundary.particle_hi = periodic
############ NUMERICS ###########
#################################

warpx.verbose = 1
warpx.const_dt = dt
algo.evolve_scheme = semi_implicit_picard
algo.max_picard_iterations = 5
algo.picard_iteration_tolerance = 0.
algo.evolve_scheme = semi_implicit_em

implicit_evolve.nonlinear_solver = "picard"

picard.verbose = true
picard.max_iterations = 5
picard.relative_tolerance = 0.0
picard.absolute_tolerance = 0.0
picard.require_convergence = false

algo.current_deposition = esirkepov
algo.field_gathering = energy-conserving
algo.particle_shape = 2
Examples/Tests/Implicit/inputs_vandb_jfnk_2d
@@ -38,10 +38,32 @@ warpx.const_dt = dt
warpx.use_filter = 0

algo.maxwell_solver = Yee
algo.evolve_scheme = "implicit_picard"
algo.require_picard_convergence = 0
algo.max_picard_iterations = 25
algo.picard_iteration_tolerance = 0.0 #1.0e-12
algo.evolve_scheme = "theta_implicit_em"
#algo.evolve_scheme = "semi_implicit_em"

implicit_evolve.theta = 0.5
implicit_evolve.max_particle_iterations = 21
implicit_evolve.particle_tolerance = 1.0e-12

#implicit_evolve.nonlinear_solver = "picard"
#picard.verbose = true
#picard.max_iterations = 25
#picard.relative_tolerance = 0.0 #1.0e-12
#picard.absolute_tolerance = 0.0 #1.0e-24
#picard.require_convergence = false

implicit_evolve.nonlinear_solver = "newton"
newton.verbose = true
newton.max_iterations = 20
newton.relative_tolerance = 1.0e-12
newton.absolute_tolerance = 0.0
newton.require_convergence = false

gmres.verbose_int = 2
gmres.max_iterations = 1000
gmres.relative_tolerance = 1.0e-8
gmres.absolute_tolerance = 0.0

algo.particle_pusher = "boris"
#algo.particle_pusher = "higuera"

8 changes: 4 additions & 4 deletions Regression/WarpX-tests.ini
@@ -4666,7 +4666,7 @@ particleTypes = electrons
outputFile = Point_of_contact_EB_rz_plt
analysisRoutine = Examples/Tests/point_of_contact_EB/analysis.py

[ImplicitPicard_1d]
[ThetaImplicitPicard_1d]
buildDir = .
inputFile = Examples/Tests/Implicit/inputs_1d
runtime_params = warpx.abort_on_warning_threshold=high
@@ -4683,9 +4683,9 @@ doVis = 0
compareParticles = 1
analysisRoutine = Examples/Tests/Implicit/analysis_1d.py

[ImplicitPicard_VandB_2d]
[ThetaImplicitJFNK_VandB_2d]
buildDir = .
inputFile = Examples/Tests/Implicit/inputs_vandb_2d
inputFile = Examples/Tests/Implicit/inputs_vandb_jfnk_2d
runtime_params = warpx.abort_on_warning_threshold=high
dim = 2
addToCompileString =
@@ -4698,7 +4698,7 @@ numthreads = 1
compileTest = 0
doVis = 0
compareParticles = 1
analysisRoutine = Examples/Tests/Implicit/analysis_vandb_2d.py
analysisRoutine = Examples/Tests/Implicit/analysis_vandb_jfnk_2d.py

[SemiImplicitPicard_1d]
buildDir = .
1 change: 0 additions & 1 deletion Source/Evolve/CMakeLists.txt
@@ -4,6 +4,5 @@ foreach(D IN LISTS WarpX_DIMS)
PRIVATE
WarpXEvolve.cpp
WarpXComputeDt.cpp
WarpXOneStepImplicitPicard.cpp
)
endforeach()
1 change: 0 additions & 1 deletion Source/Evolve/Make.package
@@ -1,5 +1,4 @@
CEXE_sources += WarpXEvolve.cpp
CEXE_sources += WarpXComputeDt.cpp
CEXE_sources += WarpXOneStepImplicitPicard.cpp

VPATH_LOCATIONS += $(WARPX_HOME)/Source/Evolve
6 changes: 2 additions & 4 deletions Source/Evolve/WarpXEvolve.cpp
@@ -125,10 +125,8 @@ WarpX::Evolve (int numsteps)

ExecutePythonCallback("particleinjection");

// TODO: move out
if (evolve_scheme == EvolveScheme::ImplicitPicard ||
evolve_scheme == EvolveScheme::SemiImplicitPicard) {
OneStep_ImplicitPicard(cur_time);
if (m_implicit_solver) {
m_implicit_solver->OneStep(cur_time, dt[0], step);
}
else if ( electromagnetic_solver_id == ElectromagneticSolverAlgo::None ||
electromagnetic_solver_id == ElectromagneticSolverAlgo::HybridPIC )