Contents: Quick Start | Parameter Format | Common parameters | General parameters | Smoothed aggregation parameters | Geometric multigrid parameters | Output parameters

A sample input deck - with annotations (for cutting and pasting):

-ksp_max_it 50
-ksp_rtol 1.e-3
-ksp_type cg
-pc_type mg
-aggmg_smooths 1
# -aggmg_factor 200
# -mg_levels_pc_type asm          # for hard problems
# -mg_levels_pc_type gs           # Gauss-Seidel - and use '-pc_symmetric'
# -aggmg_use_aggragate_blocks     # way to pick blocks for 'asm' smoother
# -prometheus_nodes_per_block 50  # an alternative for picking blocks
-prometheus_mis_levels 2          # recommended for many problems
-prometheus_random_mis
# in FEI: shared ownership rule = proc-with-local-elem
-out_verbose 2

Sierra parser commands:

SOLUTION METHOD = cg                            $ default
PRECONDITIONING METHOD = multigrid              $ (nodal-jacobi)
$ CONSTRAINT SOLUTION METHOD = uzawa            $ (default) others: dd-lu schur projection
MAXIMUM ITERATIONS = 50
$ MAX UZAWA ITERATIONS = 4                      $ default is 10
RESIDUAL NORM TOLERANCE = 1.e-3
MULTIGRID METHOD = smoothed-aggregation
$ MULTIGRID REDUCTION FACTOR = 200              $ will use METIS aggregation
$ SQUARE MAX IND SETS = 1
$ SMOOTHER PRECONDITIONING METHOD = symmetric-gauss-seidel
$ SMOOTHER PRECONDITIONING METHOD = additive-schwarz   $ 'nodal-jacobi' is default
$ NUM NODES PER PRECONDITIONER SUBDOMAIN = 100
USE AGGREGATION SUBDOMAINS = 1
RANDOMIZE MAX IND SETS = 1
shared ownership rule = proc-with-local-elem    $ required in FEI with constraints
DEBUG OUTPUT LEVEL = 2

Prometheus solver parameters can be provided on the command line or in a .petscrc file. Prometheus has three main functional modules: Prometheus (-prometheus_), smoothed aggregation (-aggmg_) and geometric multigrid (-geomg_). Prometheus also uses some of PETSc's parameters (eg, -ksp_type), Prometheus smoothers (eg, -mg_levels_pc_type gs) and output parameters (-out_). The default solver method is plain aggregation (-aggmg_); use "-aggmg_smooths 1" to get smoothed aggregation, or "-prometheus_geomg" to use geometric multigrid (-geomg_). We recommend that users start with plain aggregation as it is a simpler algorithm with decent performance and we anticipate that it will be easier to use.

Parameters take the form "command [arg]" and fall into these groups:

- PETSc: PETSc's command line arguments are used for generic solver parameters. See the sample makefiles and .petscrc file in ./Test for examples and the PETSc documentation for complete listings.
- Prometheus (-prometheus_): General unstructured multigrid functions and PETSc interface.
- Geometric MG (-geomg_): Geometric multigrid algorithm.
- Smoothed Aggregation MG (-aggmg_): Algebraic multigrid algorithm.
- Prometheus smoothers (-mg_levels_pc_type): Smoothers.
- Output (-out_): Output parameters.

Sierra Framework users will use commands that are essentially wrappers for these arguments; their values are shown in brackets [SIERRA].

-ksp_type (-mg_levels_ksp_type) [(SMOOTHER) SOLUTION METHOD = method]
Specify the top level iterative solver or accelerator (multigrid smoother). Prometheus supports a subset of PETSc's Krylov methods, but they can be added easily and will be upon request. Available methods are:

- cg: Conjugate gradients (symmetric positive definite systems). [cg]
- gmres: Generalized minimum residuals (general systems). [gmres]
- richardson: Richardson iterations (general systems). [richardson]
- bcgs: Bi-conjugate gradient stabilized (general systems). [bicgstab]
- cr: Conjugate residuals (symmetric indefinite systems). [cr]
- chebyshev: Chebyshev polynomial smoother. [chebychev]
- preonly: only apply the preconditioner. [preonly]
-pc_type (-mg_levels_pc_type) [(SMOOTHER) PRECONDITIONING METHOD = method]
Specify the top level preconditioner (multigrid smoother's preconditioner). Prometheus supports a subset of PETSc's preconditioners plus some extras. Available methods are:

- mg: Multigrid. [multigrid]
- asm: Additive Schwarz (use with -prometheus_blocks described below). [additive-schwarz]
- gs: Gauss-Seidel (nodally blocked). [gauss-seidel]
- jacobi: diagonal preconditioning. [jacobi]
- bjacobi: Block Jacobi. [block-jacobi]
- gs_asm: Multiplicative Schwarz (use with -prometheus_blocks). [multiplicative-schwarz]
- nodal_asm: nodally blocked diagonal preconditioner. [nodal-jacobi]
- lu: LU direct solver (not parallel, used for top solver and subdomain solvers by default). [lu]
- ilu: ILU(0). [ilu]
- none: Use the identity as the preconditioner. [none]

-pc_symmetric [symmetric-gauss-seidel or symmetric-multiplicative-schwarz]
Use symmetric Gauss-Seidel if "-pc_type gs" is used. This is necessary with symmetric Krylov methods like CG. Note, this does not result in two passes of Gauss-Seidel; use '-prometheus_pc_steps 2' to achieve that behavior.

-pc_mg_smoothdown (and -pc_mg_smoothup) [NUM SMOOTHING STEPS = x]
Specify the number of pre-smoothing steps at each level (a differing number of post-smoothing steps is not currently supported). Default is one (1).

-prometheus_pc_steps [PRECONDITIONER STEPS = x]
The number of applications of the preconditioner (ie, the number of Richardson iterations of the preconditioner). This is not applicable to multigrid smoothers (use "-pc_mg_smoothdown") or multigrid preconditioners. Default is one (1).

-ksp_max_it [MAXIMUM ITERATIONS = x]
Specify the maximum number of solver iterations.

-ksp_rtol (-ksp_atol and -ksp_divtol) [RESIDUAL NORM TOLERANCE = x]
Specify the relative residual tolerance for the convergence criterion.

-ksp_gmres_restart [RESTART ITERATIONS = x]
Specify the maximum number of GMRES restart vectors. Default is 50.

-pc_mg_type [MULTIGRID ALGORITHM = algo]
Specify the multigrid type (ie, multiplicative, full or additive). Default is "full".

-mg_levels_pc_type gs [gauss-seidel]
A parallel nodal block Gauss-Seidel smoother (multiplicative Schwarz method with nodal subdomains).

-mg_levels_pc_type gs_asm [multiplicative-schwarz]
A parallel multiplicative Schwarz method for larger blocks than "gs" above. Used in conjunction with -prometheus_blocks described below, this provides a multiplicative Schwarz smoother to complement the PETSc/Petra additive Schwarz method (ASM). See my home page for more information about the parallel Gauss-Seidel algorithm.

-mg_levels_pc_type nodal_asm [nodal-jacobi]
Parallel nodal block additive Schwarz method smoother.
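The distinction between additive and multiplicative smoothers can be illustrated on a scalar model problem. The sketch below is plain Python, not Prometheus code: it compares one additive (Jacobi-style) sweep with one multiplicative (Gauss-Seidel-style) sweep on a small 1D Poisson matrix, omitting the nodal blocking and parallel coloring of the real smoothers.

```python
import math

def residual_norm(b, x):
    """||b - A x|| for the 1D Poisson matrix A = tridiag(-1, 2, -1)."""
    n = len(x)
    r = []
    for i in range(n):
        ax = 2.0 * x[i]
        if i > 0:
            ax -= x[i - 1]
        if i < n - 1:
            ax -= x[i + 1]
        r.append(b[i] - ax)
    return math.sqrt(sum(v * v for v in r))

def jacobi_sweep(b, x):
    """Additive sweep: every update uses only the old values."""
    n = len(x)
    old = list(x)
    return [(b[i] + (old[i - 1] if i > 0 else 0.0)
                  + (old[i + 1] if i < n - 1 else 0.0)) / 2.0
            for i in range(n)]

def gauss_seidel_sweep(b, x):
    """Multiplicative sweep: each update uses the newest values."""
    n = len(x)
    x = list(x)
    for i in range(n):
        x[i] = (b[i] + (x[i - 1] if i > 0 else 0.0)
                     + (x[i + 1] if i < n - 1 else 0.0)) / 2.0
    return x

n = 8
b = [1.0] * n
x0 = [0.0] * n
r0 = residual_norm(b, x0)
r_jac = residual_norm(b, jacobi_sweep(b, x0))
r_gs = residual_norm(b, gauss_seidel_sweep(b, x0))
print(r0, r_jac, r_gs)
```

On this example one Gauss-Seidel sweep reduces the residual more than one Jacobi sweep, which is the usual motivation for multiplicative smoothers; the trade-off is the sequential dependence that the parallel coloring algorithm must work around.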

-prometheus_mis_levels 2 [SQUARE MAX IND SETS = 1]
Use the square of the matrix graph in the maximal independent set algorithm. This will lead to fewer vertices on the coarse grids and hence shorter execution times for each iteration, though for "hard" problems it can degrade the convergence rate.
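To see why squaring the graph coarsens more aggressively, consider a greedy maximal independent set on a small path graph. The sketch below is illustrative Python, not Prometheus's MIS implementation: in the graph of A^2 every vertex within distance 2 is a neighbor, so the independent set (and hence the coarse grid) is smaller.

```python
def greedy_mis(n, neighbors):
    """Greedy maximal independent set: take a vertex unless a neighbor is taken."""
    chosen = set()
    for v in range(n):
        if not any(u in chosen for u in neighbors(v)):
            chosen.add(v)
    return chosen

n = 10  # a path graph: 0 - 1 - 2 - ... - 9

def nbrs_a(v):   # neighbors in the graph of A (distance 1)
    return [u for u in (v - 1, v + 1) if 0 <= u < n]

def nbrs_a2(v):  # neighbors in the graph of A^2 (distance <= 2)
    return [u for u in (v - 2, v - 1, v + 1, v + 2) if 0 <= u < n]

mis1 = greedy_mis(n, nbrs_a)    # MIS on the graph of A
mis2 = greedy_mis(n, nbrs_a2)   # MIS on the graph of A^2
print(len(mis1), len(mis2))
```

Fewer MIS vertices means smaller coarse grids and cheaper iterations, at the possible cost of convergence rate on hard problems, as noted above.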

-prometheus_geomg [MULTIGRID METHOD = geometric]
Use the geometric method as the multigrid method.

-prometheus_nocond [NA]
Do not compute the (approximate) condition number of the preconditioned system. This is cheap to compute for CG and gives an estimate of the condition number of the preconditioned system.

-prometheus_blocks N [NUM PRECONDITIONER SUBDOMAINS = x]
Number of blocks in block Jacobi (additive Schwarz) and Gauss-Seidel (multiplicative Schwarz) smoothers. Prometheus will construct the blocks for the Schwarz smoother with METIS to give "well shaped" subdomains, as this is likely to provide an effective smoother and is a good automatic way to construct the subdomains for domain decomposition smoothers.

-prometheus_nodes_per_block x [NUM NODES PER PRECONDITIONER SUBDOMAIN = x]
Used to set the number of subdomains (see "-prometheus_blocks") by specifying the approximate number of nodes per subdomain.

-prometheus_schwarz_overlap [PRECONDITIONER SUBDOMAIN OVERLAP = 1]
Number of node layers of overlap to be added to the Schwarz subdomains (as specified by "-prometheus_blocks" and "-prometheus_nodes_per_block"). This can make for very powerful smoothers, but they are expensive. Allowable values are 0 (default), 1 and 2. One (1) is recommended over two (2) as most of the benefit is obtained with a minimal amount of overlap.

-prometheus_flat [NO COARSE GRID PROCESSOR REDUCTION = 1]
Do not use processor agglomeration (reducing the number of active processors on coarse grids).

-prometheus_no_repartition [NO COARSE GRID REPARTITIONING = 1]
Do not repartition coarse grids. Note, with this command line option ParMETIS is not used.

-prometheus_tol_2 tol2 [NA]
Advanced feature for nonlinear problems. If Prometheus::PreSolve is called with niter greater than zero then Prometheus will set the relative residual tolerance of the solver to the smaller of: the reduction in the residual from the previous time step times tol2, and the "-ksp_rtol" value given on the PETSc command line. Thus, for nonlinear problems one can use "-ksp_rtol 1.e-3" and "-prometheus_tol_2 1.e-1" to have the residual tolerance in the KSP solver at Newton step k (k>0) set to the minimum of 1.e-3 and 1.e-1 * r_{k+1}/r_k.

-prometheus_random_mis [RANDOMIZE MAX IND SETS = 1]
Randomize vertices in the maximal independent set algorithm. This will generally lead to fewer vertices on the coarse grids and hence shorter execution times for each iteration, though for "hard" problems it can degrade the convergence rate, especially for thin body problems. Note, Prometheus will post-process the MIS to reduce the number of coarse grid nodes and hence the complexity of the coarse grids and restriction operators.

-prometheus_scale_matrix [SCALE MATRIX = 1]
Scale the stiffness matrix with its diagonal (explicit diagonal preconditioning).

-prometheus_preduce_base C [NUM EQ PER PROCESSOR LIMIT = x]
Prometheus reduces the number of active processors when the size of the grid gets too small. Prometheus tries to keep x degrees of freedom per processor, x = min(A*p + C, D), by adjusting p, the number of active processors. See the next parameters for "A" and "D".

-prometheus_preduce_rate A [NA]
See "-prometheus_preduce_base" above.

-prometheus_preduce_maxn D [NA]
See "-prometheus_preduce_base" above.

-prometheus_mis_levels N [SQUARE MAX IND SETS = 1]
Default is one (1). For aggressive coarsening use two (2), currently the only other alternative. This is the number of levels to use for the maximal independent set (MIS) algorithm, which is used as the basis for the grid coarsening in both multigrid methods (ie, it runs the MIS algorithm on A^{N}). Aggressive coarsening can be advisable for tetrahedral meshes or easy problems like Poisson as it reduces the complexity of the coarse grids and hence of the solves as well.

-prometheus_use_iterative_top_solver [USE ITERATIVE COARSE GRID SOLVER = 1]
Use an iterative solver for the coarse grid. Used for singular matrices.

-prometheus_levels N [NUM LEVELS = x]
Used primarily with the Finite Element Interface (FEI) to Prometheus. Specifies the number of levels to use; that is, "-prometheus_levels 2" would construct one coarse grid and hence be a two level solver. Prometheus will stop constructing coarse grids when the top grid gets too small (see "-prometheus_top_grid_limit").
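The adaptive tolerance rule for "-prometheus_tol_2" can be written out as a one-line function. This is an illustrative Python sketch, not Prometheus code; the function name is hypothetical and only the min() rule comes from the description above.

```python
def adaptive_rtol(ksp_rtol, tol2, residual_ratio):
    """Relative tolerance for the KSP solve at a nonlinear (Newton) step.

    residual_ratio is the reduction in the residual achieved at the
    previous step; the solver tolerance is the smaller of the fixed
    -ksp_rtol and tol2 times that reduction.
    """
    return min(ksp_rtol, tol2 * residual_ratio)

# With -ksp_rtol 1.e-3 and -prometheus_tol_2 1.e-1:
print(adaptive_rtol(1e-3, 1e-1, 0.5))    # modest reduction: capped at 1.e-3
print(adaptive_rtol(1e-3, 1e-1, 0.005))  # strong reduction: tightened below 1.e-3
```

The effect is that early Newton steps, where the nonlinear residual is still dropping slowly, are not over-solved, while steps that achieve a large reduction get a correspondingly tighter linear tolerance.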

-prometheus_top_grid_limit [NUM COARSE GRID EQUATIONS LIMIT = x]
Number-of-equations limit for the construction of coarse grids (default = 1000). Grid coarsening will stop when the current grid is below this limit. Useful when an iterative coarse grid solver is used (-prometheus_use_iterative_top_solver).

-prometheus_use_deterministic [MAKE DETERMINATE = 1]
Try to use deterministic algorithms.

-aggmg_smooths i [MULTIGRID METHOD = smoothed-aggregation]

The number (i) of smoothing steps to perform on the prolongation operator in the algebraic "smoothed aggregation" algorithm. This is currently restricted to one step (ie, "-aggmg_smooths 1"). An argument of 0 will result in unsmoothed aggregation (the default), which is a simpler method and can be effective on some problems, though it does not scale as well as smoothed aggregation.

-aggmg_factor i [MULTIGRID REDUCTION FACTOR = x]
We recommend that the user not use this option (or use a value of i=1 for the factor); this results in the standard maximal independent set aggregation method. Alternatively, the amount of decrease in the number of vertices between each level can be set explicitly with an integer argument larger than 1. For instance, if the fine grid has 90,000 vertices and "-aggmg_factor 30" is given on the command line, then the first coarse grid will have 3,000 vertices, the second coarse grid will have 100 vertices, and so on. (Note, Prometheus will stop coarsening when the number of equations falls below a certain threshold, eg, 800.) If "i" is a negative number then Prometheus will decide how fast to coarsen (with a factor computed from the average number of edges in the graph). If i = 0 and -aggmg_global is not used then the geometric method will be used. This coarsening is implemented with a mesh partitioner (METIS) and is done in serial on each processor so that the aggregation domains will be nested in the processor domains (provided by the user for the fine grid and partitioned by Prometheus on the coarse grids).

-aggmg_use_aggragate_blocks [USE AGGREGATION SUBDOMAINS = 1]
Use the Prometheus aggregates for the blocks of the block Jacobi smoother preconditioner. Block Jacobi must be specified as the smoother preconditioner and the algebraic solver must be used (ie, -prometheus_geomg must not be specified, so that the geometric method is not used).
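The reduction-factor arithmetic in the example above can be spelled out explicitly. This is an illustrative Python sketch; the 800-equation stopping threshold is the example value quoted in the text, not a queried Prometheus setting.

```python
def coarse_grid_sizes(n_fine, factor, stop_below=800):
    """Successive coarse grid sizes for a fixed reduction factor.

    Coarsening stops once a grid falls below the stopping threshold.
    """
    sizes = []
    n = n_fine
    while n >= stop_below:
        n = n // factor
        sizes.append(n)
    return sizes

# 90,000 fine grid vertices with -aggmg_factor 30:
print(coarse_grid_sizes(90_000, 30))  # [3000, 100]
```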

-geomg_cos_tol x [NA]

Tolerance on the cosine of the angle that is used to define an "edge" in the geometric multigrid heuristics.

-geomg_use_geo_hueristics [NA]
The geometric multigrid heuristics are turned off by default as they do not work properly if the elements are not provided in the right orientation or if we do not support your element topology (we currently support first order quadrilateral shells, tetrahedra and hexahedra).

-out_verbose i [DEBUG OUTPUT LEVEL = 1]
If i=1 (default): print "normal" output; if i=0: only print error messages; if i>1: print verbose output (useful for debugging only).

-out_matrix [NA]
Write out matrices to files in Matlab format (ie, i j A_{ij}).

-out_files [NA]
Write FEAP input files for grids.