|
| impure subroutine | m_mpi_common::s_initialize_mpi_common_module |
| | The computation of parameters, the allocation of memory, the association of pointers, and/or the execution of any other procedures that are necessary to set up the module.
|
| |
| impure subroutine | m_mpi_common::s_mpi_initialize |
| | The subroutine initializes the MPI execution environment and queries both the number of processors available for the job and the local processor rank.
|
| |
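For reference, a minimal, self-contained sketch of the MPI calls such an initialization wraps is shown below; the program and variable names (`num_procs`, `proc_rank`) are illustrative and not taken from the module's interface.

```fortran
program mpi_init_sketch
    use mpi
    implicit none
    integer :: ierr, num_procs, proc_rank  ! illustrative names

    call MPI_INIT(ierr)                                  ! start the MPI execution environment
    call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)  ! processors available for the job
    call MPI_COMM_RANK(MPI_COMM_WORLD, proc_rank, ierr)  ! local processor rank

    print '(a,i0,a,i0)', 'rank ', proc_rank, ' of ', num_procs

    call MPI_FINALIZE(ierr)
end program mpi_init_sketch
```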
| impure subroutine | m_mpi_common::s_initialize_mpi_data (q_cons_vf, ib_markers, levelset, levelset_norm, beta) |
| |
| subroutine | m_mpi_common::s_initialize_mpi_data_ds (q_cons_vf) |
| |
| impure subroutine | m_mpi_common::s_mpi_gather_data (my_vector, counts, gathered_vector, root) |
| |
| impure subroutine | m_mpi_common::mpi_bcast_time_step_values (proc_time, time_avg) |
| |
| impure subroutine | m_mpi_common::s_prohibit_abort (condition, message) |
| |
| impure subroutine | m_mpi_common::s_mpi_reduce_stability_criteria_extrema (icfl_max_loc, vcfl_max_loc, rc_min_loc, icfl_max_glb, vcfl_max_glb, rc_min_glb) |
| | The goal of this subroutine is to determine the global extrema of the stability criteria in the computational domain. This is performed by sifting through the local extrema of each stability criterion. Note that each of the local extrema comes from a single process, within its assigned section of the computational domain. Finally, note that the global extrema values are only kept on the rank 0 processor.
|
| |
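A sketch of how such a reduction onto rank 0 can be expressed is given below. The variable names mirror the argument list above, but the datatype (MPI_DOUBLE_PRECISION) and the choice of MPI_MAX/MPI_MIN per criterion are assumptions for illustration, not the module's actual implementation.

```fortran
subroutine reduce_extrema_sketch(icfl_max_loc, vcfl_max_loc, rc_min_loc, &
                                 icfl_max_glb, vcfl_max_glb, rc_min_glb)
    use mpi
    implicit none
    real(kind(0d0)), intent(in)  :: icfl_max_loc, vcfl_max_loc, rc_min_loc
    real(kind(0d0)), intent(out) :: icfl_max_glb, vcfl_max_glb, rc_min_glb
    integer :: ierr

    ! Combine each rank's local extrema; the *_glb results are meaningful
    ! on rank 0 alone, matching the bookkeeping described above.
    call MPI_REDUCE(icfl_max_loc, icfl_max_glb, 1, MPI_DOUBLE_PRECISION, &
                    MPI_MAX, 0, MPI_COMM_WORLD, ierr)
    call MPI_REDUCE(vcfl_max_loc, vcfl_max_glb, 1, MPI_DOUBLE_PRECISION, &
                    MPI_MAX, 0, MPI_COMM_WORLD, ierr)
    call MPI_REDUCE(rc_min_loc, rc_min_glb, 1, MPI_DOUBLE_PRECISION, &
                    MPI_MIN, 0, MPI_COMM_WORLD, ierr)
end subroutine reduce_extrema_sketch
```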
| impure subroutine | m_mpi_common::s_mpi_allreduce_sum (var_loc, var_glb) |
| | The following subroutine takes the input local variable from all processors and reduces it to the sum of all values. The reduced variable is recorded back onto the original local variable on each processor.
|
| |
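The whole s_mpi_allreduce_* family boils down to a single MPI_ALLREDUCE call; a sketch for the sum case follows, with MPI_DOUBLE_PRECISION assumed for the datatype. The integer, minimum, and maximum variants listed below follow the same pattern with MPI_INTEGER, MPI_MIN, and MPI_MAX, respectively.

```fortran
subroutine allreduce_sum_sketch(var_loc, var_glb)
    use mpi
    implicit none
    real(kind(0d0)), intent(in)  :: var_loc
    real(kind(0d0)), intent(out) :: var_glb
    integer :: ierr

    ! Every rank contributes var_loc and every rank receives the global sum.
    call MPI_ALLREDUCE(var_loc, var_glb, 1, MPI_DOUBLE_PRECISION, &
                       MPI_SUM, MPI_COMM_WORLD, ierr)
end subroutine allreduce_sum_sketch
```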
| impure subroutine | m_mpi_common::s_mpi_allreduce_integer_sum (var_loc, var_glb) |
| | The following subroutine takes the input local variable from all processors and reduces it to the sum of all values. The reduced variable is recorded back onto the original local variable on each processor.
|
| |
| impure subroutine | m_mpi_common::s_mpi_allreduce_min (var_loc, var_glb) |
| | The following subroutine takes the input local variable from all processors and reduces it to the minimum of all values. The reduced variable is recorded back onto the original local variable on each processor.
|
| |
| impure subroutine | m_mpi_common::s_mpi_allreduce_max (var_loc, var_glb) |
| | The following subroutine takes the input local variable from all processors and reduces it to the maximum of all values. The reduced variable is recorded back onto the original local variable on each processor.
|
| |
| impure subroutine | m_mpi_common::s_mpi_reduce_min (var_loc) |
| | The following subroutine takes the input variable and determines its minimum value over the entire computational domain. The result is stored back into the input variable.
|
| |
| impure subroutine | m_mpi_common::s_mpi_reduce_maxloc (var_loc) |
| | The following subroutine takes the first element of the 2-element input variable and determines its maximum value over the entire computational domain. The result is stored back into the first element of the variable, while the rank of the processor in charge of the subdomain containing the maximum is stored into the second element of the variable.
|
| |
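A sketch of the usual MPI_MAXLOC idiom behind such a routine is shown below; the pairing of the value with the caller's rank via MPI_2DOUBLE_PRECISION is an assumption included only to illustrate the semantics described above.

```fortran
subroutine reduce_maxloc_sketch(var_loc)
    use mpi
    implicit none
    real(kind(0d0)), intent(inout) :: var_loc(2)
    real(kind(0d0)) :: var_glb(2)
    integer :: ierr, proc_rank

    call MPI_COMM_RANK(MPI_COMM_WORLD, proc_rank, ierr)
    var_loc(2) = real(proc_rank, kind(0d0))  ! tag the value with the owning rank

    ! After the reduction, var_glb(1) is the global maximum and var_glb(2)
    ! identifies the rank whose subdomain contains it.
    call MPI_ALLREDUCE(var_loc, var_glb, 1, MPI_2DOUBLE_PRECISION, &
                       MPI_MAXLOC, MPI_COMM_WORLD, ierr)
    var_loc = var_glb
end subroutine reduce_maxloc_sketch
```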
| impure subroutine | m_mpi_common::s_mpi_abort (prnt, code) |
| | The subroutine terminates the MPI execution environment.
|
| |
| impure subroutine | m_mpi_common::s_mpi_barrier |
| | Halts all processes until all have reached the barrier.
|
| |
| impure subroutine | m_mpi_common::s_mpi_finalize |
| | The subroutine finalizes the MPI execution environment.
|
| |
| subroutine | m_mpi_common::s_mpi_sendrecv_variables_buffers (q_comm, mpi_dir, pbc_loc, nvar, pb_in, mv_in) |
| | The goal of this procedure is to populate the buffers of the cell-average conservative variables by communicating with the neighboring processors.
|
| |
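As an illustration of the underlying pattern, a one-directional halo exchange with MPI_SENDRECV is sketched below. The buffer arrays and neighbour ranks (`q_send`, `q_recv`, `left_rank`, `right_rank`, `buff_count`) are hypothetical names, not the routine's actual internals.

```fortran
subroutine halo_exchange_sketch(q_send, q_recv, buff_count, left_rank, right_rank)
    use mpi
    implicit none
    integer, intent(in) :: buff_count, left_rank, right_rank
    real(kind(0d0)), intent(in)  :: q_send(buff_count)
    real(kind(0d0)), intent(out) :: q_recv(buff_count)
    integer :: ierr

    ! Ship this rank's boundary-adjacent data to the right neighbour while
    ! receiving the left neighbour's data into the local buffer region.
    call MPI_SENDRECV(q_send, buff_count, MPI_DOUBLE_PRECISION, right_rank, 0, &
                      q_recv, buff_count, MPI_DOUBLE_PRECISION, left_rank, 0, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
end subroutine halo_exchange_sketch
```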
| subroutine | m_mpi_common::s_mpi_decompose_computational_domain |
| | The purpose of this procedure is to optimally decompose the computational domain among the available processors. This is performed by attempting to award each processor, in each of the coordinate directions, approximately the same number of cells, and then recomputing the affected global parameters.
|
| |
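The even-split idea described above amounts to integer division with the remainder spread over the lowest-indexed ranks; a sketch in one coordinate direction follows (all names illustrative).

```fortran
function cells_on_rank(n_cells, num_procs_x, rank_x) result(n_loc)
    implicit none
    integer, intent(in) :: n_cells, num_procs_x, rank_x
    integer :: n_loc

    ! Each rank gets the floor share; the first mod(n_cells, num_procs_x)
    ! ranks absorb the remainder so the split stays as even as possible.
    n_loc = n_cells/num_procs_x
    if (rank_x < mod(n_cells, num_procs_x)) n_loc = n_loc + 1
end function cells_on_rank
```

For example, 100 cells over 8 ranks would yield 13 cells on ranks 0 through 3 and 12 cells on ranks 4 through 7.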
| subroutine | m_mpi_common::s_mpi_sendrecv_grid_variables_buffers (mpi_dir, pbc_loc) |
| | The goal of this procedure is to populate the buffers of the grid variables by communicating with the neighboring processors. Note that only the buffers of the cell-width distributions are handled in such a way. This is because the buffers of cell-boundary locations may be calculated directly from those of the cell-width distributions.
|
| |
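Since the note above states that the cell-boundary locations follow from the cell widths, a sketch of that reconstruction for the lower buffer region in one direction is shown below; the array names and index convention (x_cb(i) as the upper boundary of cell i, dx(i) as its width) are assumptions for illustration only.

```fortran
subroutine lower_x_boundaries_sketch(buff_size, dx, x_cb)
    implicit none
    integer, intent(in) :: buff_size
    real(kind(0d0)), intent(in)    :: dx(-buff_size:0)
    real(kind(0d0)), intent(inout) :: x_cb(-buff_size - 1:0)
    integer :: i

    ! Accumulate outward from the known domain edge x_cb(-1): each buffer
    ! cell's lower boundary is its upper boundary minus its width.
    do i = -1, -buff_size, -1
        x_cb(i - 1) = x_cb(i) - dx(i)
    end do
end subroutine lower_x_boundaries_sketch
```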
| impure subroutine | m_mpi_common::s_finalize_mpi_common_module |
| | Module deallocation and/or disassociation procedures.
|
| |