MFC
Exascale flow solver
m_mpi_common Module Reference

MPI communication layer: domain decomposition, halo exchange, reductions, and parallel I/O setup. More...

Functions/Subroutines

impure subroutine s_initialize_mpi_common_module
 Computes parameters, allocates memory, associates pointers, and performs any other procedures necessary to set up the module.
impure subroutine s_mpi_initialize
 Initializes the MPI execution environment and queries both the number of processors available for the job and the local processor rank.
impure subroutine s_initialize_mpi_data (q_cons_vf, ib_markers, beta)
subroutine s_initialize_mpi_data_ds (q_cons_vf)
impure subroutine s_mpi_gather_data (my_vector, counts, gathered_vector, root)
 Gathers variable-length real vectors from all MPI ranks onto the root process.
impure subroutine mpi_bcast_time_step_values (proc_time, time_avg)
 Gathers per-rank time step wall-clock times onto rank 0 for performance reporting.
impure subroutine s_prohibit_abort (condition, message)
 Prints a case file error with the prohibited condition and message, then aborts execution.
impure subroutine s_mpi_reduce_stability_criteria_extrema (icfl_max_loc, vcfl_max_loc, rc_min_loc, icfl_max_glb, vcfl_max_glb, rc_min_glb)
 Determines the global extrema of the stability criteria over the computational domain by reducing the local extrema reported by each process for its assigned subdomain. Note that the global extrema are recorded only on the rank 0 processor.
impure subroutine s_mpi_allreduce_sum (var_loc, var_glb)
 Reduces the local values from all processors to their sum and returns the result in the global variable on every processor.
impure subroutine s_mpi_allreduce_vectors_sum (var_loc, var_glb, num_vectors, vector_length)
 Behaves like s_mpi_allreduce_sum, but reduces an array of vectors rather than a single value.
impure subroutine s_mpi_allreduce_integer_sum (var_loc, var_glb)
 Reduces the local integer values from all processors to their sum and returns the result in the global variable on every processor.
impure subroutine s_mpi_allreduce_min (var_loc, var_glb)
 Reduces the local values from all processors to their minimum and returns the result in the global variable on every processor.
impure subroutine s_mpi_allreduce_max (var_loc, var_glb)
 Reduces the local values from all processors to their maximum and returns the result in the global variable on every processor.
impure subroutine s_mpi_reduce_min (var_loc)
 Determines the minimum of the input variable over the entire computational domain and stores the result back into that variable.
impure subroutine s_mpi_reduce_maxloc (var_loc)
 Determines the maximum, over the entire computational domain, of the first element of the 2-element input variable. The maximum is stored back into the first element, while the rank of the processor whose subdomain contains the maximum is stored into the second element.
impure subroutine s_mpi_abort (prnt, code)
 The subroutine terminates the MPI execution environment.
impure subroutine s_mpi_barrier
 Halts all processes until every process has reached the barrier.
impure subroutine s_mpi_finalize
 The subroutine finalizes the MPI execution environment.
subroutine s_mpi_sendrecv_variables_buffers (q_comm, mpi_dir, pbc_loc, nvar, pb_in, mv_in)
 Populates the buffers of the cell-average conservative variables by communicating with the neighboring processors.
subroutine s_mpi_decompose_computational_domain
 Decomposes the computational domain among the available processors by assigning each processor approximately the same number of cells in each coordinate direction and then recomputing the affected global parameters.
subroutine s_mpi_sendrecv_grid_variables_buffers (mpi_dir, pbc_loc)
 Populates the buffers of the grid variables by communicating with the neighboring processors. Only the buffers of the cell-width distributions are exchanged, since the buffers of the cell-boundary locations can be computed directly from them.
impure subroutine s_finalize_mpi_common_module
 Module deallocation and/or disassociation procedures.

Variables

integer, private v_size
real(wp), dimension(:), allocatable, private buff_send
 This variable is used to pack and send the buffer of the cell-average primitive variables, for a single computational domain boundary at a time, to the relevant neighboring processor.
real(wp), dimension(:), allocatable, private buff_recv
 buff_recv is used to receive and unpack the buffer of the cell-average primitive variables, for a single computational domain boundary at a time, from the relevant neighboring processor.
integer(kind=8) halo_size

Detailed Description

MPI communication layer: domain decomposition, halo exchange, reductions, and parallel I/O setup.

Function/Subroutine Documentation

◆ mpi_bcast_time_step_values()

impure subroutine m_mpi_common::mpi_bcast_time_step_values ( real(wp), dimension(0:num_procs - 1), intent(inout) proc_time,
real(wp), intent(inout) time_avg )

Gathers per-rank time step wall-clock times onto rank 0 for performance reporting.

Definition at line 691 of file m_mpi_common.fpp.f90.
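
As an illustration of the gather described above (not the MFC implementation), a sketch using the standard MPI Fortran interface might look as follows; the subroutine name, the wp stand-in kind, and the use of MPI_COMM_WORLD are assumptions.

    ! Illustrative sketch only, not the MFC source: collect each rank's
    ! average time-step wall-clock time onto rank 0 with MPI_GATHER.
    subroutine sketch_gather_time_step_values(time_avg, proc_time)
        use mpi
        implicit none
        integer, parameter :: wp = kind(1.d0)   ! stand-in for the module's working precision
        real(wp), intent(in) :: time_avg                      ! this rank's average time-step time
        real(wp), dimension(0:), intent(inout) :: proc_time   ! filled on rank 0 only
        integer :: ierr
        call MPI_GATHER(time_avg, 1, MPI_DOUBLE_PRECISION, &
                        proc_time, 1, MPI_DOUBLE_PRECISION, &
                        0, MPI_COMM_WORLD, ierr)
    end subroutine sketch_gather_time_step_values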


◆ s_finalize_mpi_common_module()

impure subroutine m_mpi_common::s_finalize_mpi_common_module

Module deallocation and/or disassociation procedures.

Definition at line 3030 of file m_mpi_common.fpp.f90.


◆ s_initialize_mpi_common_module()

impure subroutine m_mpi_common::s_initialize_mpi_common_module

Computes parameters, allocates memory, associates pointers, and performs any other procedures necessary to set up the module.

Definition at line 379 of file m_mpi_common.fpp.f90.


◆ s_initialize_mpi_data()

impure subroutine m_mpi_common::s_initialize_mpi_data ( type(scalar_field), dimension(sys_size), intent(in) q_cons_vf,
type(integer_field), intent(in), optional ib_markers,
type(scalar_field), intent(in), optional beta )

Definition at line 518 of file m_mpi_common.fpp.f90.


◆ s_initialize_mpi_data_ds()

subroutine m_mpi_common::s_initialize_mpi_data_ds ( type(scalar_field), dimension(sys_size), intent(in) q_cons_vf)

Definition at line 607 of file m_mpi_common.fpp.f90.

◆ s_mpi_abort()

impure subroutine m_mpi_common::s_mpi_abort ( character(len=*), intent(in), optional prnt,
integer, intent(in), optional code )

The subroutine terminates the MPI execution environment.

Parameters
prnt    Error message to be printed
code    Optional exit code

Definition at line 970 of file m_mpi_common.fpp.f90.
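
For illustration only, a wrapper of this kind typically prints the optional message and then calls MPI_ABORT; the sketch below is an assumption-based outline, not the routine's actual body.

    ! Illustrative sketch, not the MFC source: print an optional message and
    ! terminate all ranks with an (optional) exit code via MPI_ABORT.
    subroutine sketch_mpi_abort(prnt, code)
        use mpi
        implicit none
        character(len=*), intent(in), optional :: prnt
        integer, intent(in), optional :: code
        integer :: exit_code, ierr
        if (present(prnt)) print *, trim(prnt)
        exit_code = 1
        if (present(code)) exit_code = code
        call MPI_ABORT(MPI_COMM_WORLD, exit_code, ierr)
    end subroutine sketch_mpi_abort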


◆ s_mpi_allreduce_integer_sum()

impure subroutine m_mpi_common::s_mpi_allreduce_integer_sum ( integer, intent(in) var_loc,
integer, intent(out) var_glb )

Reduces the local integer values from all processors to their sum and returns the result in the global variable on every processor.

Parameters
var_loc    Some variable containing the local value which should be reduced amongst all the processors in the communicator.
var_glb    The globally reduced value

Definition at line 837 of file m_mpi_common.fpp.f90.

◆ s_mpi_allreduce_max()

impure subroutine m_mpi_common::s_mpi_allreduce_max ( real(wp), intent(in) var_loc,
real(wp), intent(out) var_glb )

Reduces the local values from all processors to their maximum and returns the result in the global variable on every processor.

Parameters
var_loc    Some variable containing the local value which should be reduced amongst all the processors in the communicator.
var_glb    The globally reduced value

Definition at line 884 of file m_mpi_common.fpp.f90.

◆ s_mpi_allreduce_min()

impure subroutine m_mpi_common::s_mpi_allreduce_min ( real(wp), intent(in) var_loc,
real(wp), intent(out) var_glb )

Reduces the local values from all processors to their minimum and returns the result in the global variable on every processor.

Parameters
var_loc    Some variable containing the local value which should be reduced amongst all the processors in the communicator.
var_glb    The globally reduced value

Definition at line 861 of file m_mpi_common.fpp.f90.

◆ s_mpi_allreduce_sum()

impure subroutine m_mpi_common::s_mpi_allreduce_sum ( real(wp), intent(in) var_loc,
real(wp), intent(out) var_glb )

Reduces the local values from all processors to their sum and returns the result in the global variable on every processor.

Parameters
var_loc    Some variable containing the local value which should be reduced amongst all the processors in the communicator.
var_glb    The globally reduced value

Definition at line 788 of file m_mpi_common.fpp.f90.
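
A minimal sketch of an allreduce-sum of this shape, assuming the standard MPI Fortran interface and a double-precision stand-in for wp (not the MFC source):

    ! Illustrative sketch, not the MFC implementation.
    subroutine sketch_allreduce_sum(var_loc, var_glb)
        use mpi
        implicit none
        integer, parameter :: wp = kind(1.d0)   ! stand-in for the module's working precision
        real(wp), intent(in)  :: var_loc        ! local contribution of this rank
        real(wp), intent(out) :: var_glb        ! sum over all ranks, returned on every rank
        integer :: ierr
        call MPI_ALLREDUCE(var_loc, var_glb, 1, MPI_DOUBLE_PRECISION, &
                           MPI_SUM, MPI_COMM_WORLD, ierr)
    end subroutine sketch_allreduce_sum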

◆ s_mpi_allreduce_vectors_sum()

impure subroutine m_mpi_common::s_mpi_allreduce_vectors_sum ( real(wp), dimension(:, :), intent(in) var_loc,
real(wp), dimension(:, :), intent(out) var_glb,
integer, intent(in) num_vectors,
integer, intent(in) vector_length )

Behaves like s_mpi_allreduce_sum, but reduces an array of vectors rather than a single value.

Definition at line 806 of file m_mpi_common.fpp.f90.

◆ s_mpi_barrier()

impure subroutine m_mpi_common::s_mpi_barrier

Halts all processes until every process has reached the barrier.

Definition at line 1003 of file m_mpi_common.fpp.f90.


◆ s_mpi_decompose_computational_domain()

subroutine m_mpi_common::s_mpi_decompose_computational_domain

Decomposes the computational domain among the available processors by assigning each processor approximately the same number of cells in each coordinate direction and then recomputing the affected global parameters.

Definition at line 2312 of file m_mpi_common.fpp.f90.
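
A minimal sketch of the even-split idea described above, with hypothetical names (m, nprocs, coord) rather than MFC's actual variables:

    ! Illustrative sketch with hypothetical names, not MFC's decomposition code:
    ! how many cells a rank receives when (m + 1) cells in one coordinate
    ! direction are split as evenly as possible over nprocs ranks.
    pure function sketch_local_cell_count(m, nprocs, coord) result(n_local)
        implicit none
        integer, intent(in) :: m       ! highest global cell index (m + 1 cells total)
        integer, intent(in) :: nprocs  ! number of ranks along this direction
        integer, intent(in) :: coord   ! this rank's coordinate along the direction (0-based)
        integer :: n_local
        n_local = (m + 1)/nprocs
        ! Spread the remainder over the lowest-numbered ranks.
        if (coord < mod(m + 1, nprocs)) n_local = n_local + 1
    end function sketch_local_cell_count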


◆ s_mpi_finalize()

impure subroutine m_mpi_common::s_mpi_finalize

The subroutine finalizes the MPI execution environment.

Definition at line 1016 of file m_mpi_common.fpp.f90.


◆ s_mpi_gather_data()

impure subroutine m_mpi_common::s_mpi_gather_data ( real(wp), dimension(counts), intent(in) my_vector,
integer, intent(in) counts,
real(wp), dimension(:), intent(out), allocatable gathered_vector,
integer, intent(in) root )

Gathers variable-length real vectors from all MPI ranks onto the root process.

Definition at line 658 of file m_mpi_common.fpp.f90.
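
A hedged sketch of one common way to implement such a variable-length gather, using MPI_GATHER for the counts followed by MPI_GATHERV for the data; names, kinds, and the communicator are assumptions, not the MFC source:

    ! Illustrative sketch, not the MFC implementation.
    subroutine sketch_gather_data(my_vector, counts, gathered_vector, root)
        use mpi
        implicit none
        integer, parameter :: wp = kind(1.d0)   ! stand-in for the module's working precision
        integer, intent(in) :: counts, root
        real(wp), dimension(counts), intent(in) :: my_vector
        real(wp), dimension(:), allocatable, intent(out) :: gathered_vector
        integer, allocatable :: recv_counts(:), displs(:)
        integer :: num_procs, i, ierr

        call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
        allocate (recv_counts(num_procs), displs(num_procs))
        recv_counts = 0

        ! The root learns how many entries each rank contributes.
        call MPI_GATHER(counts, 1, MPI_INTEGER, recv_counts, 1, MPI_INTEGER, &
                        root, MPI_COMM_WORLD, ierr)

        displs(1) = 0
        do i = 2, num_procs
            displs(i) = displs(i - 1) + recv_counts(i - 1)
        end do
        allocate (gathered_vector(sum(recv_counts)))

        ! Variable-length gather of the data; recv_counts/displs only matter on root.
        call MPI_GATHERV(my_vector, counts, MPI_DOUBLE_PRECISION, &
                         gathered_vector, recv_counts, displs, MPI_DOUBLE_PRECISION, &
                         root, MPI_COMM_WORLD, ierr)
    end subroutine sketch_gather_data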

◆ s_mpi_initialize()

impure subroutine m_mpi_common::s_mpi_initialize

Initializes the MPI execution environment and queries both the number of processors available for the job and the local processor rank.

Definition at line 487 of file m_mpi_common.fpp.f90.
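
A minimal sketch of the initialization sequence described above, assuming the standard MPI Fortran interface (not the MFC source):

    ! Illustrative sketch, not the MFC implementation.
    subroutine sketch_mpi_initialize(num_procs, proc_rank)
        use mpi
        implicit none
        integer, intent(out) :: num_procs, proc_rank
        integer :: ierr
        call MPI_INIT(ierr)
        call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, proc_rank, ierr)
    end subroutine sketch_mpi_initialize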


◆ s_mpi_reduce_maxloc()

impure subroutine m_mpi_common::s_mpi_reduce_maxloc ( real(wp), dimension(2), intent(inout) var_loc)

Determines the maximum, over the entire computational domain, of the first element of the 2-element input variable. The maximum is stored back into the first element, while the rank of the processor whose subdomain contains the maximum is stored into the second element.

Parameters
var_loc    On input, this variable holds the local value and processor rank, which are to be reduced among all the processors in the communicator. On output, this variable holds the maximum value, reduced amongst all of the local values, and the rank of the process to which that value belongs.

Definition at line 942 of file m_mpi_common.fpp.f90.
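
A hedged sketch of how such a value/rank maximum reduction is commonly done with MPI_MAXLOC; names and kinds are assumptions, not the MFC source:

    ! Illustrative sketch, not the MFC implementation: a value/rank maximum
    ! reduction on a 2-element (value, rank) pair, as described above.
    subroutine sketch_reduce_maxloc(var_loc)
        use mpi
        implicit none
        integer, parameter :: wp = kind(1.d0)   ! stand-in for the module's working precision
        real(wp), dimension(2), intent(inout) :: var_loc  ! (1) value, (2) owning rank
        real(wp), dimension(2) :: var_glb
        integer :: ierr
        call MPI_ALLREDUCE(var_loc, var_glb, 1, MPI_2DOUBLE_PRECISION, &
                           MPI_MAXLOC, MPI_COMM_WORLD, ierr)
        var_loc = var_glb
    end subroutine sketch_reduce_maxloc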

◆ s_mpi_reduce_min()

impure subroutine m_mpi_common::s_mpi_reduce_min ( real(wp), intent(inout) var_loc)

Determines the minimum of the input variable over the entire computational domain and stores the result back into that variable.

Parameters
var_loc    Holds the local value to be reduced among all the processors in the communicator. On output, the variable holds the minimum value, reduced amongst all of the local values.

Definition at line 906 of file m_mpi_common.fpp.f90.


◆ s_mpi_reduce_stability_criteria_extrema()

impure subroutine m_mpi_common::s_mpi_reduce_stability_criteria_extrema ( real(wp), intent(in) icfl_max_loc,
real(wp), intent(in) vcfl_max_loc,
real(wp), intent(in) rc_min_loc,
real(wp), intent(out) icfl_max_glb,
real(wp), intent(out) vcfl_max_glb,
real(wp), intent(out) rc_min_glb )

Determines the global extrema of the stability criteria over the computational domain by reducing the local extrema reported by each process for its assigned subdomain. Note that the global extrema are recorded only on the rank 0 processor.

Parameters
icfl_max_loc    Local maximum ICFL stability criterion
vcfl_max_loc    Local maximum VCFL stability criterion
rc_min_loc    Local minimum Rc stability criterion
icfl_max_glb    Global maximum ICFL stability criterion
vcfl_max_glb    Global maximum VCFL stability criterion
rc_min_glb    Global minimum Rc stability criterion

Definition at line 733 of file m_mpi_common.fpp.f90.
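
A minimal sketch of reducing the three local extrema onto rank 0, assuming the standard MPI Fortran interface (not the MFC source):

    ! Illustrative sketch, not the MFC implementation: reduce the local
    ! stability extrema onto rank 0 only, as the description above states.
    subroutine sketch_reduce_stability_extrema(icfl_max_loc, vcfl_max_loc, rc_min_loc, &
                                               icfl_max_glb, vcfl_max_glb, rc_min_glb)
        use mpi
        implicit none
        integer, parameter :: wp = kind(1.d0)   ! stand-in for the module's working precision
        real(wp), intent(in)  :: icfl_max_loc, vcfl_max_loc, rc_min_loc
        real(wp), intent(out) :: icfl_max_glb, vcfl_max_glb, rc_min_glb
        integer :: ierr
        call MPI_REDUCE(icfl_max_loc, icfl_max_glb, 1, MPI_DOUBLE_PRECISION, &
                        MPI_MAX, 0, MPI_COMM_WORLD, ierr)
        call MPI_REDUCE(vcfl_max_loc, vcfl_max_glb, 1, MPI_DOUBLE_PRECISION, &
                        MPI_MAX, 0, MPI_COMM_WORLD, ierr)
        call MPI_REDUCE(rc_min_loc, rc_min_glb, 1, MPI_DOUBLE_PRECISION, &
                        MPI_MIN, 0, MPI_COMM_WORLD, ierr)
    end subroutine sketch_reduce_stability_extrema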

◆ s_mpi_sendrecv_grid_variables_buffers()

subroutine m_mpi_common::s_mpi_sendrecv_grid_variables_buffers ( integer, intent(in) mpi_dir,
integer, intent(in) pbc_loc )

Populates the buffers of the grid variables by communicating with the neighboring processors. Only the buffers of the cell-width distributions are exchanged, since the buffers of the cell-boundary locations can be computed directly from them.

Parameters
mpi_dir    MPI communication coordinate direction
pbc_loc    Processor boundary condition (PBC) location

Definition at line 2853 of file m_mpi_common.fpp.f90.

◆ s_mpi_sendrecv_variables_buffers()

subroutine m_mpi_common::s_mpi_sendrecv_variables_buffers ( type(scalar_field), dimension(1:), intent(inout) q_comm,
integer, intent(in) mpi_dir,
integer, intent(in) pbc_loc,
integer, intent(in) nvar,
real(stp), dimension(idwbuff(1)%beg:, idwbuff(2)%beg:, idwbuff(3)%beg:, 1:, 1:), intent(inout), optional pb_in,
real(stp), dimension(idwbuff(1)%beg:, idwbuff(2)%beg:, idwbuff(3)%beg:, 1:, 1:), intent(inout), optional mv_in )

Populates the buffers of the cell-average conservative variables by communicating with the neighboring processors.

Parameters
q_comm    Cell-average conservative variables
mpi_dir    MPI communication coordinate direction
pbc_loc    Processor boundary condition (PBC) location
nvar    Number of variables to communicate
pb_in    Optional internal bubble pressure
mv_in    Optional bubble mass velocity

Definition at line 1037 of file m_mpi_common.fpp.f90.
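
A hedged sketch of the pack/exchange/unpack idea behind such halo communication, shown here as a single MPI_SENDRECV of already-packed buffers; names, kinds, and the communicator are assumptions, not the MFC source:

    ! Illustrative sketch, not the MFC implementation: exchange packed boundary
    ! data with a neighboring rank to fill the halo buffer.
    subroutine sketch_halo_exchange(buff_send, buff_recv, dest, source)
        use mpi
        implicit none
        integer, parameter :: wp = kind(1.d0)   ! stand-in for the module's working precision
        real(wp), dimension(:), intent(in)    :: buff_send  ! packed boundary cells to send
        real(wp), dimension(:), intent(inout) :: buff_recv  ! halo cells received from neighbor
        integer, intent(in) :: dest, source                 ! neighboring ranks
        integer :: ierr
        call MPI_SENDRECV(buff_send, size(buff_send), MPI_DOUBLE_PRECISION, dest, 0, &
                          buff_recv, size(buff_recv), MPI_DOUBLE_PRECISION, source, 0, &
                          MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
    end subroutine sketch_halo_exchange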


◆ s_prohibit_abort()

impure subroutine m_mpi_common::s_prohibit_abort ( character(len=*), intent(in) condition,
character(len=*), intent(in) message )

Prints a case file error with the prohibited condition and message, then aborts execution.

Definition at line 706 of file m_mpi_common.fpp.f90.


Variable Documentation

◆ buff_recv

real(wp), dimension(:), allocatable, private m_mpi_common::buff_recv
private

buff_recv is used to receive and unpack the buffer of the cell-average primitive variables, for a single computational domain boundary at a time, from the relevant neighboring processor.

Definition at line 342 of file m_mpi_common.fpp.f90.

◆ buff_send

real(wp), dimension(:), allocatable, private m_mpi_common::buff_send
private

This variable is used to pack and send the buffer of the cell-average primitive variables, for a single computational domain boundary at a time, to the relevant neighboring processor.

Definition at line 337 of file m_mpi_common.fpp.f90.

◆ halo_size

integer(kind=8) m_mpi_common::halo_size

Definition at line 361 of file m_mpi_common.fpp.f90.

◆ v_size

integer, private m_mpi_common::v_size
private

Definition at line 323 of file m_mpi_common.fpp.f90.