MFC: Pre-Process
High-fidelity multiphase flow simulation
m_mpi_common Module Reference

The module serves as a proxy to the parameters and subroutines available in the MPI implementation's MPI module. Specifically, the purpose of the proxy is to combine basic MPI commands into higher-level procedures that accomplish the communication goals of the simulation. More...

Functions/Subroutines

impure subroutine s_initialize_mpi_common_module
 The computation of parameters, the allocation of memory, the association of pointers and/or the execution of any other procedures that are necessary to set up the module.
 
impure subroutine s_mpi_initialize
 The subroutine initializes the MPI execution environment and queries both the number of processors which will be available for the job and the local processor rank.
 
impure subroutine s_initialize_mpi_data (q_cons_vf, ib_markers, levelset, levelset_norm, beta)
 
subroutine s_initialize_mpi_data_ds (q_cons_vf)
 
impure subroutine s_mpi_gather_data (my_vector, counts, gathered_vector, root)
 
impure subroutine mpi_bcast_time_step_values (proc_time, time_avg)
 
impure subroutine s_prohibit_abort (condition, message)
 
impure subroutine s_mpi_reduce_stability_criteria_extrema (icfl_max_loc, vcfl_max_loc, rc_min_loc, icfl_max_glb, vcfl_max_glb, rc_min_glb)
 The goal of this subroutine is to determine the global extrema of the stability criteria in the computational domain. This is performed by sifting through the local extrema of each stability criterion. Note that each of the local extrema is from a single process, within its assigned section of the computational domain. Finally, note that the global extrema values are only kept on the rank 0 processor.
 
impure subroutine s_mpi_allreduce_sum (var_loc, var_glb)
 The following subroutine reduces the local value supplied by each processor to the sum of all values. The reduced value is returned to every processor in the output variable.
 
impure subroutine s_mpi_allreduce_integer_sum (var_loc, var_glb)
 The following subroutine reduces the local value supplied by each processor to the sum of all values. The reduced value is returned to every processor in the output variable.
 
impure subroutine s_mpi_allreduce_min (var_loc, var_glb)
 The following subroutine reduces the local value supplied by each processor to the minimum of all values. The reduced value is returned to every processor in the output variable.
 
impure subroutine s_mpi_allreduce_max (var_loc, var_glb)
 The following subroutine reduces the local value supplied by each processor to the maximum of all values. The reduced value is returned to every processor in the output variable.
 
impure subroutine s_mpi_reduce_min (var_loc)
 The following subroutine takes the input variable and determines its minimum value over the entire computational domain. The result is stored back into the input variable.
 
impure subroutine s_mpi_reduce_maxloc (var_loc)
 The following subroutine takes the first element of the two-element input variable and determines its maximum value over the entire computational domain. The result is stored back into the first element of the variable, while the rank of the processor whose subdomain contains the maximum is stored into the second element.
 
impure subroutine s_mpi_abort (prnt, code)
 The subroutine terminates the MPI execution environment.
 
impure subroutine s_mpi_barrier
 Halts all processes until all have reached the barrier.
 
impure subroutine s_mpi_finalize
 The subroutine finalizes the MPI execution environment.
 
subroutine s_mpi_sendrecv_variables_buffers (q_comm, mpi_dir, pbc_loc, nvar, pb_in, mv_in)
 The goal of this procedure is to populate the buffers of the cell-average conservative variables by communicating with the neighboring processors.
 
subroutine s_mpi_decompose_computational_domain
 The purpose of this procedure is to optimally decompose the computational domain among the available processors. This is performed by assigning each processor approximately the same number of cells in each coordinate direction and then recomputing the affected global parameters.
 
subroutine s_mpi_sendrecv_grid_variables_buffers (mpi_dir, pbc_loc)
 The goal of this procedure is to populate the buffers of the grid variables by communicating with the neighboring processors. Note that only the buffers of the cell-width distributions are handled in such a way. This is because the buffers of cell-boundary locations may be calculated directly from those of the cell-width distributions.
 
impure subroutine s_finalize_mpi_common_module
 Module deallocation and/or disassociation procedures.
 

Variables

integer, private v_size
 
real(wp), dimension(:), allocatable, private buff_send
 This variable is utilized to pack and send the buffer of the cell-average primitive variables, for a single computational domain boundary at a time, to the relevant neighboring processor.
 
real(wp), dimension(:), allocatable, private buff_recv
 buff_recv is utilized to receive and unpack the buffer of the cell-average primitive variables, for a single computational domain boundary at a time, from the relevant neighboring processor.
 
integer(kind=8) halo_size
 

Detailed Description

The module serves as a proxy to the parameters and subroutines available in the MPI implementation's MPI module. Specifically, the purpose of the proxy is to combine basic MPI commands into higher-level procedures that accomplish the communication goals of the simulation.

Function/Subroutine Documentation

◆ mpi_bcast_time_step_values()

impure subroutine m_mpi_common::mpi_bcast_time_step_values ( real(wp), dimension(0:num_procs - 1), intent(inout) proc_time,
real(wp), intent(inout) time_avg )

◆ s_finalize_mpi_common_module()

impure subroutine m_mpi_common::s_finalize_mpi_common_module

Module deallocation and/or disassociation procedures.


◆ s_initialize_mpi_common_module()

impure subroutine m_mpi_common::s_initialize_mpi_common_module

The computation of parameters, the allocation of memory, the association of pointers and/or the execution of any other procedures that are necessary to set up the module.


◆ s_initialize_mpi_data()

impure subroutine m_mpi_common::s_initialize_mpi_data ( type(scalar_field), dimension(sys_size), intent(in) q_cons_vf,
type(integer_field), intent(in), optional ib_markers,
type(levelset_field), intent(in), optional levelset,
type(levelset_norm_field), intent(in), optional levelset_norm,
type(scalar_field), intent(in), optional beta )

◆ s_initialize_mpi_data_ds()

subroutine m_mpi_common::s_initialize_mpi_data_ds ( type(scalar_field), dimension(sys_size), intent(in) q_cons_vf)

◆ s_mpi_abort()

impure subroutine m_mpi_common::s_mpi_abort ( character(len=*), intent(in), optional prnt,
integer, intent(in), optional code )

The subroutine terminates the MPI execution environment.

Parameters
    prnt  Error message to be printed
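
For reference, a minimal sketch of an abort wrapper of this kind is given below; the default error code of 1 and the use of MPI_COMM_WORLD are assumptions, and the sketch is not the MFC implementation.

    ! Hypothetical sketch of an abort wrapper (not the MFC implementation)
    subroutine s_abort_sketch(prnt, code)

        use mpi

        character(len=*), intent(in), optional :: prnt  ! error message to be printed
        integer, intent(in), optional :: code           ! error code handed to MPI_ABORT

        integer :: errcode, ierr

        ! Print the error message, if one was supplied
        if (present(prnt)) print *, trim(prnt)

        ! Assume a nonzero default error code when none is given
        errcode = 1
        if (present(code)) errcode = code

        ! Terminate every process in the communicator
        call MPI_ABORT(MPI_COMM_WORLD, errcode, ierr)

    end subroutine s_abort_sketch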

◆ s_mpi_allreduce_integer_sum()

impure subroutine m_mpi_common::s_mpi_allreduce_integer_sum ( integer, intent(in) var_loc,
integer, intent(out) var_glb )

The following subroutine reduces the local value supplied by each processor to the sum of all values. The reduced value is returned to every processor in the output variable.

Parameters
    var_loc  Some variable containing the local value which should be reduced amongst all the processors in the communicator.
    var_glb  The globally reduced value

◆ s_mpi_allreduce_max()

impure subroutine m_mpi_common::s_mpi_allreduce_max ( real(wp), intent(in) var_loc,
real(wp), intent(out) var_glb )

The following subroutine reduces the local value supplied by each processor to the maximum of all values. The reduced value is returned to every processor in the output variable.

Parameters
    var_loc  Some variable containing the local value which should be reduced amongst all the processors in the communicator.
    var_glb  The globally reduced value

◆ s_mpi_allreduce_min()

impure subroutine m_mpi_common::s_mpi_allreduce_min ( real(wp), intent(in) var_loc,
real(wp), intent(out) var_glb )

The following subroutine reduces the local value supplied by each processor to the minimum of all values. The reduced value is returned to every processor in the output variable.

Parameters
    var_loc  Some variable containing the local value which should be reduced amongst all the processors in the communicator.
    var_glb  The globally reduced value

◆ s_mpi_allreduce_sum()

impure subroutine m_mpi_common::s_mpi_allreduce_sum ( real(wp), intent(in) var_loc,
real(wp), intent(out) var_glb )

The following subroutine reduces the local value supplied by each processor to the sum of all values. The reduced value is returned to every processor in the output variable.

Parameters
    var_loc  Some variable containing the local value which should be reduced amongst all the processors in the communicator.
    var_glb  The globally reduced value
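
The four s_mpi_allreduce_* wrappers documented above share the same structure. A minimal standalone sketch of the summation variant follows; it assumes double precision in place of wp and MPI_COMM_WORLD as the communicator, and is not the MFC implementation.

    ! Hypothetical sketch of an all-reduce sum wrapper (not the MFC implementation)
    subroutine s_allreduce_sum_sketch(var_loc, var_glb)

        use mpi

        real(kind(0d0)), intent(in)  :: var_loc  ! local contribution of this rank
        real(kind(0d0)), intent(out) :: var_glb  ! sum over all ranks, returned on every rank

        integer :: ierr

        ! MPI_ALLREDUCE combines var_loc from every rank with MPI_SUM and delivers
        ! the result to every rank; swapping MPI_SUM for MPI_MIN or MPI_MAX (or the
        ! datatype for MPI_INTEGER) yields the other variants of this family.
        call MPI_ALLREDUCE(var_loc, var_glb, 1, MPI_DOUBLE_PRECISION, MPI_SUM, &
                           MPI_COMM_WORLD, ierr)

    end subroutine s_allreduce_sum_sketch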

◆ s_mpi_barrier()

impure subroutine m_mpi_common::s_mpi_barrier

Halts all processes until all have reached the barrier.


◆ s_mpi_decompose_computational_domain()

subroutine m_mpi_common::s_mpi_decompose_computational_domain

The purpose of this procedure is to optimally decompose the computational domain among the available processors. This is performed by assigning each processor approximately the same number of cells in each coordinate direction and then recomputing the affected global parameters.

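The even-split strategy described above can be illustrated for a single coordinate direction with the standalone sketch below; the argument names and the way the remainder cells are distributed are assumptions for illustration and are not taken from the MFC source.

    ! Hypothetical sketch of an even 1D domain split (not the MFC implementation)
    subroutine s_decompose_1d_sketch(n_cells_glb, num_procs_dir, rank_dir, n_cells_loc, offset)

        integer, intent(in)  :: n_cells_glb    ! global number of cells in this direction
        integer, intent(in)  :: num_procs_dir  ! number of ranks in this direction
        integer, intent(in)  :: rank_dir       ! this rank's coordinate in this direction
        integer, intent(out) :: n_cells_loc    ! cells assigned to this rank
        integer, intent(out) :: offset         ! global index of this rank's first cell

        integer :: quotient, remainder

        quotient = n_cells_glb/num_procs_dir
        remainder = mod(n_cells_glb, num_procs_dir)

        ! Every rank receives the quotient; the first 'remainder' ranks receive one
        ! extra cell, so that the per-rank counts differ by at most one.
        n_cells_loc = quotient
        if (rank_dir < remainder) n_cells_loc = n_cells_loc + 1

        ! Starting position of this rank's cells in the global numbering
        offset = rank_dir*quotient + min(rank_dir, remainder)

    end subroutine s_decompose_1d_sketch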

◆ s_mpi_finalize()

impure subroutine m_mpi_common::s_mpi_finalize

The subroutine finalizes the MPI execution environment.


◆ s_mpi_gather_data()

impure subroutine m_mpi_common::s_mpi_gather_data ( real(wp), dimension(counts), intent(in) my_vector,
integer, intent(in) counts,
real(wp), dimension(:), intent(out), allocatable gathered_vector,
integer, intent(in) root )
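
Given the interface above, with a per-rank element count and an allocatable output vector, a natural combination of building blocks is MPI_GATHER for the counts followed by MPI_GATHERV for the data. The standalone sketch below illustrates that approach under assumed names and double precision; it is not the MFC implementation.

    ! Hypothetical sketch of gathering variable-length vectors onto a root rank
    ! (not the MFC implementation)
    subroutine s_gather_data_sketch(my_vector, counts, gathered_vector, root)

        use mpi

        integer, intent(in) :: counts, root
        real(kind(0d0)), dimension(counts), intent(in) :: my_vector
        real(kind(0d0)), dimension(:), allocatable, intent(out) :: gathered_vector

        integer, allocatable :: recvcounts(:), displs(:)
        integer :: num_procs, proc_rank, i, ierr

        call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, proc_rank, ierr)

        allocate (recvcounts(num_procs), displs(num_procs))

        ! Collect each rank's element count on the root
        call MPI_GATHER(counts, 1, MPI_INTEGER, recvcounts, 1, MPI_INTEGER, &
                        root, MPI_COMM_WORLD, ierr)

        ! Build displacements and size the receive vector on the root; the other
        ! ranks only need a valid (zero-size) buffer argument
        if (proc_rank == root) then
            displs(1) = 0
            do i = 2, num_procs
                displs(i) = displs(i - 1) + recvcounts(i - 1)
            end do
            allocate (gathered_vector(sum(recvcounts)))
        else
            allocate (gathered_vector(0))
        end if

        ! Gather the variable-length vectors onto the root
        call MPI_GATHERV(my_vector, counts, MPI_DOUBLE_PRECISION, &
                         gathered_vector, recvcounts, displs, MPI_DOUBLE_PRECISION, &
                         root, MPI_COMM_WORLD, ierr)

    end subroutine s_gather_data_sketch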

◆ s_mpi_initialize()

impure subroutine m_mpi_common::s_mpi_initialize

The subroutine initializes the MPI execution environment and queries both the number of processors which will be available for the job and the local processor rank.

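For reference, a minimal standalone sketch of such an initialization routine is given below; it returns the results through illustrative output arguments rather than the module-level variables that the MFC routine presumably updates.

    ! Hypothetical sketch of MPI start-up (not the MFC implementation)
    subroutine s_mpi_initialize_sketch(num_procs, proc_rank)

        use mpi

        integer, intent(out) :: num_procs  ! number of ranks available for the job
        integer, intent(out) :: proc_rank  ! rank of the calling process

        integer :: ierr

        ! Start the MPI execution environment
        call MPI_INIT(ierr)

        ! Query the communicator size and the local processor rank
        call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, proc_rank, ierr)

    end subroutine s_mpi_initialize_sketch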

◆ s_mpi_reduce_maxloc()

impure subroutine m_mpi_common::s_mpi_reduce_maxloc ( real(wp), dimension(2), intent(inout) var_loc)

The following subroutine takes the first element of the two-element input variable and determines its maximum value over the entire computational domain. The result is stored back into the first element of the variable, while the rank of the processor whose subdomain contains the maximum is stored into the second element.

Parameters
    var_loc  On input, holds the local value and the processor rank, which are to be reduced among all the processors in the communicator. On output, holds the maximum of the local values and the rank of the process to which that value belongs.
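
The value-and-rank behaviour described above maps directly onto MPI's MPI_MAXLOC reduction operation. A minimal standalone sketch of that idea follows, assuming double precision in place of wp and MPI_COMM_WORLD as the communicator; it is not the MFC implementation.

    ! Hypothetical sketch of a maxloc reduction wrapper (not the MFC implementation)
    subroutine s_reduce_maxloc_sketch(var_loc)

        use mpi

        ! var_loc(1) holds the local value, var_loc(2) the calling rank (as a real).
        ! On return, var_loc(1) is the global maximum and var_loc(2) the owning rank.
        real(kind(0d0)), dimension(2), intent(inout) :: var_loc

        real(kind(0d0)), dimension(2) :: var_glb
        integer :: ierr

        ! MPI_MAXLOC on MPI_2DOUBLE_PRECISION reduces (value, index) pairs, keeping
        ! the maximum value together with the index carried in the second slot
        call MPI_ALLREDUCE(var_loc, var_glb, 1, MPI_2DOUBLE_PRECISION, MPI_MAXLOC, &
                           MPI_COMM_WORLD, ierr)

        var_loc = var_glb

    end subroutine s_reduce_maxloc_sketch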

◆ s_mpi_reduce_min()

impure subroutine m_mpi_common::s_mpi_reduce_min ( real(wp), intent(inout) var_loc)

The following subroutine takes the input variable and determines its minimum value over the entire computational domain. The result is stored back into the input variable.

Parameters
    var_loc  On input, holds the local value to be reduced among all the processors in the communicator. On output, holds the minimum of all the local values.

◆ s_mpi_reduce_stability_criteria_extrema()

impure subroutine m_mpi_common::s_mpi_reduce_stability_criteria_extrema ( real(wp), intent(in) icfl_max_loc,
real(wp), intent(in) vcfl_max_loc,
real(wp), intent(in) rc_min_loc,
real(wp), intent(out) icfl_max_glb,
real(wp), intent(out) vcfl_max_glb,
real(wp), intent(out) rc_min_glb )

The goal of this subroutine is to determine the global extrema of the stability criteria in the computational domain. This is performed by sifting through the local extrema of each stability criterion. Note that each of the local extrema is from a single process, within its assigned section of the computational domain. Finally, note that the global extrema values are only kept on the rank 0 processor.

Parameters
    icfl_max_loc  Local maximum ICFL stability criterion
    vcfl_max_loc  Local maximum VCFL stability criterion
    rc_min_loc    Local minimum Rc stability criterion
    icfl_max_glb  Global maximum ICFL stability criterion
    vcfl_max_glb  Global maximum VCFL stability criterion
    rc_min_glb    Global minimum Rc stability criterion
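
Because the global extrema are only kept on the rank 0 processor, the natural primitive is a rooted reduction (MPI_REDUCE) rather than an all-reduce. The standalone sketch below illustrates this for a single criterion, assuming double precision and rank 0 as the root; it is not the MFC implementation.

    ! Hypothetical sketch of reducing one stability criterion to rank 0
    ! (not the MFC implementation)
    subroutine s_reduce_icfl_to_root_sketch(icfl_max_loc, icfl_max_glb)

        use mpi

        real(kind(0d0)), intent(in)  :: icfl_max_loc  ! local maximum ICFL number
        real(kind(0d0)), intent(out) :: icfl_max_glb  ! global maximum, valid on rank 0 only

        integer :: ierr

        ! MPI_REDUCE delivers the maximum of the local values to the root rank only;
        ! the remaining criteria would use further calls with MPI_MAX or MPI_MIN
        call MPI_REDUCE(icfl_max_loc, icfl_max_glb, 1, MPI_DOUBLE_PRECISION, MPI_MAX, &
                        0, MPI_COMM_WORLD, ierr)

    end subroutine s_reduce_icfl_to_root_sketch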

◆ s_mpi_sendrecv_grid_variables_buffers()

subroutine m_mpi_common::s_mpi_sendrecv_grid_variables_buffers ( integer, intent(in) mpi_dir,
integer, intent(in) pbc_loc )

The goal of this procedure is to populate the buffers of the grid variables by communicating with the neighboring processors. Note that only the buffers of the cell-width distributions are handled in such a way. This is because the buffers of cell-boundary locations may be calculated directly from those of the cell-width distributions.

Parameters
    mpi_dir  MPI communication coordinate direction
    pbc_loc  Processor boundary condition (PBC) location

◆ s_mpi_sendrecv_variables_buffers()

subroutine m_mpi_common::s_mpi_sendrecv_variables_buffers ( type(scalar_field), dimension(1:), intent(inout) q_comm,
integer, intent(in) mpi_dir,
integer, intent(in) pbc_loc,
integer, intent(in) nvar,
real(stp), dimension(idwbuff(1)%beg:, idwbuff(2)%beg:, idwbuff(3)%beg:, 1:, 1:), intent(inout), optional pb_in,
real(stp), dimension(idwbuff(1)%beg:, idwbuff(2)%beg:, idwbuff(3)%beg:, 1:, 1:), intent(inout), optional mv_in )

The goal of this procedure is to populate the buffers of the cell-average conservative variables by communicating with the neighboring processors.

Parameters
    q_comm   Cell-average conservative variables
    mpi_dir  MPI communication coordinate direction
    pbc_loc  Processor boundary condition (PBC) location
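
Such a routine typically follows a pack / exchange / unpack pattern. The standalone sketch below illustrates that pattern for a one-dimensional array of fields and a single boundary; the array layout, buffer sizing, and neighbor-rank arguments are illustrative assumptions and the sketch is not the MFC implementation.

    ! Hypothetical sketch of a halo exchange via packed buffers (not the MFC implementation)
    subroutine s_halo_exchange_sketch(q, nvar, n_loc, buff_size, left, right)

        use mpi

        integer, intent(in) :: nvar, n_loc, buff_size  ! #fields, interior cells, halo width
        integer, intent(in) :: left, right             ! neighbor ranks (MPI_PROC_NULL if absent)
        real(kind(0d0)), intent(inout) :: q(nvar, 1 - buff_size:n_loc + buff_size)

        real(kind(0d0)) :: buff_send(nvar*buff_size), buff_recv(nvar*buff_size)
        integer :: i, j, k, ierr

        ! Pack the interior cells adjacent to the right boundary into a contiguous buffer
        k = 0
        do j = n_loc - buff_size + 1, n_loc
            do i = 1, nvar
                k = k + 1
                buff_send(k) = q(i, j)
            end do
        end do

        ! Exchange with the neighbors: send to the right, receive from the left
        call MPI_SENDRECV(buff_send, nvar*buff_size, MPI_DOUBLE_PRECISION, right, 0, &
                          buff_recv, nvar*buff_size, MPI_DOUBLE_PRECISION, left, 0, &
                          MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)

        ! Unpack the received data into the left halo (ghost) cells, if a neighbor exists
        if (left /= MPI_PROC_NULL) then
            k = 0
            do j = 1 - buff_size, 0
                do i = 1, nvar
                    k = k + 1
                    q(i, j) = buff_recv(k)
                end do
            end do
        end if

    end subroutine s_halo_exchange_sketch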

◆ s_prohibit_abort()

impure subroutine m_mpi_common::s_prohibit_abort ( character(len=*), intent(in) condition,
character(len=*), intent(in) message )

Variable Documentation

◆ buff_recv

real(wp), dimension(:), allocatable, private m_mpi_common::buff_recv
private

buff_recv is utilized to receive and unpack the buffer of the cell-average primitive variables, for a single computational domain boundary at a time, from the relevant neighboring processor.

◆ buff_send

real(wp), dimension(:), allocatable, private m_mpi_common::buff_send
private

This variable is utilized to pack and send the buffer of the cell-average primitive variables, for a single computational domain boundary at a time, to the relevant neighboring processor.

◆ halo_size

integer(kind=8) m_mpi_common::halo_size

◆ v_size

integer, private m_mpi_common::v_size
private