MFC: Pre-Process
High-fidelity multiphase flow simulation
m_mpi_common.fpp.f90 File Reference

Modules

module  m_mpi_common
 The module serves as a proxy to the parameters and subroutines available in the MPI implementation's MPI module. Specifically, the purpose of the proxy is to harness basic MPI commands into more complex procedures so as to accomplish the communication goals of the simulation.
 

Functions/Subroutines

impure subroutine m_mpi_common::s_initialize_mpi_common_module
 The computation of parameters, the allocation of memory, the association of pointers, and/or the execution of any other procedures necessary to set up the module.
 
impure subroutine m_mpi_common::s_mpi_initialize
 The subroutine initializes the MPI execution environment and queries both the number of processors which will be available for the job and the local processor rank.
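
 A minimal sketch of what such an initialization typically amounts to with the standard MPI Fortran bindings is shown below; it is not the MFC source, and the names num_procs, proc_rank, and ierr are illustrative.

    ! Minimal sketch (illustrative names, not the MFC implementation):
    ! start MPI, then query the communicator size and the local rank.
    subroutine s_mpi_initialize_sketch
        use mpi
        implicit none
        integer :: num_procs, proc_rank, ierr
        call MPI_INIT(ierr)
        call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, proc_rank, ierr)
    end subroutine s_mpi_initialize_sketch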
 
impure subroutine m_mpi_common::s_initialize_mpi_data (q_cons_vf, ib_markers, levelset, levelset_norm, beta)
 
subroutine m_mpi_common::s_initialize_mpi_data_ds (q_cons_vf)
 
impure subroutine m_mpi_common::s_mpi_gather_data (my_vector, counts, gathered_vector, root)
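
 No description is attached to this entry. Judging from the argument names, it gathers variable-length local vectors onto a root rank; a hedged sketch of that pattern with MPI_GATHERV follows. The double-precision kind (standing in for the module's real(wp)) and the displacement bookkeeping are assumptions for illustration.

    ! Hedged sketch: gather variable-length local vectors onto the root rank.
    ! Argument roles are inferred from the signature above.
    subroutine s_mpi_gather_data_sketch(my_vector, counts, gathered_vector, root)
        use mpi
        implicit none
        double precision, intent(in)  :: my_vector(:)
        integer,          intent(in)  :: counts(:)          ! one entry per rank
        double precision, intent(out) :: gathered_vector(:)
        integer,          intent(in)  :: root
        integer :: displs(size(counts)), i, ierr
        displs(1) = 0                                       ! receive offsets on root
        do i = 2, size(counts)
            displs(i) = displs(i - 1) + counts(i - 1)
        end do
        call MPI_GATHERV(my_vector, size(my_vector), MPI_DOUBLE_PRECISION, &
                         gathered_vector, counts, displs, MPI_DOUBLE_PRECISION, &
                         root, MPI_COMM_WORLD, ierr)
    end subroutine s_mpi_gather_data_sketch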
 
impure subroutine m_mpi_common::mpi_bcast_time_step_values (proc_time, time_avg)
 
impure subroutine m_mpi_common::s_prohibit_abort (condition, message)
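
 No description is attached to this entry. The sketch below is only a guess from the name and argument list: terminate the run with a message when a prohibited condition is met. The logical type of condition and the termination via MPI_ABORT are assumptions.

    ! Hedged sketch -- behavior and argument types inferred, not taken from MFC.
    subroutine s_prohibit_abort_sketch(condition, message)
        use mpi
        implicit none
        logical,          intent(in) :: condition   ! assumed type
        character(len=*), intent(in) :: message
        integer :: ierr
        if (condition) then
            print *, 'Error: '//trim(message)
            call MPI_ABORT(MPI_COMM_WORLD, 1, ierr)  ! nonzero error code
        end if
    end subroutine s_prohibit_abort_sketch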
 
impure subroutine m_mpi_common::s_mpi_reduce_stability_criteria_extrema (icfl_max_loc, vcfl_max_loc, rc_min_loc, icfl_max_glb, vcfl_max_glb, rc_min_glb)
 The goal of this subroutine is to determine the global extrema of the stability criteria in the computational domain. This is performed by sifting through the local extrema of each stability criterion, each of which comes from a single process within its assigned section of the computational domain. Note that the global extrema values are only bookkept on the rank 0 processor.
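
 A hedged sketch of the reduction pattern described above: since the global extrema are bookkept on rank 0 only, plain MPI_REDUCE calls with MPI_MAX and MPI_MIN to root 0 suffice. Double precision stands in for the module's real(wp) kind.

    ! Hedged sketch of reducing local stability-criteria extrema onto rank 0.
    subroutine s_reduce_stability_extrema_sketch(icfl_max_loc, vcfl_max_loc, rc_min_loc, &
                                                 icfl_max_glb, vcfl_max_glb, rc_min_glb)
        use mpi
        implicit none
        double precision, intent(in)  :: icfl_max_loc, vcfl_max_loc, rc_min_loc
        double precision, intent(out) :: icfl_max_glb, vcfl_max_glb, rc_min_glb
        integer :: ierr
        call MPI_REDUCE(icfl_max_loc, icfl_max_glb, 1, MPI_DOUBLE_PRECISION, &
                        MPI_MAX, 0, MPI_COMM_WORLD, ierr)
        call MPI_REDUCE(vcfl_max_loc, vcfl_max_glb, 1, MPI_DOUBLE_PRECISION, &
                        MPI_MAX, 0, MPI_COMM_WORLD, ierr)
        call MPI_REDUCE(rc_min_loc, rc_min_glb, 1, MPI_DOUBLE_PRECISION, &
                        MPI_MIN, 0, MPI_COMM_WORLD, ierr)
    end subroutine s_reduce_stability_extrema_sketch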
 
impure subroutine m_mpi_common::s_mpi_allreduce_sum (var_loc, var_glb)
 The following subroutine takes the input local variable from all processors and reduces it to the sum of all values. The reduced value is made available on every processor.
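
 A hedged sketch of the allreduce pattern shared by this wrapper and the three reduction wrappers that follow: every processor contributes its local value and every processor receives the reduced result. Swapping MPI_SUM for MPI_MIN or MPI_MAX (or MPI_DOUBLE_PRECISION for MPI_INTEGER) yields the other variants; double precision stands in for real(wp).

    ! Hedged sketch of the common allreduce wrapper pattern.
    subroutine s_mpi_allreduce_sum_sketch(var_loc, var_glb)
        use mpi
        implicit none
        double precision, intent(in)  :: var_loc
        double precision, intent(out) :: var_glb
        integer :: ierr
        call MPI_ALLREDUCE(var_loc, var_glb, 1, MPI_DOUBLE_PRECISION, &
                           MPI_SUM, MPI_COMM_WORLD, ierr)
    end subroutine s_mpi_allreduce_sum_sketch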
 
impure subroutine m_mpi_common::s_mpi_allreduce_integer_sum (var_loc, var_glb)
 The following subroutine takes the input local integer variable from all processors and reduces it to the sum of all values. The reduced value is made available on every processor.
 
impure subroutine m_mpi_common::s_mpi_allreduce_min (var_loc, var_glb)
 The following subroutine takes the input local variable from all processors and reduces it to the minimum of all values. The reduced value is made available on every processor.
 
impure subroutine m_mpi_common::s_mpi_allreduce_max (var_loc, var_glb)
 The following subroutine takes the input local variable from all processors and reduces it to the maximum of all values. The reduced value is made available on every processor.
 
impure subroutine m_mpi_common::s_mpi_reduce_min (var_loc)
 The following subroutine takes the input variable and determines its minimum value over the entire computational domain. The result is stored back into the input variable.
 
impure subroutine m_mpi_common::s_mpi_reduce_maxloc (var_loc)
 The following subroutine takes the first element of the two-element input variable and determines its maximum value over the entire computational domain. The result is stored back into the first element of the variable, while the rank of the processor in charge of the subdomain containing the maximum is stored into the second element.
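
 A hedged sketch of the MAXLOC pattern described above, assuming the standard MPI_2DOUBLE_PRECISION pairing type; on input the second element is expected to already carry the calling processor's rank.

    ! Hedged sketch of the value/rank MAXLOC reduction described above.
    ! On input, var_loc(1) holds the local maximum and var_loc(2) the caller's
    ! rank (stored as a real); on output, both describe the global maximum.
    subroutine s_mpi_reduce_maxloc_sketch(var_loc)
        use mpi
        implicit none
        double precision, intent(inout) :: var_loc(2)
        double precision :: var_glb(2)
        integer :: ierr
        call MPI_ALLREDUCE(var_loc, var_glb, 1, MPI_2DOUBLE_PRECISION, &
                           MPI_MAXLOC, MPI_COMM_WORLD, ierr)
        var_loc = var_glb
    end subroutine s_mpi_reduce_maxloc_sketch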
 
impure subroutine m_mpi_common::s_mpi_abort (prnt, code)
 The subroutine terminates the MPI execution environment.
 
impure subroutine m_mpi_common::s_mpi_barrier
 Halts all processes until all have reached the barrier.
 
impure subroutine m_mpi_common::s_mpi_finalize
 The subroutine finalizes the MPI execution environment.
 
subroutine m_mpi_common::s_mpi_sendrecv_variables_buffers (q_comm, mpi_dir, pbc_loc, nvar, pb_in, mv_in)
 The goal of this procedure is to populate the buffers of the cell-average conservative variables by communicating with the neighboring processors.
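
 A hedged sketch of the exchange pattern described above: the halo layer is packed into a contiguous send buffer (cf. buff_send below), exchanged with the neighboring processor in a single MPI_SENDRECV, and then unpacked from the receive buffer into the ghost cells. The packing and unpacking loops are omitted, and neighbor ranks, counts, and tags are illustrative placeholders.

    ! Hedged sketch of one halo exchange between neighboring processors.
    subroutine s_halo_exchange_sketch(buff_send, buff_recv, halo_count, nbr_send, nbr_recv)
        use mpi
        implicit none
        double precision, intent(in)  :: buff_send(:)   ! packed halo data
        double precision, intent(out) :: buff_recv(:)   ! incoming halo data
        integer,          intent(in)  :: halo_count, nbr_send, nbr_recv
        integer :: ierr
        call MPI_SENDRECV(buff_send, halo_count, MPI_DOUBLE_PRECISION, nbr_send, 0, &
                          buff_recv, halo_count, MPI_DOUBLE_PRECISION, nbr_recv, 0, &
                          MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
    end subroutine s_halo_exchange_sketch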
 
subroutine m_mpi_common::s_mpi_decompose_computational_domain
 The purpose of this procedure is to optimally decompose the computational domain among the available processors. This is performed by attempting to award each processor, in each of the coordinate directions, approximately the same number of cells, and then recomputing the affected global parameters.
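
 A hedged sketch of the even-split idea described above, for a single coordinate direction: every processor receives the base cell count and the first few processors absorb the remainder. The recomputation of the affected global parameters is omitted, and all names are illustrative.

    ! Hedged sketch: award each rank roughly the same number of cells.
    subroutine s_split_cells_sketch(n_cells, num_procs_dir, rank_dir, n_cells_loc)
        implicit none
        integer, intent(in)  :: n_cells        ! global cell count in this direction
        integer, intent(in)  :: num_procs_dir  ! ranks assigned to this direction
        integer, intent(in)  :: rank_dir       ! this rank's coordinate in the direction
        integer, intent(out) :: n_cells_loc    ! cells awarded to this rank
        n_cells_loc = n_cells/num_procs_dir
        if (rank_dir < mod(n_cells, num_procs_dir)) n_cells_loc = n_cells_loc + 1
    end subroutine s_split_cells_sketch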
 
subroutine m_mpi_common::s_mpi_sendrecv_grid_variables_buffers (mpi_dir, pbc_loc)
 The goal of this procedure is to populate the buffers of the grid variables by communicating with the neighboring processors. Note that only the buffers of the cell-width distributions are handled in such a way. This is because the buffers of cell-boundary locations may be calculated directly from those of the cell-width distributions.
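
 A hedged sketch of the relationship noted above: once the cell-width distribution in the buffer region has been received, the cell-boundary locations there follow by simple accumulation. Array names, extents, and indexing are illustrative.

    ! Hedged sketch: rebuild right-side ghost cell boundaries from cell widths.
    subroutine s_rebuild_boundaries_sketch(dx, x_cb, m, buff_size)
        implicit none
        integer,          intent(in)    :: m, buff_size
        double precision, intent(in)    :: dx(-buff_size:m + buff_size)
        double precision, intent(inout) :: x_cb(-1 - buff_size:m + buff_size)
        integer :: i
        do i = m + 1, m + buff_size            ! right-side buffer cells
            x_cb(i) = x_cb(i - 1) + dx(i)
        end do
    end subroutine s_rebuild_boundaries_sketch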
 
impure subroutine m_mpi_common::s_finalize_mpi_common_module
 Module deallocation and/or disassociation procedures.
 

Variables

integer, private m_mpi_common::v_size
 
real(wp), dimension(:), allocatable, private m_mpi_common::buff_send
 This variable is utilized to pack and send the buffer of the cell-average primitive variables, for a single computational domain boundary at a time, to the relevant neighboring processor.
 
real(wp), dimension(:), allocatable, private m_mpi_common::buff_recv
 buff_recv is utilized to receive and unpack the buffer of the cell-average primitive variables, for a single computational domain boundary at a time, from the relevant neighboring processor.
 
integer(kind=8) m_mpi_common::halo_size