MFC: Pre-Process
High-fidelity multiphase flow simulation
m_mpi_common.fpp.f90 File Reference

Functions/Subroutines

program __m_mpi_common_fpp_f90__
 
subroutine s_initialize_mpi_common_module
 The computation of parameters, the allocation of memory, the association of pointers, and/or the execution of any other procedures necessary to set up the module.
 
subroutine s_mpi_initialize
 The subroutine initializes the MPI execution environment and queries both the number of processors available for the job and the local processor rank.
 
subroutine s_initialize_mpi_data (q_cons_vf, ib_markers, levelset, levelset_norm, beta)
 
subroutine s_mpi_gather_data (my_vector, counts, gathered_vector, root)
 
subroutine mpi_bcast_time_step_values (proc_time, time_avg)
 
subroutine s_prohibit_abort (condition, message)
 
subroutine s_mpi_reduce_stability_criteria_extrema (icfl_max_loc, vcfl_max_loc, ccfl_max_loc, rc_min_loc, icfl_max_glb, vcfl_max_glb, ccfl_max_glb, rc_min_glb)
 The goal of this subroutine is to determine the global extrema of the stability criteria in the computational domain. This is performed by sifting through the local extrema of each stability criterion, where each local extremum comes from a single process within its assigned section of the computational domain. Note that the global extrema values are only bookkept on the rank 0 processor.
 
subroutine s_mpi_allreduce_sum (var_loc, var_glb)
 The following subroutine takes the local input variable from all processors and reduces it to the sum of all values. The reduced value is returned to every processor.
 
subroutine s_mpi_allreduce_min (var_loc, var_glb)
 The following subroutine takes the local input variable from all processors and reduces it to the minimum of all values. The reduced value is returned to every processor.
 
subroutine s_mpi_allreduce_max (var_loc, var_glb)
 The following subroutine takes the local input variable from all processors and reduces it to the maximum of all values. The reduced value is returned to every processor.
 
subroutine s_mpi_reduce_min (var_loc)
 The following subroutine takes the input variable and determines its minimum value over the entire computational domain. The result is stored back into the input variable.
 
subroutine s_mpi_reduce_maxloc (var_loc)
 The following subroutine takes the first element of the two-element input variable and determines its maximum value over the entire computational domain. The result is stored back into the first element of the variable, while the rank of the processor in charge of the subdomain containing the maximum is stored into the second element.
 
subroutine s_mpi_abort (prnt, code)
 The subroutine terminates the MPI execution environment.
 
subroutine s_mpi_barrier
 Halts all processes until all have reached the barrier.
 
subroutine s_mpi_finalize
 The subroutine finalizes the MPI execution environment.
 
subroutine s_mpi_sendrecv_variables_buffers (q_cons_vf, pb, mv, mpi_dir, pbc_loc)
 The goal of this procedure is to populate the buffers of the cell-average conservative variables by communicating with the neighboring processors.
 
subroutine s_finalize_mpi_common_module
 Module deallocation and/or disassociation procedures.
 

Function/Subroutine Documentation

◆ __m_mpi_common_fpp_f90__()

program __m_mpi_common_fpp_f90__

◆ mpi_bcast_time_step_values()

subroutine __m_mpi_common_fpp_f90__::mpi_bcast_time_step_values ( real(wp), dimension(0:num_procs - 1), intent(inout) proc_time,
real(wp), intent(inout) time_avg )
private
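
No detailed description is generated for this routine. From the interface, it evidently shares per-process timing data; one plausible pattern is to gather each rank's measured time into proc_time on the root and broadcast the average. A minimal self-contained sketch of that pattern, which is an illustration and not MFC's actual implementation:

    program bcast_time_sketch
        use mpi
        implicit none
        integer, parameter :: wp = kind(1.d0)   ! stand-in for MFC's working precision
        real(wp), allocatable :: proc_time(:)
        real(wp) :: time_avg, my_time
        integer :: ierr, rank, num_procs

        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
        call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)

        allocate (proc_time(0:num_procs - 1))
        my_time = real(rank, wp)                ! stand-in for a measured wall time

        ! Collect every rank's time on rank 0, average there, then broadcast
        call MPI_GATHER(my_time, 1, MPI_DOUBLE_PRECISION, proc_time, 1, &
                        MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
        if (rank == 0) time_avg = sum(proc_time)/num_procs
        call MPI_BCAST(time_avg, 1, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

        call MPI_FINALIZE(ierr)
    end program bcast_time_sketch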

◆ s_finalize_mpi_common_module()

subroutine __m_mpi_common_fpp_f90__::s_finalize_mpi_common_module
private

Module deallocation and/or disassociation procedures.


◆ s_initialize_mpi_common_module()

subroutine __m_mpi_common_fpp_f90__::s_initialize_mpi_common_module

The computation of parameters, the allocation of memory, the association of pointers, and/or the execution of any other procedures necessary to set up the module.


◆ s_initialize_mpi_data()

subroutine __m_mpi_common_fpp_f90__::s_initialize_mpi_data ( type(scalar_field), dimension(sys_size), intent(in) q_cons_vf,
type(integer_field), intent(in), optional ib_markers,
type(levelset_field), intent(in), optional levelset,
type(levelset_norm_field), intent(in), optional levelset_norm,
type(scalar_field), intent(in), optional beta )
private

◆ s_mpi_abort()

subroutine __m_mpi_common_fpp_f90__::s_mpi_abort ( character(len=*), intent(in), optional prnt,
integer, intent(in), optional code )
private

The subroutine terminates the MPI execution environment.

Parameters
prnt    Error message to be printed
code    Error code to terminate with (optional)
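
The standard way to terminate an MPI job with a diagnostic is MPI_ABORT. A minimal sketch of the pattern, with the optional-argument handling assumed rather than taken from MFC's source:

    program abort_sketch
        use mpi
        implicit none
        integer :: ierr
        call MPI_INIT(ierr)
        call s_abort_sketch('fatal: demonstration abort', 1)
    contains
        subroutine s_abort_sketch(prnt, code)
            character(len=*), intent(in), optional :: prnt
            integer, intent(in), optional :: code
            integer :: ierr_, exit_code
            if (present(prnt)) print *, trim(prnt)
            exit_code = 1
            if (present(code)) exit_code = code
            ! MPI_ABORT terminates every rank in the communicator
            call MPI_ABORT(MPI_COMM_WORLD, exit_code, ierr_)
        end subroutine s_abort_sketch
    end program abort_sketch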

◆ s_mpi_allreduce_max()

subroutine __m_mpi_common_fpp_f90__::s_mpi_allreduce_max ( real(wp), intent(in) var_loc,
real(wp), intent(out) var_glb )
private

The following subroutine takes the local input variable from all processors and reduces it to the maximum of all values. The reduced value is returned to every processor.

Parameters
var_loc    Some variable containing the local value which should be reduced amongst all the processors in the communicator
var_glb    The globally reduced value

◆ s_mpi_allreduce_min()

subroutine __m_mpi_common_fpp_f90__::s_mpi_allreduce_min ( real(wp), intent(in) var_loc,
real(wp), intent(out) var_glb )
private

The following subroutine takes the local input variable from all processors and reduces it to the minimum of all values. The reduced value is returned to every processor.

Parameters
var_loc    Some variable containing the local value which should be reduced amongst all the processors in the communicator
var_glb    The globally reduced value

◆ s_mpi_allreduce_sum()

subroutine __m_mpi_common_fpp_f90__::s_mpi_allreduce_sum ( real(wp), intent(in) var_loc,
real(wp), intent(out) var_glb )
private

The following subroutine takes the local input variable from all processors and reduces it to the sum of all values. The reduced value is returned to every processor.

Parameters
var_loc    Some variable containing the local value which should be reduced amongst all the processors in the communicator
var_glb    The globally reduced value
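
All three s_mpi_allreduce_* routines map onto a single MPI_ALLREDUCE call that differs only in the reduction operation. A minimal self-contained sketch (not MFC's actual source) for the MPI_SUM case; swapping in MPI_MIN or MPI_MAX yields the other two:

    program allreduce_sketch
        use mpi
        implicit none
        integer, parameter :: wp = kind(1.d0)   ! stand-in for MFC's working precision
        real(wp) :: var_loc, var_glb
        integer :: ierr, rank
        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
        var_loc = real(rank + 1, wp)
        ! Swap MPI_SUM for MPI_MIN or MPI_MAX to obtain the other two routines
        call MPI_ALLREDUCE(var_loc, var_glb, 1, MPI_DOUBLE_PRECISION, &
                           MPI_SUM, MPI_COMM_WORLD, ierr)
        print *, 'rank', rank, 'global sum =', var_glb
        call MPI_FINALIZE(ierr)
    end program allreduce_sketch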

◆ s_mpi_barrier()

subroutine __m_mpi_common_fpp_f90__::s_mpi_barrier
private

Halts all processes until all have reached the barrier.


◆ s_mpi_finalize()

subroutine __m_mpi_common_fpp_f90__::s_mpi_finalize
private

The subroutine finalizes the MPI execution environment.


◆ s_mpi_gather_data()

subroutine __m_mpi_common_fpp_f90__::s_mpi_gather_data ( real(wp), dimension(counts), intent(in) my_vector,
integer, intent(in) counts,
real(wp), dimension(:), intent(out), allocatable gathered_vector,
integer, intent(in) root )
private
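
No detailed description is generated for this routine, but the interface (a per-rank vector of length counts collected into an allocatable vector on root) points at a variable-count gather, i.e. MPI_GATHERV. A hedged, self-contained sketch in which the per-rank counts are illustrative:

    program gather_sketch
        use mpi
        implicit none
        integer, parameter :: wp = kind(1.d0)
        integer :: ierr, rank, num_procs, i, counts
        integer, allocatable :: recvcounts(:), displs(:)
        real(wp), allocatable :: my_vector(:), gathered_vector(:)

        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
        call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)

        counts = rank + 1                 ! illustrative: each rank sends a different length
        allocate (my_vector(counts)); my_vector = real(rank, wp)

        ! Every rank needs the counts to size the receive layout
        allocate (recvcounts(num_procs), displs(num_procs))
        call MPI_ALLGATHER(counts, 1, MPI_INTEGER, recvcounts, 1, &
                           MPI_INTEGER, MPI_COMM_WORLD, ierr)
        displs(1) = 0
        do i = 2, num_procs
            displs(i) = displs(i - 1) + recvcounts(i - 1)
        end do
        allocate (gathered_vector(sum(recvcounts)))

        ! Variable-length gather onto the root rank
        call MPI_GATHERV(my_vector, counts, MPI_DOUBLE_PRECISION, gathered_vector, &
                         recvcounts, displs, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
        call MPI_FINALIZE(ierr)
    end program gather_sketch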

◆ s_mpi_initialize()

subroutine __m_mpi_common_fpp_f90__::s_mpi_initialize
private

The subroutine initializes the MPI execution environment and queries both the number of processors available for the job and the local processor rank.

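
The three MPI calls this description names, in a minimal sketch (not MFC's source, which also performs additional setup):

    program init_sketch
        use mpi
        implicit none
        integer :: ierr, num_procs, proc_rank
        call MPI_INIT(ierr)                                  ! start the MPI environment
        call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)  ! processors available for the job
        call MPI_COMM_RANK(MPI_COMM_WORLD, proc_rank, ierr)  ! local processor rank
        print *, 'rank', proc_rank, 'of', num_procs
        call MPI_FINALIZE(ierr)
    end program init_sketch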

◆ s_mpi_reduce_maxloc()

subroutine __m_mpi_common_fpp_f90__::s_mpi_reduce_maxloc ( real(wp), dimension(2), intent(inout) var_loc)
private

The following subroutine takes the first element of the two-element input variable and determines its maximum value over the entire computational domain. The result is stored back into the first element of the variable, while the rank of the processor in charge of the subdomain containing the maximum is stored into the second element.

Parameters
var_loc    On input, this variable holds the local value and processor rank, which are to be reduced among all the processors in the communicator. On output, this variable holds the maximum value, reduced amongst all of the local values, and the process rank to which the value belongs.
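
MPI provides MPI_MAXLOC for exactly this value-plus-location reduction; with both entries of the pair stored as reals, the matching datatype is MPI_2DOUBLE_PRECISION. A minimal sketch, assuming wp corresponds to double precision:

    program maxloc_sketch
        use mpi
        implicit none
        integer, parameter :: wp = kind(1.d0)
        real(wp) :: var_loc(2), var_glb(2)
        integer :: ierr, rank
        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
        var_loc(1) = real(10 - rank, wp)   ! element 1: the local value
        var_loc(2) = real(rank, wp)        ! element 2: the owning rank, stored as a real
        ! count = 1 pair; MPI_MAXLOC keeps the max value and its paired rank
        call MPI_ALLREDUCE(var_loc, var_glb, 1, MPI_2DOUBLE_PRECISION, &
                           MPI_MAXLOC, MPI_COMM_WORLD, ierr)
        var_loc = var_glb                  ! result stored back into the argument
        call MPI_FINALIZE(ierr)
    end program maxloc_sketch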

◆ s_mpi_reduce_min()

subroutine __m_mpi_common_fpp_f90__::s_mpi_reduce_min ( real(wp), intent(inout) var_loc)
private

The following subroutine takes the input variable and determines its minimum value over the entire computational domain. The result is stored back into the input variable.

Parameters
var_loc    Holds the local value to be reduced among all the processors in the communicator. On output, the variable holds the minimum value, reduced amongst all of the local values.
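
Because the result overwrites the input argument, MPI_IN_PLACE fits this interface naturally. A minimal sketch of that pattern (MFC may equally use a temporary; this is an illustration):

    program reduce_min_sketch
        use mpi
        implicit none
        integer, parameter :: wp = kind(1.d0)
        real(wp) :: var_loc
        integer :: ierr, rank
        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
        var_loc = real(rank + 1, wp)
        ! MPI_IN_PLACE as the send buffer makes var_loc both input and output
        call MPI_ALLREDUCE(MPI_IN_PLACE, var_loc, 1, MPI_DOUBLE_PRECISION, &
                           MPI_MIN, MPI_COMM_WORLD, ierr)
        call MPI_FINALIZE(ierr)
    end program reduce_min_sketch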

◆ s_mpi_reduce_stability_criteria_extrema()

subroutine __m_mpi_common_fpp_f90__::s_mpi_reduce_stability_criteria_extrema ( real(wp), intent(in) icfl_max_loc,
real(wp), intent(in) vcfl_max_loc,
real(wp), intent(in) ccfl_max_loc,
real(wp), intent(in) rc_min_loc,
real(wp), intent(out) icfl_max_glb,
real(wp), intent(out) vcfl_max_glb,
real(wp), intent(out) ccfl_max_glb,
real(wp), intent(out) rc_min_glb )
private

The goal of this subroutine is to determine the global extrema of the stability criteria in the computational domain. This is performed by sifting through the local extrema of each stability criterion, where each local extremum comes from a single process within its assigned section of the computational domain. Note that the global extrema values are only bookkept on the rank 0 processor.

Parameters
icfl_max_loc    Local maximum ICFL stability criterion
vcfl_max_loc    Local maximum VCFL stability criterion
ccfl_max_loc    Local maximum CCFL stability criterion
rc_min_loc      Local minimum Rc stability criterion
icfl_max_glb    Global maximum ICFL stability criterion
vcfl_max_glb    Global maximum VCFL stability criterion
ccfl_max_glb    Global maximum CCFL stability criterion
rc_min_glb      Global minimum Rc stability criterion
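
Since the global values are only bookkept on rank 0, MPI_REDUCE to root 0 (rather than MPI_ALLREDUCE) matches the description. A sketch, not MFC's source, for two of the four reductions; the remaining two follow the same pattern:

    program extrema_sketch
        use mpi
        implicit none
        integer, parameter :: wp = kind(1.d0)
        real(wp) :: icfl_max_loc, icfl_max_glb, rc_min_loc, rc_min_glb
        integer :: ierr, rank
        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
        icfl_max_loc = real(rank, wp)     ! stand-ins for locally computed criteria
        rc_min_loc = real(rank + 1, wp)
        ! Maxima reduce with MPI_MAX, minima with MPI_MIN; results land on rank 0 only
        call MPI_REDUCE(icfl_max_loc, icfl_max_glb, 1, MPI_DOUBLE_PRECISION, &
                        MPI_MAX, 0, MPI_COMM_WORLD, ierr)
        call MPI_REDUCE(rc_min_loc, rc_min_glb, 1, MPI_DOUBLE_PRECISION, &
                        MPI_MIN, 0, MPI_COMM_WORLD, ierr)
        call MPI_FINALIZE(ierr)
    end program extrema_sketch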

◆ s_mpi_sendrecv_variables_buffers()

subroutine __m_mpi_common_fpp_f90__::s_mpi_sendrecv_variables_buffers ( type(scalar_field), dimension(sys_size), intent(inout) q_cons_vf,
real(wp), dimension(idwbuff(1)%beg:, idwbuff(2)%beg:, idwbuff(3)%beg:, 1:, 1:), intent(inout) pb,
real(wp), dimension(idwbuff(1)%beg:, idwbuff(2)%beg:, idwbuff(3)%beg:, 1:, 1:), intent(inout) mv,
integer, intent(in) mpi_dir,
integer, intent(in) pbc_loc )
private

The goal of this procedure is to populate the buffers of the cell-average conservative variables by communicating with the neighboring processors.

Parameters
q_cons_vf    Cell-average conservative variables
mpi_dir      MPI communication coordinate direction
pbc_loc      Processor boundary condition (PBC) location
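
The core of such a buffer exchange is MPI_SENDRECV between neighboring ranks. A one-dimensional sketch that exchanges a single boundary value; the actual routine packs multi-dimensional buffers for every field and handles all coordinate directions and both boundary locations:

    program halo_sketch
        use mpi
        implicit none
        integer, parameter :: wp = kind(1.d0)
        real(wp) :: send_cell, recv_cell
        integer :: ierr, rank, num_procs, left, right
        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
        call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
        ! MPI_PROC_NULL turns sends/receives at the physical boundary into no-ops
        left = rank - 1; if (left < 0) left = MPI_PROC_NULL
        right = rank + 1; if (right >= num_procs) right = MPI_PROC_NULL
        send_cell = real(rank, wp)        ! stand-in for a boundary cell value
        ! Send the rightmost interior value right; receive the left neighbor's into the buffer
        call MPI_SENDRECV(send_cell, 1, MPI_DOUBLE_PRECISION, right, 0, &
                          recv_cell, 1, MPI_DOUBLE_PRECISION, left, 0, &
                          MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
        call MPI_FINALIZE(ierr)
    end program halo_sketch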

◆ s_prohibit_abort()

subroutine __m_mpi_common_fpp_f90__::s_prohibit_abort ( character(len=*), intent(in) condition,
character(len=*), intent(in) message )
private
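
No detailed description is generated for this routine. From the interface (a condition string and a message), it evidently reports a violated constraint and aborts. A hedged sketch built on the underlying MPI_ABORT call; the evaluation of the condition by the caller, and the num_fluids example, are assumptions:

    program prohibit_sketch
        use mpi
        implicit none
        integer :: ierr, num_fluids
        call MPI_INIT(ierr)
        num_fluids = 0                    ! hypothetical invalid input
        ! Assumption: the caller evaluates the check and passes its textual form
        if (num_fluids <= 0) then
            call s_prohibit_sketch('num_fluids <= 0', 'num_fluids must be positive')
        end if
        call MPI_FINALIZE(ierr)
    contains
        subroutine s_prohibit_sketch(condition, message)
            character(len=*), intent(in) :: condition, message
            integer :: ierr_
            print *, 'Prohibited condition triggered: ', trim(condition)
            print *, trim(message)
            call MPI_ABORT(MPI_COMM_WORLD, 1, ierr_)
        end subroutine s_prohibit_sketch
    end program prohibit_sketch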