MFC
Exascale flow solver
m_mpi_common Module Reference

MPI communication layer: domain decomposition, halo exchange, reductions, and parallel I/O setup. More...

Functions/Subroutines

impure subroutine s_initialize_mpi_common_module
 Initialize the module.
impure subroutine s_mpi_initialize
 Initialize the MPI execution environment and query the number of processors and local rank.
impure subroutine s_initialize_mpi_data (q_cons_vf, ib_markers, beta)
 Set up MPI I/O data views and variable pointers for parallel file output.
subroutine s_initialize_mpi_data_ds (q_cons_vf)
 Set up MPI I/O data views for downsampled (coarsened) parallel file output.
impure subroutine s_mpi_gather_data (my_vector, counts, gathered_vector, root)
 Gather variable-length real vectors from all MPI ranks onto the root process.
impure subroutine mpi_bcast_time_step_values (proc_time, time_avg)
 Gather per-rank time step wall-clock times onto rank 0 for performance reporting.
impure subroutine s_prohibit_abort (condition, message)
 Print a case file error with the prohibited condition and message, then abort execution.
impure subroutine s_mpi_reduce_stability_criteria_extrema (icfl_max_loc, vcfl_max_loc, rc_min_loc, icfl_max_glb, vcfl_max_glb, rc_min_glb)
 Determine the global extrema of the stability criteria over the computational domain by reducing the local extrema that each process computes within its assigned section. The global extrema are bookkept only on the rank 0 processor.
impure subroutine s_mpi_allreduce_sum (var_loc, var_glb)
 Reduce a local real value to its global sum across all MPI ranks.
impure subroutine s_mpi_allreduce_vectors_sum (var_loc, var_glb, num_vectors, vector_length)
 Reduce an array of vectors to their global sums across all MPI ranks.
impure subroutine s_mpi_allreduce_integer_sum (var_loc, var_glb)
 Reduce a local integer value to its global sum across all MPI ranks.
impure subroutine s_mpi_allreduce_min (var_loc, var_glb)
 Reduce a local real value to its global minimum across all MPI ranks.
impure subroutine s_mpi_allreduce_max (var_loc, var_glb)
 Reduce a local real value to its global maximum across all MPI ranks.
impure subroutine s_mpi_reduce_min (var_loc)
 Reduce a local real value to its global minimum across all ranks.
impure subroutine s_mpi_reduce_maxloc (var_loc)
 Reduce a 2-element (value, rank) variable to its global maximum value together with the owning processor rank (MPI_MAXLOC).
impure subroutine s_mpi_abort (prnt, code)
 The subroutine terminates the MPI execution environment.
impure subroutine s_mpi_barrier
 Halts all processes until all have reached the barrier.
impure subroutine s_mpi_finalize
 The subroutine finalizes the MPI execution environment.
subroutine s_mpi_sendrecv_variables_buffers (q_comm, mpi_dir, pbc_loc, nvar, pb_in, mv_in)
 The goal of this procedure is to populate the buffers of the cell-average conservative variables by communicating with the neighboring processors.
subroutine s_mpi_decompose_computational_domain
 Decompose the computational domain among processors by balancing cells per rank in each coordinate direction.
subroutine s_mpi_sendrecv_grid_variables_buffers (mpi_dir, pbc_loc)
 The goal of this procedure is to populate the buffers of the grid variables by communicating with the neighboring processors. Note that only the buffers of the cell-width distributions are handled in such a way. This is because the buffers of cell-boundary locations may be calculated directly from those of the cell-width distributions.
impure subroutine s_finalize_mpi_common_module
 Module deallocation and/or disassociation procedures.

Variables

integer, private v_size
real(wp), dimension(:), allocatable, private buff_send
 Primitive variable send buffer for halo exchange.
real(wp), dimension(:), allocatable, private buff_recv
 Primitive variable receive buffer for halo exchange.
integer(kind=8) halo_size

Detailed Description

MPI communication layer: domain decomposition, halo exchange, reductions, and parallel I/O setup.

Function/Subroutine Documentation

◆ mpi_bcast_time_step_values()

impure subroutine m_mpi_common::mpi_bcast_time_step_values ( real(wp), dimension(0:num_procs - 1), intent(inout) proc_time,
real(wp), intent(inout) time_avg )

Gather per-rank time step wall-clock times onto rank 0 for performance reporting.

Definition at line 669 of file m_mpi_common.fpp.f90.


◆ s_finalize_mpi_common_module()

impure subroutine m_mpi_common::s_finalize_mpi_common_module

Module deallocation and/or disassociation procedures.

Definition at line 2453 of file m_mpi_common.fpp.f90.


◆ s_initialize_mpi_common_module()

impure subroutine m_mpi_common::s_initialize_mpi_common_module

Initialize the module.

Definition at line 387 of file m_mpi_common.fpp.f90.


◆ s_initialize_mpi_data()

impure subroutine m_mpi_common::s_initialize_mpi_data ( type(scalar_field), dimension(sys_size), intent(in) q_cons_vf,
type(integer_field), intent(in), optional ib_markers,
type(scalar_field), intent(in), optional beta )

Set up MPI I/O data views and variable pointers for parallel file output.

Definition at line 511 of file m_mpi_common.fpp.f90.

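Setting up an MPI I/O view typically means describing each rank's block of the global array with MPI_Type_create_subarray, which takes the global sizes, the local subsizes, and the starting offsets of the block. The helper below is a hypothetical Python sketch (the function name and the even-split assumption are ours, not MFC's) of how those triples follow from a rank's Cartesian coordinates:

```python
def subarray_view(global_cells, proc_counts, coords):
    """Compute the (sizes, subsizes, starts) triple a rank at Cartesian
    position `coords` would pass to MPI_Type_create_subarray to describe
    its block of the global array (an even split is assumed here)."""
    sizes, subsizes, starts = [], [], []
    for g, p, c in zip(global_cells, proc_counts, coords):
        local = g // p            # per-rank extent; assumes g divisible by p
        sizes.append(g)           # global array extent in this direction
        subsizes.append(local)    # this rank's extent
        starts.append(c * local)  # global offset of this rank's block
    return sizes, subsizes, starts
```

For a 2x2 process grid over a 64x64 domain, the rank at coordinates (1, 0) describes the 32x32 block starting at global index (32, 0).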

◆ s_initialize_mpi_data_ds()

subroutine m_mpi_common::s_initialize_mpi_data_ds ( type(scalar_field), dimension(sys_size), intent(in) q_cons_vf)

Set up MPI I/O data views for downsampled (coarsened) parallel file output.

Definition at line 593 of file m_mpi_common.fpp.f90.

◆ s_mpi_abort()

impure subroutine m_mpi_common::s_mpi_abort ( character(len=*), intent(in), optional prnt,
integer, intent(in), optional code )

The subroutine terminates the MPI execution environment.

Definition at line 851 of file m_mpi_common.fpp.f90.


◆ s_mpi_allreduce_integer_sum()

impure subroutine m_mpi_common::s_mpi_allreduce_integer_sum ( integer, intent(in) var_loc,
integer, intent(out) var_glb )

Reduce a local integer value to its global sum across all MPI ranks.

Definition at line 771 of file m_mpi_common.fpp.f90.

◆ s_mpi_allreduce_max()

impure subroutine m_mpi_common::s_mpi_allreduce_max ( real(wp), intent(in) var_loc,
real(wp), intent(out) var_glb )

Reduce a local real value to its global maximum across all MPI ranks.

Definition at line 801 of file m_mpi_common.fpp.f90.

◆ s_mpi_allreduce_min()

impure subroutine m_mpi_common::s_mpi_allreduce_min ( real(wp), intent(in) var_loc,
real(wp), intent(out) var_glb )

Reduce a local real value to its global minimum across all MPI ranks.

Definition at line 787 of file m_mpi_common.fpp.f90.

◆ s_mpi_allreduce_sum()

impure subroutine m_mpi_common::s_mpi_allreduce_sum ( real(wp), intent(in) var_loc,
real(wp), intent(out) var_glb )

Reduce a local real value to its global sum across all MPI ranks.

Definition at line 736 of file m_mpi_common.fpp.f90.

◆ s_mpi_allreduce_vectors_sum()

impure subroutine m_mpi_common::s_mpi_allreduce_vectors_sum ( real(wp), dimension(:,:), intent(in) var_loc,
real(wp), dimension(:,:), intent(out) var_glb,
integer, intent(in) num_vectors,
integer, intent(in) vector_length )

Reduce an array of vectors to their global sums across all MPI ranks.

Definition at line 750 of file m_mpi_common.fpp.f90.

◆ s_mpi_barrier()

impure subroutine m_mpi_common::s_mpi_barrier

Halts all processes until all have reached the barrier.

Definition at line 882 of file m_mpi_common.fpp.f90.


◆ s_mpi_decompose_computational_domain()

subroutine m_mpi_common::s_mpi_decompose_computational_domain

Decompose the computational domain among processors by balancing cells per rank in each coordinate direction.

Flags a non-optimal number of processors in the x-, y- and z-directions.

Definition at line 1966 of file m_mpi_common.fpp.f90.

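One common way to balance cells per rank, sketched below, is to enumerate the factorizations of the processor count and keep the one that minimizes the worst per-rank cell count, breaking ties by the per-rank halo surface (communication volume). This is a hypothetical illustration of the balancing idea, not the exact algorithm used by s_mpi_decompose_computational_domain:

```python
def decompose(num_procs, cells):
    """Choose processor counts (px, py, pz) with px*py*pz == num_procs,
    minimizing first the largest per-rank cell count and then the
    per-rank halo surface area (hypothetical sketch)."""
    best, best_cost = None, None
    for px in range(1, num_procs + 1):
        if num_procs % px:
            continue
        for py in range(1, num_procs // px + 1):
            if (num_procs // px) % py:
                continue
            pz = num_procs // (px * py)
            # per-rank block sizes, rounded up for uneven splits
            nx = -(-cells[0] // px)
            ny = -(-cells[1] // py)
            nz = -(-cells[2] // pz)
            cost = (nx * ny * nz, 2 * (ny * nz + nx * nz + nx * ny))
            if best_cost is None or cost < best_cost:
                best, best_cost = (px, py, pz), cost
    return best
```

For a 64-cube on 8 ranks this picks the cubic split (2, 2, 2); for a thin 100x10x1 domain on 4 ranks it splits only the long direction, (4, 1, 1).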

◆ s_mpi_finalize()

impure subroutine m_mpi_common::s_mpi_finalize

The subroutine finalizes the MPI execution environment.

Definition at line 893 of file m_mpi_common.fpp.f90.


◆ s_mpi_gather_data()

impure subroutine m_mpi_common::s_mpi_gather_data ( real(wp), dimension(counts), intent(in) my_vector,
integer, intent(in) counts,
real(wp), dimension(:), intent(out), allocatable gathered_vector,
integer, intent(in) root )

Gather variable-length real vectors from all MPI ranks onto the root process.

Parameters
    [in]  my_vector        Input vector on each process
    [in]  counts           Array of vector lengths for each process
    [out] gathered_vector  Gathered vector on the root process
    [in]  root             Rank of the root process

Definition at line 639 of file m_mpi_common.fpp.f90.
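Gathering variable-length vectors is the MPI_Gatherv pattern: besides the per-rank counts, the root needs a displacement for each rank's data in the receive buffer. A minimal sketch of that layout computation (hypothetical helper, not MFC code):

```python
def gatherv_layout(counts):
    """Given per-rank vector lengths, compute each rank's displacement
    (offset) in the gathered buffer, as MPI_Gatherv requires, plus the
    total gathered length the root must allocate."""
    displs, off = [], 0
    for c in counts:
        displs.append(off)  # rank i's data starts at sum(counts[:i])
        off += c
    return displs, off
```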

◆ s_mpi_initialize()

impure subroutine m_mpi_common::s_mpi_initialize

Initialize the MPI execution environment and query the number of processors and local rank.

Definition at line 488 of file m_mpi_common.fpp.f90.


◆ s_mpi_reduce_maxloc()

impure subroutine m_mpi_common::s_mpi_reduce_maxloc ( real(wp), dimension(2), intent(inout) var_loc)

Reduce a 2-element (value, rank) variable to its global maximum value together with the owning processor rank (MPI_MAXLOC).

Definition at line 834 of file m_mpi_common.fpp.f90.
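MPI_MAXLOC reduces (value, index) pairs: it keeps the maximum value and, when several ranks tie, the smallest index that holds it. Its semantics can be emulated serially as follows (hypothetical sketch over per-rank pairs):

```python
def maxloc(pairs):
    """Emulate MPI_MAXLOC over (value, rank) pairs: keep the maximum
    value and, on ties, the smallest owning rank (MPI's documented
    tie-break)."""
    best_val, best_rank = pairs[0]
    for val, rank in pairs[1:]:
        if val > best_val or (val == best_val and rank < best_rank):
            best_val, best_rank = val, rank
    return best_val, best_rank
```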

◆ s_mpi_reduce_min()

impure subroutine m_mpi_common::s_mpi_reduce_min ( real(wp), intent(inout) var_loc)

Reduce a local real value to its global minimum across all ranks.

Definition at line 815 of file m_mpi_common.fpp.f90.


◆ s_mpi_reduce_stability_criteria_extrema()

impure subroutine m_mpi_common::s_mpi_reduce_stability_criteria_extrema ( real(wp), intent(in) icfl_max_loc,
real(wp), intent(in) vcfl_max_loc,
real(wp), intent(in) rc_min_loc,
real(wp), intent(out) icfl_max_glb,
real(wp), intent(out) vcfl_max_glb,
real(wp), intent(out) rc_min_glb )

Determine the global extrema of the stability criteria over the computational domain by reducing the local extrema that each process computes within its assigned section. The global extrema are bookkept only on the rank 0 processor.

Definition at line 702 of file m_mpi_common.fpp.f90.
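Conceptually this amounts to three reductions onto rank 0: MPI_MAX over the local ICFL and VCFL maxima and MPI_MIN over the local Rc minima. A serial simulation of the result (hypothetical helper, with one (icfl_max, vcfl_max, rc_min) tuple per rank):

```python
def reduce_stability_extrema(local_values):
    """Simulate the three rank-0 reductions: MPI_MAX over the per-rank
    ICFL and VCFL maxima, MPI_MIN over the per-rank Rc minima."""
    icfl = max(v[0] for v in local_values)
    vcfl = max(v[1] for v in local_values)
    rc = min(v[2] for v in local_values)
    return icfl, vcfl, rc
```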

◆ s_mpi_sendrecv_grid_variables_buffers()

subroutine m_mpi_common::s_mpi_sendrecv_grid_variables_buffers ( integer, intent(in) mpi_dir,
integer, intent(in) pbc_loc )

The goal of this procedure is to populate the buffers of the grid variables by communicating with the neighboring processors. Note that only the buffers of the cell-width distributions are handled in such a way. This is because the buffers of cell-boundary locations may be calculated directly from those of the cell-width distributions.

Definition at line 2381 of file m_mpi_common.fpp.f90.

◆ s_mpi_sendrecv_variables_buffers()

subroutine m_mpi_common::s_mpi_sendrecv_variables_buffers ( type(scalar_field), dimension(1:), intent(inout) q_comm,
integer, intent(in) mpi_dir,
integer, intent(in) pbc_loc,
integer, intent(in) nvar,
real(stp), dimension(idwbuff(1)%beg:,idwbuff(2)%beg:,idwbuff(3)%beg:,1:,1:), intent(inout), optional pb_in,
real(stp), dimension(idwbuff(1)%beg:,idwbuff(2)%beg:,idwbuff(3)%beg:,1:,1:), intent(inout), optional mv_in )

The goal of this procedure is to populate the buffers of the cell-average conservative variables by communicating with the neighboring processors.

Definition at line 905 of file m_mpi_common.fpp.f90.

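The buffer population follows the usual halo-exchange pattern: pack the edge cells into a send buffer, exchange with the neighboring process, and unpack the received data into the ghost layers. A serial 1D simulation of the periodic case (a hypothetical sketch of the idea, not the actual buff_send/buff_recv packing):

```python
def halo_exchange_1d(ranks, width):
    """Simulate a periodic 1D halo exchange: each 'rank' (a list of
    interior cells) receives `width` edge cells from both neighbours
    and prepends/appends them as ghost layers."""
    n = len(ranks)
    out = []
    for r, interior in enumerate(ranks):
        left = ranks[(r - 1) % n][-width:]   # left neighbour's right edge
        right = ranks[(r + 1) % n][:width]   # right neighbour's left edge
        out.append(left + interior + right)
    return out
```

With two ranks holding [1, 2, 3] and [4, 5, 6] and a one-cell halo, each rank's extended array wraps around the periodic boundary.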

◆ s_prohibit_abort()

impure subroutine m_mpi_common::s_prohibit_abort ( character(len=*), intent(in) condition,
character(len=*), intent(in) message )

Print a case file error with the prohibited condition and message, then abort execution.

Definition at line 683 of file m_mpi_common.fpp.f90.


Variable Documentation

◆ buff_recv

real(wp), dimension(:), allocatable, private m_mpi_common::buff_recv
private

Primitive variable receive buffer for halo exchange.

Definition at line 356 of file m_mpi_common.fpp.f90.

◆ buff_send

real(wp), dimension(:), allocatable, private m_mpi_common::buff_send
private

Primitive variable send buffer for halo exchange.

Definition at line 355 of file m_mpi_common.fpp.f90.

◆ halo_size

integer(kind=8) m_mpi_common::halo_size

Definition at line 371 of file m_mpi_common.fpp.f90.

◆ v_size

integer, private m_mpi_common::v_size
private

Definition at line 342 of file m_mpi_common.fpp.f90.