MFC: Exascale flow solver

Contains module m_mpi_common.
Modules | |
| module | m_mpi_common |
| MPI communication layer: domain decomposition, halo exchange, reductions, and parallel I/O setup. | |
Functions/Subroutines | |
| impure subroutine | m_mpi_common::s_initialize_mpi_common_module |
| Initialize the module. | |
| impure subroutine | m_mpi_common::s_mpi_initialize |
| Initialize the MPI execution environment and query the number of processors and local rank. | |
| impure subroutine | m_mpi_common::s_initialize_mpi_data (q_cons_vf, ib_markers, beta) |
| Set up MPI I/O data views and variable pointers for parallel file output. | |
| subroutine | m_mpi_common::s_initialize_mpi_data_ds (q_cons_vf) |
| Set up MPI I/O data views for downsampled (coarsened) parallel file output. | |
| impure subroutine | m_mpi_common::s_mpi_gather_data (my_vector, counts, gathered_vector, root) |
| Gather variable-length real vectors from all MPI ranks onto the root process (see the gather sketch below). | |
| impure subroutine | m_mpi_common::mpi_bcast_time_step_values (proc_time, time_avg) |
| Gather per-rank time step wall-clock times onto rank 0 for performance reporting. | |
| impure subroutine | m_mpi_common::s_prohibit_abort (condition, message) |
| Print a case file error with the prohibited condition and message, then abort execution. | |
| impure subroutine | m_mpi_common::s_mpi_reduce_stability_criteria_extrema (icfl_max_loc, vcfl_max_loc, rc_min_loc, icfl_max_glb, vcfl_max_glb, rc_min_glb) |
| Determine the global extrema of the stability criteria (ICFL, VCFL, Rc) over the computational domain by reducing the local extrema computed by each process on its assigned subdomain. The global extrema are bookkept only on the rank 0 processor. | |
| impure subroutine | m_mpi_common::s_mpi_allreduce_sum (var_loc, var_glb) |
| Reduce a local real value to its global sum across all MPI ranks (see the reduction sketch below). | |
| impure subroutine | m_mpi_common::s_mpi_allreduce_vectors_sum (var_loc, var_glb, num_vectors, vector_length) |
| Reduce an array of vectors to their global sums across all MPI ranks. | |
| impure subroutine | m_mpi_common::s_mpi_allreduce_integer_sum (var_loc, var_glb) |
| Reduce a local integer value to its global sum across all MPI ranks. | |
| impure subroutine | m_mpi_common::s_mpi_allreduce_min (var_loc, var_glb) |
| Reduce a local real value to its global minimum across all MPI ranks. | |
| impure subroutine | m_mpi_common::s_mpi_allreduce_max (var_loc, var_glb) |
| Reduce a local real value to its global maximum across all MPI ranks. | |
| impure subroutine | m_mpi_common::s_mpi_reduce_min (var_loc) |
| Reduce a local real value to its global minimum across all ranks. | |
| impure subroutine | m_mpi_common::s_mpi_reduce_maxloc (var_loc) |
| Reduce a 2-element (value, rank) variable to its global maximum value together with the owning processor rank via MPI_MAXLOC (see the MAXLOC sketch below). | |
| impure subroutine | m_mpi_common::s_mpi_abort (prnt, code) |
| Terminate the MPI execution environment. | |
| impure subroutine | m_mpi_common::s_mpi_barrier |
| Halt all processes until every rank has reached the barrier. | |
| impure subroutine | m_mpi_common::s_mpi_finalize |
| Finalize the MPI execution environment. | |
| subroutine | m_mpi_common::s_mpi_sendrecv_variables_buffers (q_comm, mpi_dir, pbc_loc, nvar, pb_in, mv_in) |
| Populate the halo buffers of the cell-average conservative variables by communicating with the neighboring processors (see the halo-exchange sketch below). | |
| subroutine | m_mpi_common::s_mpi_decompose_computational_domain |
| Decompose the computational domain among processors by balancing the number of cells per rank in each coordinate direction (see the decomposition sketch below). | |
| subroutine | m_mpi_common::s_mpi_sendrecv_grid_variables_buffers (mpi_dir, pbc_loc) |
| Populate the halo buffers of the grid variables by communicating with the neighboring processors. Only the cell-width distributions are exchanged, since the cell-boundary locations can be computed directly from them. | |
| impure subroutine | m_mpi_common::s_finalize_mpi_common_module |
| Module deallocation and/or disassociation procedures. | |
Variables | |
| integer, private | m_mpi_common::v_size |
| real(wp), dimension(:), allocatable, private | m_mpi_common::buff_send |
| Primitive variable send buffer for halo exchange. | |
| real(wp), dimension(:), allocatable, private | m_mpi_common::buff_recv |
| Primitive variable receive buffer for halo exchange. | |
| integer(kind=8) | m_mpi_common::halo_size |
Definition in file m_mpi_common.fpp.f90.