MFC: Post-Process
High-fidelity multiphase flow simulation
m_mpi_proxy Module Reference

This module serves as a proxy to the parameters and subroutines available in the MPI module provided by the MPI implementation. Specifically, the role of the proxy is to combine basic MPI commands into more complex procedures so as to achieve the communication goals required by the post-process. More...

Functions/Subroutines

subroutine s_initialize_mpi_proxy_module ()
 Computation of parameters, allocation procedures, and/or any other tasks needed to properly set up the module.
 
subroutine s_mpi_bcast_user_inputs ()
 Since only the processor with rank 0 is in charge of reading and checking the consistency of the user-provided inputs, these are not available to the remaining processors. This subroutine is therefore in charge of broadcasting the required information.
 
subroutine s_mpi_decompose_computational_domain ()
 This subroutine takes care of efficiently distributing the computational domain among the available processors, as well as recomputing some of the global parameters so that they reflect the configuration of the sub-domain overseen by the local processor.
 
subroutine s_mpi_sendrecv_grid_vars_buffer_regions (pbc_loc, sweep_coord)
 Communicates the buffer regions associated with the grid variables with the processors in charge of the neighboring sub-domains. Note that only the cell-width spacings feature buffer regions, so no information related to the cell-boundary locations is communicated.
 
subroutine s_mpi_sendrecv_cons_vars_buffer_regions (q_cons_vf, pbc_loc, sweep_coord)
 Communicates buffer regions associated with conservative variables with processors in charge of the neighboring sub-domains.
 
subroutine s_mpi_gather_spatial_extents (spatial_extents)
 This subroutine gathers the Silo database metadata for the spatial extents in order to boost the performance of the multidimensional visualization.
 
subroutine s_mpi_defragment_1d_grid_variable ()
 This subroutine collects the sub-domain cell-boundary or cell-center location data from all of the processors and reassembles the grid of the entire computational domain on the rank 0 processor. This is done only for 1D simulations.
 
subroutine s_mpi_gather_data_extents (q_sf, data_extents)
 This subroutine gathers the Silo database metadata for the flow variable's extents so as to boost the performance of the multidimensional visualization.
 
subroutine s_mpi_defragment_1d_flow_variable (q_sf, q_root_sf)
 This subroutine gathers the sub-domain flow variable data from all of the processors and reassembles it over the entire computational domain on the rank 0 processor. This is done only for 1D simulations.
 
subroutine s_finalize_mpi_proxy_module ()
 Deallocation procedures for the module.
 

Variables

Buffers of the conservative variables received from, and sent to, the neighboring processors. Note that these variables are structured as vectors rather than arrays.

real(kind(0d0)), dimension(:), allocatable q_cons_buffer_in
 
real(kind(0d0)), dimension(:), allocatable q_cons_buffer_out
 
Receive counts and displacement vector variables, respectively, used to enable MPI to gather varying amounts of data from all processes onto the root process.

integer, dimension(:), allocatable recvcounts
 
integer, dimension(:), allocatable displs
 
Generic flags used to identify and report MPI errors
integer, private err_code
 
integer, private ierr
 

Detailed Description

This module serves as a proxy to the parameters and subroutines available in the MPI module provided by the MPI implementation. Specifically, the role of the proxy is to combine basic MPI commands into more complex procedures so as to achieve the communication goals required by the post-process.

Function/Subroutine Documentation

◆ s_finalize_mpi_proxy_module()

subroutine m_mpi_proxy::s_finalize_mpi_proxy_module

Deallocation procedures for the module.

◆ s_initialize_mpi_proxy_module()

subroutine m_mpi_proxy::s_initialize_mpi_proxy_module

Computation of parameters, allocation procedures, and/or any other tasks needed to properly set up the module.

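The setup this routine performs is not spelled out above; a minimal sketch of the kind of work involved is shown below. The global parameters buff_size (buffer-region width), n and p (sub-domain extents in the y- and z-directions), sys_size, and num_procs are assumed names used for illustration; only q_cons_buffer_in, q_cons_buffer_out, recvcounts, and displs come from this module's documented variables.

    ! Sketch only: size the flat communication buffers so that one buffer
    ! region of every conservative variable fits, and size the gather
    ! bookkeeping vectors by the number of processors.
    allocate (q_cons_buffer_in(1:buff_size*sys_size*(n + 1)*(p + 1)))
    allocate (q_cons_buffer_out(1:buff_size*sys_size*(n + 1)*(p + 1)))
    allocate (recvcounts(0:num_procs - 1))
    allocate (displs(0:num_procs - 1))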

◆ s_mpi_bcast_user_inputs()

subroutine m_mpi_proxy::s_mpi_bcast_user_inputs

Since only the processor with rank 0 is in charge of reading and checking the consistency of the user-provided inputs, these are not available to the remaining processors. This subroutine is therefore in charge of broadcasting the required information.

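A minimal sketch of the broadcast pattern is shown below; the particular inputs (m, n, p, t_step_start) are placeholders for illustration, not necessarily the set of variables MFC actually broadcasts.

    ! Sketch only: rank 0 has already read the inputs; every rank calls the
    ! broadcast and receives a copy. ierr is the module's MPI error flag.
    call MPI_BCAST(m, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
    call MPI_BCAST(n, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
    call MPI_BCAST(p, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
    call MPI_BCAST(t_step_start, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)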

◆ s_mpi_decompose_computational_domain()

subroutine m_mpi_proxy::s_mpi_decompose_computational_domain

This subroutine takes care of efficiently distributing the computational domain among the available processors, as well as recomputing some of the global parameters so that they reflect the configuration of the sub-domain overseen by the local processor.

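A minimal sketch of the 1D decomposition arithmetic is given below, assuming a hypothetical global upper cell index m_glb (so the global domain holds m_glb + 1 cells) along with num_procs and proc_rank; the actual routine also handles the 2D/3D layouts and the associated grid offsets.

    ! Sketch only: split m_glb + 1 cells as evenly as possible among the
    ! ranks; the first mod(m_glb + 1, num_procs) ranks receive one extra cell.
    m = (m_glb + 1)/num_procs - 1
    if (proc_rank < mod(m_glb + 1, num_procs)) m = m + 1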

◆ s_mpi_defragment_1d_flow_variable()

subroutine m_mpi_proxy::s_mpi_defragment_1d_flow_variable ( real(kind(0d0)), dimension(0:m, 0:0, 0:0), intent(in) q_sf,
real(kind(0d0)), dimension(0:m_root, 0:0, 0:0), intent(inout) q_root_sf )

This subroutine gathers the sub-domain flow variable data from all of the processors and reassembles it over the entire computational domain on the rank 0 processor. This is done only for 1D simulations.

Parameters
    q_sf       Flow variable defined on a single computational sub-domain
    q_root_sf  Flow variable defined on the entire computational domain
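
A minimal sketch of the collective call is given below; it assumes the module's recvcounts and displs vectors already hold each rank's local cell count and its offset in the global grid (see the sketch under s_mpi_defragment_1d_grid_variable() for one way they might be filled).

    ! Sketch only: every rank contributes its m + 1 local values and rank 0
    ! receives the concatenated, globally ordered data in q_root_sf.
    call MPI_GATHERV(q_sf(0, 0, 0), m + 1, MPI_DOUBLE_PRECISION, &
                     q_root_sf(0, 0, 0), recvcounts, displs, &
                     MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)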

◆ s_mpi_defragment_1d_grid_variable()

subroutine m_mpi_proxy::s_mpi_defragment_1d_grid_variable

This subroutine collects the sub-domain cell-boundary or cell-center location data from all of the processors and reassembles the grid of the entire computational domain on the rank 0 processor. This is done only for 1D simulations.

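A minimal sketch is shown below, assuming hypothetical array names x_cc (local cell-center locations, indexed 0:m) and x_root_cc (the reassembled grid on rank 0); how the module actually fills recvcounts and displs is an assumption made for illustration.

    ! Sketch only: gather each rank's local cell count onto every rank, build
    ! the displacement vector, then collect the cell-center locations on rank 0.
    integer :: i, loc_cnt

    loc_cnt = m + 1
    call MPI_ALLGATHER(loc_cnt, 1, MPI_INTEGER, recvcounts, 1, MPI_INTEGER, &
                       MPI_COMM_WORLD, ierr)
    displs(0) = 0
    do i = 1, num_procs - 1
        displs(i) = displs(i - 1) + recvcounts(i - 1)
    end do
    call MPI_GATHERV(x_cc(0), loc_cnt, MPI_DOUBLE_PRECISION, &
                     x_root_cc(0), recvcounts, displs, &
                     MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)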

◆ s_mpi_gather_data_extents()

subroutine m_mpi_proxy::s_mpi_gather_data_extents ( real(kind(0d0)), dimension(:, :, :), intent(in) q_sf,
real(kind(0d0)), dimension(1:2, 0:num_procs - 1), intent(inout) data_extents )

This subroutine gathers the Silo database metadata for the flow variable's extents so as to boost the performance of the multidimensional visualization.

Parameters
    q_sf          Flow variable defined on a single computational sub-domain
    data_extents  The flow variable's extents on each of the processors' sub-domains. The first dimension of the array corresponds to the variable's minimum and maximum values, respectively, while the second dimension corresponds to each processor's rank.
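
A minimal sketch of the collective call is given below, under the assumption that each rank first reduces its sub-domain to a (minimum, maximum) pair before the gather.

    ! Sketch only: each rank packs its local minimum and maximum of q_sf and
    ! rank 0 receives one pair per rank, column by column, in data_extents.
    real(kind(0d0)) :: extents_loc(1:2)

    extents_loc(1) = minval(q_sf)
    extents_loc(2) = maxval(q_sf)
    call MPI_GATHER(extents_loc, 2, MPI_DOUBLE_PRECISION, &
                    data_extents, 2, MPI_DOUBLE_PRECISION, &
                    0, MPI_COMM_WORLD, ierr)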

◆ s_mpi_gather_spatial_extents()

subroutine m_mpi_proxy::s_mpi_gather_spatial_extents ( real(kind(0d0)), dimension(1:, 0:), intent(inout) spatial_extents)

This subroutine gathers the Silo database metadata for the spatial extents in order to boost the performance of the multidimensional visualization.

Parameters
    spatial_extents  Spatial extents for each processor's sub-domain. The first dimension corresponds to the minimum and maximum values, respectively, while the second dimension corresponds to the processor rank.
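
A minimal sketch for the 3D case is given below; the cell-boundary arrays x_cb, y_cb, and z_cb and the packing order inside the length-6 vector are assumptions made for illustration, not necessarily the ordering the Silo writer expects.

    ! Sketch only: pack the local coordinate extents and gather one sextet per
    ! rank into spatial_extents on rank 0 for the Silo multi-block metadata.
    real(kind(0d0)) :: spatial_loc(1:6)

    spatial_loc(1:3) = (/minval(x_cb), minval(y_cb), minval(z_cb)/)
    spatial_loc(4:6) = (/maxval(x_cb), maxval(y_cb), maxval(z_cb)/)
    call MPI_GATHER(spatial_loc, 6, MPI_DOUBLE_PRECISION, &
                    spatial_extents, 6, MPI_DOUBLE_PRECISION, &
                    0, MPI_COMM_WORLD, ierr)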

◆ s_mpi_sendrecv_cons_vars_buffer_regions()

subroutine m_mpi_proxy::s_mpi_sendrecv_cons_vars_buffer_regions ( type(scalar_field), dimension(sys_size), intent(inout) q_cons_vf,
character(len=3), intent(in) pbc_loc,
character, intent(in) sweep_coord )

Communicates buffer regions associated with conservative variables with processors in charge of the neighboring sub-domains.

Parameters
    q_cons_vf    Conservative variables
    pbc_loc      Processor boundary condition (PBC) location
    sweep_coord  Coordinate direction normal to the processor boundary
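
A minimal sketch for the beginning of a sub-domain in the x-direction (1D layout shown) is given below. The ghost-layer width buff_size, the neighbor rank proc_x_beg, and the scalar_field component name sf are assumptions; q_cons_buffer_out and q_cons_buffer_in are this module's documented flat send/receive buffers.

    ! Sketch only: pack the first buff_size interior layers of every
    ! conservative variable into the flat send buffer, exchange with the
    ! neighbor that owns the sub-domain to the left, and leave the received
    ! data in q_cons_buffer_in (a mirror loop then unpacks it into the ghost
    ! cells at negative x-indices).
    integer :: i, j, cnt

    cnt = 0
    do i = 1, sys_size
        do j = 0, buff_size - 1
            cnt = cnt + 1
            q_cons_buffer_out(cnt) = q_cons_vf(i)%sf(j, 0, 0)
        end do
    end do

    call MPI_SENDRECV(q_cons_buffer_out, buff_size*sys_size, &
                      MPI_DOUBLE_PRECISION, proc_x_beg, 0, &
                      q_cons_buffer_in, buff_size*sys_size, &
                      MPI_DOUBLE_PRECISION, proc_x_beg, 0, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)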

◆ s_mpi_sendrecv_grid_vars_buffer_regions()

subroutine m_mpi_proxy::s_mpi_sendrecv_grid_vars_buffer_regions ( character(len=3), intent(in) pbc_loc,
character, intent(in) sweep_coord )

Communicates the buffer regions associated with the grid variables with the processors in charge of the neighboring sub-domains. Note that only the cell-width spacings feature buffer regions, so no information related to the cell-boundary locations is communicated.

Parameters
    pbc_loc      Processor boundary condition (PBC) location
    sweep_coord  Coordinate direction normal to the processor boundary
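
A minimal 1D sketch for the beginning of a sub-domain in the x-direction is given below; the cell-width array dx (with ghost entries at negative indices), the ghost-layer width buff_size, and the neighbor rank proc_x_beg are assumptions made for illustration.

    ! Sketch only: send this rank's first buff_size cell widths to the left
    ! neighbor and receive that neighbor's cell widths into the ghost layer
    ! dx(-buff_size:-1). Cell-boundary locations are not exchanged; they can
    ! be rebuilt locally from the communicated spacings.
    call MPI_SENDRECV(dx(0), buff_size, MPI_DOUBLE_PRECISION, &
                      proc_x_beg, 0, &
                      dx(-buff_size), buff_size, MPI_DOUBLE_PRECISION, &
                      proc_x_beg, 0, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)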

Variable Documentation

◆ displs

integer, dimension(:), allocatable m_mpi_proxy::displs

◆ err_code

integer, private m_mpi_proxy::err_code

◆ ierr

integer, private m_mpi_proxy::ierr

◆ q_cons_buffer_in

real(kind(0d0)), dimension(:), allocatable m_mpi_proxy::q_cons_buffer_in

◆ q_cons_buffer_out

real(kind(0d0)), dimension(:), allocatable m_mpi_proxy::q_cons_buffer_out

◆ recvcounts

integer, dimension(:), allocatable m_mpi_proxy::recvcounts