MFC: Post-Process
High-fidelity multiphase flow simulation
m_mpi_proxy Module Reference

This module serves as a proxy to the parameters and subroutines available in the MPI implementation's MPI module. Specifically, the role of the proxy is to harness basic MPI commands into more complex procedures so as to achieve the required communication goals for the post-process. More...

Functions/Subroutines

impure subroutine s_initialize_mpi_proxy_module
 Computation of parameters, allocation procedures, and/or any other tasks needed to properly set up the module.
 
impure subroutine s_mpi_bcast_user_inputs
 Since only the processor with rank 0 is in charge of reading and checking the consistency of the user-provided inputs, these are not available to the remaining processors. This subroutine is therefore in charge of broadcasting the required information.
 
impure subroutine s_mpi_gather_spatial_extents (spatial_extents)
 This subroutine gathers the Silo database metadata for the spatial extents in order to boost the performance of the multidimensional visualization.
 
impure subroutine s_mpi_defragment_1d_grid_variable
 This subroutine collects the sub-domain cell-boundary or cell-center location data from all of the processors and reassembles the grid of the entire computational domain on the rank 0 processor. This is only done for 1D simulations.
 
impure subroutine s_mpi_gather_data_extents (q_sf, data_extents)
 This subroutine gathers the Silo database metadata for the flow variable's extents so as to boost the performance of the multidimensional visualization.
 
impure subroutine s_mpi_defragment_1d_flow_variable (q_sf, q_root_sf)
 This subroutine gathers the sub-domain flow variable data from all of the processors and reassembles it over the entire computational domain on the rank 0 processor. This is only done for 1D simulations.
 
impure subroutine s_finalize_mpi_proxy_module
 Deallocation procedures for the module.
 

Variables

Receive counts and displacement vector variables, respectively, used in enabling MPI to gather varying amounts of data from all processes to the root process.

integer, dimension(:), allocatable recvcounts
 
integer, dimension(:), allocatable displs
 

Detailed Description

This module serves as a proxy to the parameters and subroutines available in the MPI implementation's MPI module. Specifically, the role of the proxy is to harness basic MPI commands into more complex procedures so as to achieve the required communication goals for the post-process.

Function/Subroutine Documentation

◆ s_finalize_mpi_proxy_module()

impure subroutine m_mpi_proxy::s_finalize_mpi_proxy_module

Deallocation procedures for the module.

◆ s_initialize_mpi_proxy_module()

impure subroutine m_mpi_proxy::s_initialize_mpi_proxy_module

Computation of parameters, allocation procedures, and/or any other tasks needed to properly set up the module.

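The page does not spell out exactly what this setup entails, but given the recvcounts and displs arrays documented on this page, a natural part of it is sizing those arrays by the number of ranks. The following is a minimal, hypothetical sketch and not MFC's actual code; num_procs stands for the total rank count, and the real routine may perform additional setup (e.g., only for 1D runs):

! Hypothetical sketch: sizing the gather bookkeeping arrays by rank count.
! Only the recvcounts/displs handling is shown; the real routine may do more.
program initialize_proxy_sketch
    use mpi
    implicit none
    integer :: ierr, num_procs
    integer, dimension(:), allocatable :: recvcounts, displs

    call MPI_INIT(ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)

    ! One receive count and one displacement entry per MPI rank
    allocate (recvcounts(0:num_procs - 1))
    allocate (displs(0:num_procs - 1))

    deallocate (recvcounts, displs)   ! mirrored by s_finalize_mpi_proxy_module
    call MPI_FINALIZE(ierr)
end program initialize_proxy_sketch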

◆ s_mpi_bcast_user_inputs()

impure subroutine m_mpi_proxy::s_mpi_bcast_user_inputs

Since only the processor with rank 0 is in charge of reading and checking the consistency of the user-provided inputs, these are not available to the remaining processors. This subroutine is therefore in charge of broadcasting the required information.
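As a rough illustration of this broadcast pattern, and not of MFC's actual input list, rank 0 below reads a handful of stand-in inputs and pushes them to every other rank with MPI_BCAST; the input names are placeholders:

! Hypothetical sketch of the broadcast pattern: rank 0 reads the inputs,
! every rank then receives copies via MPI_BCAST. Input names are made up.
program bcast_user_inputs_sketch
    use mpi
    implicit none
    integer :: ierr, rank
    integer :: t_step_start, t_step_stop      ! example integer inputs
    logical :: parallel_io                    ! example logical input

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

    if (rank == 0) then
        ! Stand-in for reading/validating the post-process input file
        t_step_start = 0
        t_step_stop = 1000
        parallel_io = .true.
    end if

    ! Every scalar read on rank 0 is broadcast to the remaining processors
    call MPI_BCAST(t_step_start, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
    call MPI_BCAST(t_step_stop, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
    call MPI_BCAST(parallel_io, 1, MPI_LOGICAL, 0, MPI_COMM_WORLD, ierr)

    call MPI_FINALIZE(ierr)
end program bcast_user_inputs_sketch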

◆ s_mpi_defragment_1d_flow_variable()

impure subroutine m_mpi_proxy::s_mpi_defragment_1d_flow_variable (real(wp), dimension(0:m), intent(in) q_sf, real(wp), dimension(0:m), intent(inout) q_root_sf)

This subroutine gathers the sub-domain flow variable data from all of the processors and reassembles it over the entire computational domain on the rank 0 processor. This is only done for 1D simulations.

Parameters
    q_sf - Flow variable defined on a single computational sub-domain
    q_root_sf - Flow variable defined on the entire computational domain
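A self-contained sketch of the underlying idea follows. It is not the MFC implementation: each rank fabricates an uneven 1D slab, the receive counts and displacements are rebuilt locally, and MPI_GATHERV reassembles the full array on rank 0, mirroring the recvcounts and displs module variables documented on this page:

! Hypothetical sketch of 1D defragmentation: each rank owns an uneven slab
! of a flow variable and rank 0 reassembles the full domain with MPI_GATHERV.
program defragment_1d_sketch
    use mpi
    implicit none
    integer, parameter :: wp = kind(1.d0)
    integer :: ierr, rank, num_procs, i, m_loc
    integer, dimension(:), allocatable :: recvcounts, displs
    real(wp), dimension(:), allocatable :: q_sf, q_root_sf

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)

    m_loc = 10 + rank              ! uneven sub-domain sizes, for illustration
    allocate (q_sf(0:m_loc - 1)); q_sf = real(rank, wp)

    allocate (recvcounts(0:num_procs - 1), displs(0:num_procs - 1))
    ! Every rank learns how many cells each processor contributes ...
    call MPI_ALLGATHER(m_loc, 1, MPI_INTEGER, recvcounts, 1, MPI_INTEGER, &
                       MPI_COMM_WORLD, ierr)
    ! ... and where that contribution starts in the reassembled array
    displs(0) = 0
    do i = 1, num_procs - 1
        displs(i) = displs(i - 1) + recvcounts(i - 1)
    end do

    allocate (q_root_sf(0:sum(recvcounts) - 1))
    call MPI_GATHERV(q_sf, m_loc, MPI_DOUBLE_PRECISION, q_root_sf, &
                     recvcounts, displs, MPI_DOUBLE_PRECISION, 0, &
                     MPI_COMM_WORLD, ierr)
    ! On rank 0, q_root_sf now holds the flow variable over the whole domain

    call MPI_FINALIZE(ierr)
end program defragment_1d_sketch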

◆ s_mpi_defragment_1d_grid_variable()

impure subroutine m_mpi_proxy::s_mpi_defragment_1d_grid_variable

This subroutine collects the sub-domain cell-boundary or cell-center location data from all of the processors and reassembles the grid of the entire computational domain on the rank 0 processor. This is only done for 1D simulations.

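For the grid counterpart, the sketch below assumes equally sized sub-domains so that plain MPI_GATHER suffices; the actual routine handles uneven slabs through recvcounts and displs, as in the previous example. All names besides the MPI calls are hypothetical:

! Hypothetical sketch: reassembling 1D cell-center locations on rank 0.
program defragment_grid_sketch
    use mpi
    implicit none
    integer, parameter :: wp = kind(1.d0)
    integer, parameter :: m = 16                 ! cells per rank (assumed)
    integer :: ierr, rank, num_procs, i
    real(wp) :: dx, x_cc(0:m - 1)
    real(wp), dimension(:), allocatable :: x_root_cc

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)

    ! Each rank owns a contiguous slab of [0,1) and stores its cell centers
    dx = 1.0_wp/real(m*num_procs, wp)
    do i = 0, m - 1
        x_cc(i) = (real(rank*m + i, wp) + 0.5_wp)*dx
    end do

    allocate (x_root_cc(0:m*num_procs - 1))
    call MPI_GATHER(x_cc, m, MPI_DOUBLE_PRECISION, x_root_cc, m, &
                    MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
    ! Rank 0 now holds the cell centers of the entire 1D grid in x_root_cc

    call MPI_FINALIZE(ierr)
end program defragment_grid_sketch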

◆ s_mpi_gather_data_extents()

impure subroutine m_mpi_proxy::s_mpi_gather_data_extents (real(wp), dimension(:, :, :), intent(in) q_sf, real(wp), dimension(1:2, 0:num_procs - 1), intent(inout) data_extents)

This subroutine gathers the Silo database metadata for the flow variable's extents so as to boost the performance of the multidimensional visualization.

Parameters
    q_sf - Flow variable defined on a single computational sub-domain
    data_extents - The flow variable's extents on each processor's sub-domain. The first dimension of the array corresponds to the minimum and maximum values, respectively, while the second dimension corresponds to each processor's rank.
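The gather pattern can be sketched as follows, with made-up array sizes and no claim to match MFC's internals: each rank reduces its sub-domain to a (minimum, maximum) pair and rank 0 collects the pairs into the documented (1:2, 0:num_procs-1) layout:

! Hypothetical sketch: gathering per-rank min/max of a flow variable so
! rank 0 can write the Silo data-extents metadata. Sizes are made up.
program gather_data_extents_sketch
    use mpi
    implicit none
    integer, parameter :: wp = kind(1.d0)
    integer :: ierr, rank, num_procs
    real(wp) :: q_sf(0:9, 0:9, 0:0)              ! local sub-domain data
    real(wp) :: local_extents(1:2)
    real(wp), dimension(:, :), allocatable :: data_extents

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)

    call random_number(q_sf)

    ! Each rank contributes its own (minimum, maximum) pair ...
    local_extents = (/minval(q_sf), maxval(q_sf)/)

    ! ... and rank 0 receives them as data_extents(1:2, 0:num_procs-1),
    ! matching the layout documented for this subroutine
    allocate (data_extents(1:2, 0:num_procs - 1))
    call MPI_GATHER(local_extents, 2, MPI_DOUBLE_PRECISION, data_extents, 2, &
                    MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

    call MPI_FINALIZE(ierr)
end program gather_data_extents_sketch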

◆ s_mpi_gather_spatial_extents()

impure subroutine m_mpi_proxy::s_mpi_gather_spatial_extents (real(wp), dimension(1:, 0:), intent(inout) spatial_extents)

This subroutine gathers the Silo database metadata for the spatial extents in order to boost the performance of the multidimensional visualization.

Parameters
    spatial_extents - Spatial extents for each processor's sub-domain. The first dimension corresponds to the minimum and maximum values, respectively, while the second dimension corresponds to the processor rank.
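A hypothetical 2D sketch of the same idea is shown below; the ordering (min x, min y, max x, max y) and the array sizes are assumptions, not MFC's actual layout:

! Hypothetical sketch: gathering each rank's coordinate bounds so rank 0 can
! write the Silo spatial-extents metadata.
program gather_spatial_extents_sketch
    use mpi
    implicit none
    integer, parameter :: wp = kind(1.d0)
    integer :: ierr, rank, num_procs, i
    real(wp) :: x_cb(0:20), y_cb(0:10)           ! local cell-boundary locations
    real(wp) :: local_extents(1:4)
    real(wp), dimension(:, :), allocatable :: spatial_extents

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)

    ! Dummy sub-domain: each rank owns a unit box offset by its rank in x
    x_cb = (/(real(rank, wp) + real(i, wp)/20.0_wp, i=0, 20)/)
    y_cb = (/(real(i, wp)/10.0_wp, i=0, 10)/)

    local_extents = (/minval(x_cb), minval(y_cb), maxval(x_cb), maxval(y_cb)/)

    allocate (spatial_extents(1:4, 0:num_procs - 1))
    call MPI_GATHER(local_extents, 4, MPI_DOUBLE_PRECISION, spatial_extents, &
                    4, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
    ! Column r of spatial_extents now holds the bounds of rank r's sub-domain

    call MPI_FINALIZE(ierr)
end program gather_spatial_extents_sketch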

Variable Documentation

◆ displs

integer, dimension(:), allocatable m_mpi_proxy::displs

◆ recvcounts

integer, dimension(:), allocatable m_mpi_proxy::recvcounts
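To make the role of these two arrays concrete, the following stand-alone sketch (with made-up counts) shows the bookkeeping relationship the gather calls rely on: recvcounts holds how many values each rank contributes, and displs is its exclusive prefix sum, i.e. the offset at which each rank's chunk lands in the root's receive buffer:

! Hypothetical sketch of the recvcounts/displs bookkeeping used by the
! gather calls above. The per-rank counts below are made up.
program recvcounts_displs_sketch
    implicit none
    integer, parameter :: num_procs = 4                    ! example rank count
    integer :: recvcounts(0:num_procs - 1), displs(0:num_procs - 1), i

    recvcounts = (/12, 10, 11, 9/)        ! made-up per-rank cell counts

    ! Exclusive prefix sum: where each rank's chunk starts on the root
    displs(0) = 0
    do i = 1, num_procs - 1
        displs(i) = displs(i - 1) + recvcounts(i - 1)
    end do

    ! For the counts above: displs = (/0, 12, 22, 33/) and the root buffer
    ! must hold sum(recvcounts) = 42 values in total.
    print *, 'displs =', displs, '  total =', sum(recvcounts)
end program recvcounts_displs_sketch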