MFC
Exascale flow solver
m_mpi_proxy Module Reference

MPI gather and scatter operations for distributing post-process grid and flow-variable data. More...

Functions/Subroutines

impure subroutine s_initialize_mpi_proxy_module
 Computation of parameters, allocation procedures, and/or any other tasks needed to properly set up the module.
impure subroutine s_mpi_bcast_user_inputs
 Since only the processor with rank 0 is in charge of reading the user-provided inputs and checking their consistency, these inputs are not available to the remaining processors. This subroutine broadcasts the required information to them.
impure subroutine s_mpi_gather_spatial_extents (spatial_extents)
 This subroutine gathers the Silo database metadata for the spatial extents in order to boost the performance of the multidimensional visualization.
impure subroutine s_mpi_defragment_1d_grid_variable
 This subroutine collects the sub-domain cell-boundary or cell-center location data from all of the processors and reassembles the grid of the entire computational domain on the rank-0 processor. This is only done for 1D simulations.
impure subroutine s_mpi_gather_data_extents (q_sf, data_extents)
 This subroutine gathers the Silo database metadata for the flow variable's extents so as to boost the performance of the multidimensional visualization.
impure subroutine s_mpi_defragment_1d_flow_variable (q_sf, q_root_sf)
 This subroutine gathers the sub-domain flow variable data from all of the processors and puts it back together for the entire computational domain on the rank 0 processor. This is only done for 1D simulations.
impure subroutine s_finalize_mpi_proxy_module
 Deallocation procedures for the module.

Variables

integer, dimension(:), allocatable recvcounts
integer, dimension(:), allocatable displs
 Receive counts and displacement vector variables, respectively, used to enable MPI to gather varying amounts of data from all processes to the root process.

Detailed Description

MPI gather and scatter operations for distributing post-process grid and flow-variable data.

Function/Subroutine Documentation

◆ s_finalize_mpi_proxy_module()

impure subroutine m_mpi_proxy::s_finalize_mpi_proxy_module

Deallocation procedures for the module.

Definition at line 629 of file m_mpi_proxy.fpp.f90.

◆ s_initialize_mpi_proxy_module()

impure subroutine m_mpi_proxy::s_initialize_mpi_proxy_module

Computation of parameters, allocation procedures, and/or any other tasks needed to properly set up the module.

Definition at line 35 of file m_mpi_proxy.fpp.f90.


◆ s_mpi_bcast_user_inputs()

impure subroutine m_mpi_proxy::s_mpi_bcast_user_inputs

Since only the processor with rank 0 is in charge of reading the user-provided inputs and checking their consistency, these inputs are not available to the remaining processors. This subroutine broadcasts the required information to them.

Definition at line 77 of file m_mpi_proxy.fpp.f90.
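
The broadcast pattern described above can be sketched as follows. This is an illustration, not the routine's actual body; the parameter names `m` and `weno_order` are hypothetical stand-ins for MFC's real user inputs:

```fortran
! Hedged sketch: every rank calls MPI_BCAST collectively; only rank 0
! holds meaningful values beforehand, and all other ranks receive them.
! `m` and `weno_order` are hypothetical stand-ins for real MFC inputs.
impure subroutine sketch_bcast_user_inputs(m, weno_order)
    use mpi
    integer, intent(inout) :: m, weno_order
    integer :: ierr
    call MPI_BCAST(m, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
    call MPI_BCAST(weno_order, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
end subroutine sketch_bcast_user_inputs
```

Because MPI_BCAST is a collective operation, every rank must execute the same sequence of broadcast calls in the same order.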

◆ s_mpi_defragment_1d_flow_variable()

impure subroutine m_mpi_proxy::s_mpi_defragment_1d_flow_variable (real(wp), dimension(0:m), intent(in) q_sf, real(wp), dimension(0:m), intent(inout) q_root_sf)

This subroutine gathers the sub-domain flow variable data from all of the processors and puts it back together for the entire computational domain on the rank 0 processor. This is only done for 1D simulations.

Parameters
q_sf  Flow variable defined on a single computational sub-domain
q_root_sf  Flow variable defined on the entire computational domain

Definition at line 609 of file m_mpi_proxy.fpp.f90.

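
A minimal sketch of the gather pattern this routine implies, assuming the module's `recvcounts` and `displs` arrays and default-real precision (the actual code uses `real(wp)` and MFC's own MPI datatype):

```fortran
! Sketch: each rank contributes its sub-domain slice of the 1D flow
! variable; MPI_GATHERV reassembles the full domain on rank 0, placing
! rank i's data at offset displs(i+1) within q_root_sf.
impure subroutine sketch_defragment_1d(q_sf, q_root_sf, recvcounts, displs)
    use mpi
    real, intent(in)    :: q_sf(0:)
    real, intent(inout) :: q_root_sf(0:)
    integer, intent(in) :: recvcounts(:), displs(:)
    integer :: ierr
    call MPI_GATHERV(q_sf, size(q_sf), MPI_REAL, &
                     q_root_sf, recvcounts, displs, MPI_REAL, &
                     0, MPI_COMM_WORLD, ierr)
end subroutine sketch_defragment_1d
```

MPI_GATHERV rather than MPI_GATHER is needed here because sub-domains may hold differing numbers of cells.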

◆ s_mpi_defragment_1d_grid_variable()

impure subroutine m_mpi_proxy::s_mpi_defragment_1d_grid_variable

This subroutine collects the sub-domain cell-boundary or cell-center location data from all of the processors and reassembles the grid of the entire computational domain on the rank-0 processor. This is only done for 1D simulations.

Definition at line 525 of file m_mpi_proxy.fpp.f90.


◆ s_mpi_gather_data_extents()

impure subroutine m_mpi_proxy::s_mpi_gather_data_extents (real(wp), dimension(:, :, :), intent(in) q_sf, real(wp), dimension(1:2, 0:num_procs - 1), intent(inout) data_extents)

This subroutine gathers the Silo database metadata for the flow variable's extents so as to boost the performance of the multidimensional visualization.

Parameters
q_sf  Flow variable defined on a single computational sub-domain
data_extents  The flow variable's extents on each processor's sub-domain. The first dimension of the array holds the variable's minimum and maximum values, respectively, while the second dimension corresponds to each processor's rank.

Definition at line 562 of file m_mpi_proxy.fpp.f90.

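
The extents gather can be sketched as a local min/max reduction followed by a fixed-size gather. This is an illustration under assumed default-real precision, not the routine's actual body:

```fortran
! Sketch: compute the sub-domain's [min, max] pair locally, then gather
! the two-element pair from every rank into
! data_extents(1:2, 0:num_procs-1) on rank 0, one column per rank.
impure subroutine sketch_gather_data_extents(q_sf, data_extents)
    use mpi
    real, intent(in)    :: q_sf(:, :, :)
    real, intent(inout) :: data_extents(1:, 0:)
    real    :: local_ext(2)
    integer :: ierr
    local_ext(1) = minval(q_sf)  ! sub-domain minimum
    local_ext(2) = maxval(q_sf)  ! sub-domain maximum
    call MPI_GATHER(local_ext, 2, MPI_REAL, &
                    data_extents, 2, MPI_REAL, &
                    0, MPI_COMM_WORLD, ierr)
end subroutine sketch_gather_data_extents
```

A plain MPI_GATHER suffices here since every rank contributes exactly two values; Silo can then use the per-rank extents to skip blocks that fall outside a visualization query.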

◆ s_mpi_gather_spatial_extents()

impure subroutine m_mpi_proxy::s_mpi_gather_spatial_extents ( real(wp), dimension(1:, 0:), intent(inout) spatial_extents)

This subroutine gathers the Silo database metadata for the spatial extents in order to boost the performance of the multidimensional visualization.

Parameters
spatial_extents  Spatial extents for each processor's sub-domain. First dimension corresponds to the minimum and maximum values, respectively, while the second dimension corresponds to the processor rank.

Definition at line 388 of file m_mpi_proxy.fpp.f90.


Variable Documentation

◆ displs

integer, dimension(:), allocatable m_mpi_proxy::displs

Definition at line 28 of file m_mpi_proxy.fpp.f90.

◆ recvcounts

integer, dimension(:), allocatable m_mpi_proxy::recvcounts

Definition at line 27 of file m_mpi_proxy.fpp.f90.
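
How `displs` typically relates to `recvcounts` in a variable-count gather such as MPI_GATHERV can be sketched as a prefix sum. This is an illustration of the standard pattern, not the module's literal code:

```fortran
! Sketch: displs(i) is the running total of the counts of all lower
! ranks, so each rank's contribution lands contiguously in the
! gathered buffer with no gaps or overlaps.
displs(1) = 0
do i = 2, num_procs
    displs(i) = displs(i - 1) + recvcounts(i - 1)
end do
```

With this layout, rank i - 1's data occupies elements displs(i) + 1 through displs(i) + recvcounts(i) of the receive buffer on the root.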