MFC:Post_process  v1.0
m_mpi_proxy Module Reference

This module serves as a proxy to the parameters and subroutines available in the MPI implementation's MPI module. Specifically, the role of the proxy is to combine basic MPI commands into more complex procedures so as to achieve the communication goals required by the post-process. More...

Functions/Subroutines

subroutine s_mpi_initialize ()
 The subroutine initializes the MPI environment and queries both the number of processors available for the job and the local processor rank. More...
 
subroutine s_mpi_abort ()
 The subroutine terminates the MPI execution environment. More...
 
subroutine s_initialize_mpi_data (q_cons_vf)
 This subroutine defines the local and global sizes for the data. More...
 
subroutine s_mpi_barrier ()
 Halts all processes until all have reached the barrier. More...
 
subroutine s_initialize_mpi_proxy_module ()
 Computation of parameters, allocation procedures, and/or any other tasks needed to properly set up the module. More...
 
subroutine s_mpi_bcast_user_inputs ()
 Since only the processor with rank 0 is in charge of reading and checking the consistency of the user-provided inputs, these are not available to the remaining processors. This subroutine is therefore in charge of broadcasting the required information. More...
 
subroutine s_mpi_decompose_computational_domain ()
 This subroutine takes care of efficiently distributing the computational domain among the available processors, as well as recomputing some of the global parameters so that they reflect the configuration of the sub-domain overseen by the local processor. More...
 
subroutine s_mpi_sendrecv_grid_vars_buffer_regions (pbc_loc, sweep_coord)
 Communicates the buffer regions associated with the grid variables with the processors in charge of the neighboring sub-domains. Note that only cell-width spacings feature buffer regions, so no information relating to the cell-boundary locations is communicated. More...
 
subroutine s_mpi_sendrecv_cons_vars_buffer_regions (q_cons_vf, pbc_loc, sweep_coord)
 Communicates the buffer regions associated with the conservative variables with the processors in charge of the neighboring sub-domains. More...
 
subroutine s_mpi_reduce_maxloc (var_loc)
 This subroutine takes the first element of the two-element input variable and determines its maximum value over the entire computational domain. The result is stored back into the first element of the variable, while the rank of the processor in charge of the sub-domain containing the maximum is stored into the second element. More...
 
subroutine s_mpi_gather_spatial_extents (spatial_extents)
 This subroutine gathers the Silo database metadata for the spatial extents in order to boost the performance of the multidimensional visualization. More...
 
subroutine s_mpi_defragment_1d_grid_variable ()
 This subroutine collects the sub-domain cell-boundary or cell-center location data from all of the processors and reassembles the grid of the entire computational domain on the rank 0 processor. This is only done for 1D simulations. More...
 
subroutine s_mpi_gather_data_extents (q_sf, data_extents)
 This subroutine gathers the Silo database metadata for the flow variable's extents so as to boost the performance of the multidimensional visualization. More...
 
subroutine s_mpi_defragment_1d_flow_variable (q_sf, q_root_sf)
 This subroutine gathers the sub-domain flow variable data from all of the processors and reassembles it over the entire computational domain on the rank 0 processor. This is only done for 1D simulations. More...
 
subroutine s_finalize_mpi_proxy_module ()
 Deallocation procedures for the module. More...
 
subroutine s_mpi_finalize ()
 Finalization of all MPI-related processes. More...
 

Variables

Buffers of the conservative variables received from and sent to the neighboring processors. Note that these variables are structured as vectors rather than arrays.

real(kind(0d0)), dimension(:), allocatable q_cons_buffer_in
 
real(kind(0d0)), dimension(:), allocatable q_cons_buffer_out
 
Receive counts and displacement vector variables, respectively, used to enable MPI to gather varying amounts of data from all processes onto the root process.

integer, dimension(:), allocatable recvcounts
 
integer, dimension(:), allocatable displs
 
Generic flags used to identify and report MPI errors
integer, private err_code
 
integer, private ierr
 

Detailed Description

This module serves as a proxy to the parameters and subroutines available in the MPI implementation's MPI module. Specifically, the role of the proxy is to combine basic MPI commands into more complex procedures so as to achieve the communication goals required by the post-process.

Function/Subroutine Documentation

◆ s_finalize_mpi_proxy_module()

subroutine m_mpi_proxy::s_finalize_mpi_proxy_module ( )

Deallocation procedures for the module.

Definition at line 1871 of file m_mpi_proxy.f90.
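
A minimal sketch of what such a cleanup might look like, assuming the allocatable module variables listed under Variables below (q_cons_buffer_in, q_cons_buffer_out, recvcounts, displs) are the arrays being released:

    ! Hypothetical sketch: release the module's communication work arrays
    if (allocated(q_cons_buffer_in))  deallocate (q_cons_buffer_in)
    if (allocated(q_cons_buffer_out)) deallocate (q_cons_buffer_out)
    if (allocated(recvcounts)) deallocate (recvcounts)
    if (allocated(displs))     deallocate (displs)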

◆ s_initialize_mpi_data()

subroutine m_mpi_proxy::s_initialize_mpi_data ( type(scalar_field), dimension(sys_size), intent(in)  q_cons_vf)

This subroutine defines the local and global sizes for the data.

Parameters
q_cons_vf: Conservative variables

Definition at line 125 of file m_mpi_proxy.f90.
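
One plausible, purely illustrative realization of defining local and global sizes is registering an MPI subarray type that places the local sub-domain inside the global array. The names m_glb, n_glb, p_glb, m, n, p and start_idx_* below are assumptions, and this documentation does not confirm that this is the routine's actual mechanism:

    ! Hypothetical sketch: describe where the local sub-domain sits in the global array
    integer :: sizes_glb(3), sizes_loc(3), starts(3), mpi_subarray_type, ierr

    sizes_glb = [m_glb + 1, n_glb + 1, p_glb + 1]         ! global cell counts (assumed names)
    sizes_loc = [m + 1, n + 1, p + 1]                     ! local cell counts (assumed names)
    starts    = [start_idx_x, start_idx_y, start_idx_z]   ! local origin in global indexing (assumed)

    call MPI_TYPE_CREATE_SUBARRAY(3, sizes_glb, sizes_loc, starts, MPI_ORDER_FORTRAN, &
                                  MPI_DOUBLE_PRECISION, mpi_subarray_type, ierr)
    call MPI_TYPE_COMMIT(mpi_subarray_type, ierr)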

◆ s_initialize_mpi_proxy_module()

subroutine m_mpi_proxy::s_initialize_mpi_proxy_module ( )

Computation of parameters, allocation procedures, and/or any other tasks needed to properly set up the module.

Definition at line 175 of file m_mpi_proxy.f90.
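
A minimal sketch of the kind of setup this could involve, assuming an illustrative buffer length buff_len and processor count num_procs; the actual sizing logic is not specified here:

    ! Hypothetical sketch: allocate the module's communication work arrays
    allocate (q_cons_buffer_in(0:buff_len - 1))    ! incoming packed buffer region
    allocate (q_cons_buffer_out(0:buff_len - 1))   ! outgoing packed buffer region
    allocate (recvcounts(0:num_procs - 1))         ! per-rank receive counts for gathers
    allocate (displs(0:num_procs - 1))             ! per-rank displacements for gathers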

◆ s_mpi_abort()

subroutine m_mpi_proxy::s_mpi_abort ( )

The subroutine terminates the MPI execution environment.

Definition at line 113 of file m_mpi_proxy.f90.
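
A minimal sketch of such a termination using the standard MPI Fortran binding; the error code value is illustrative:

    subroutine s_mpi_abort_sketch()   ! hypothetical stand-in, not the module's routine

        use mpi

        implicit none

        integer :: ierr

        ! Request that every process in the communicator terminate immediately,
        ! returning a nonzero error code to the invoking environment
        call MPI_ABORT(MPI_COMM_WORLD, 1, ierr)

    end subroutine s_mpi_abort_sketch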

◆ s_mpi_barrier()

subroutine m_mpi_proxy::s_mpi_barrier ( )

Halts all processes until all have reached the barrier.

Definition at line 164 of file m_mpi_proxy.f90.

◆ s_mpi_bcast_user_inputs()

subroutine m_mpi_proxy::s_mpi_bcast_user_inputs ( )

Since only the processor with rank 0 is in charge of reading and checking the consistency of the user-provided inputs, these are not available to the remaining processors. This subroutine is therefore in charge of broadcasting the required information.

Definition at line 273 of file m_mpi_proxy.f90.
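
A minimal sketch of the broadcast pattern with the standard MPI Fortran bindings; the two inputs shown are illustrative placeholders, and in practice the variables being broadcast would be module-level user inputs rather than locals:

    subroutine s_mpi_bcast_user_inputs_sketch()   ! hypothetical stand-in

        use mpi

        implicit none

        integer :: m            ! illustrative input, read and checked on rank 0 only
        integer :: t_step_start ! illustrative input, read and checked on rank 0 only
        integer :: ierr

        ! Rank 0 holds the validated values; every other rank receives a copy
        call MPI_BCAST(m, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
        call MPI_BCAST(t_step_start, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

    end subroutine s_mpi_bcast_user_inputs_sketch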

◆ s_mpi_decompose_computational_domain()

subroutine m_mpi_proxy::s_mpi_decompose_computational_domain ( )

This subroutine takes care of efficiently distributing the computational domain among the available processors, as well as recomputing some of the global parameters so that they reflect the configuration of the sub-domain overseen by the local processor.

Definition at line 440 of file m_mpi_proxy.f90.
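
A minimal sketch of a one-dimensional block decomposition under the assumption that the global cell count is split as evenly as possible, with the remainder handed to the lowest-ranked processors; all names are illustrative, not the module's:

    subroutine s_decompose_1d_sketch(m_glb, num_procs, proc_rank, m_loc, offset)   ! hypothetical

        implicit none

        integer, intent(in) :: m_glb       ! global number of cells
        integer, intent(in) :: num_procs   ! number of available processors
        integer, intent(in) :: proc_rank   ! rank of the local processor
        integer, intent(out) :: m_loc      ! number of cells owned by this rank
        integer, intent(out) :: offset     ! global index of this rank's first cell

        integer :: rem

        ! Even split, with the remainder spread over the lowest-ranked processors
        m_loc = m_glb/num_procs
        rem = mod(m_glb, num_procs)
        if (proc_rank < rem) m_loc = m_loc + 1

        ! Starting position of the local sub-domain within the global grid
        offset = proc_rank*(m_glb/num_procs) + min(proc_rank, rem)

    end subroutine s_decompose_1d_sketch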

◆ s_mpi_defragment_1d_flow_variable()

subroutine m_mpi_proxy::s_mpi_defragment_1d_flow_variable ( real(kind(0d0)), dimension(0:m,0:0,0:0), intent(in)  q_sf,
real(kind(0d0)), dimension(0:m_root,0:0,0:0), intent(inout)  q_root_sf 
)

This subroutine gathers the sub-domain flow variable data from all of the processors and reassembles it over the entire computational domain on the rank 0 processor. This is only done for 1D simulations.

Parameters
q_sf: Flow variable defined on a single computational sub-domain
q_root_sf: Flow variable defined on the entire computational domain

Definition at line 1847 of file m_mpi_proxy.f90.

◆ s_mpi_defragment_1d_grid_variable()

subroutine m_mpi_proxy::s_mpi_defragment_1d_grid_variable ( )

This subroutine collects the sub-domain cell-boundary or cell-center location data from all of the processors and reassembles the grid of the entire computational domain on the rank 0 processor. This is only done for 1D simulations.

Definition at line 1781 of file m_mpi_proxy.f90.
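
A minimal sketch of the gather pattern that both 1D defragmentation routines above might follow, assuming m is the largest local cell index and m_root the largest global one; the local recvcounts and displs arrays mirror the module variables of the same names:

    subroutine s_defragment_1d_sketch(q_sf, q_root_sf, m, m_root, num_procs)   ! hypothetical

        use mpi

        implicit none

        integer, intent(in) :: m, m_root, num_procs
        real(kind(0d0)), dimension(0:m), intent(in) :: q_sf              ! local sub-domain data
        real(kind(0d0)), dimension(0:m_root), intent(inout) :: q_root_sf ! assembled data on rank 0

        integer, dimension(0:num_procs - 1) :: recvcounts, displs
        integer :: i, sendcount, ierr

        ! Rank 0 learns how many cells every process will contribute
        sendcount = m + 1
        call MPI_GATHER(sendcount, 1, MPI_INTEGER, recvcounts, 1, MPI_INTEGER, &
                        0, MPI_COMM_WORLD, ierr)

        ! Displacements are the running sum of the receive counts
        displs(0) = 0
        do i = 1, num_procs - 1
            displs(i) = displs(i - 1) + recvcounts(i - 1)
        end do

        ! Collect the variable-length pieces into the global array on rank 0
        call MPI_GATHERV(q_sf, sendcount, MPI_DOUBLE_PRECISION, &
                         q_root_sf, recvcounts, displs, MPI_DOUBLE_PRECISION, &
                         0, MPI_COMM_WORLD, ierr)

    end subroutine s_defragment_1d_sketch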

◆ s_mpi_finalize()

subroutine m_mpi_proxy::s_mpi_finalize ( )

Finalization of all MPI-related processes.

Definition at line 1894 of file m_mpi_proxy.f90.

◆ s_mpi_gather_data_extents()

subroutine m_mpi_proxy::s_mpi_gather_data_extents ( real(kind(0d0)), dimension(:,:,:), intent(in)  q_sf,
real(kind(0d0)), dimension(1:2,0:num_procs-1), intent(inout)  data_extents 
)

This subroutine gathers the Silo database metadata for the flow variable's extents so as to boost the performance of the multidimensional visualization.

Parameters
q_sf: Flow variable defined on a single computational sub-domain
data_extents: The flow variable's extents on each processor's sub-domain. The first dimension of the array corresponds to the minimum and maximum values, respectively, while the second dimension corresponds to each processor's rank.

Definition at line 1816 of file m_mpi_proxy.f90.
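
A minimal sketch of how the extents could be assembled, assuming each process first reduces its sub-domain to a (minimum, maximum) pair that rank 0 then gathers in rank order:

    subroutine s_gather_data_extents_sketch(q_sf, data_extents, num_procs)   ! hypothetical

        use mpi

        implicit none

        integer, intent(in) :: num_procs
        real(kind(0d0)), dimension(:, :, :), intent(in) :: q_sf   ! flow variable on this sub-domain
        real(kind(0d0)), dimension(1:2, 0:num_procs - 1), intent(inout) :: data_extents

        real(kind(0d0)), dimension(2) :: minmax
        integer :: ierr

        ! Reduce the local sub-domain to its minimum and maximum values
        minmax(1) = minval(q_sf)
        minmax(2) = maxval(q_sf)

        ! Rank 0 receives one (min, max) pair per processor, ordered by rank
        call MPI_GATHER(minmax, 2, MPI_DOUBLE_PRECISION, &
                        data_extents, 2, MPI_DOUBLE_PRECISION, &
                        0, MPI_COMM_WORLD, ierr)

    end subroutine s_gather_data_extents_sketch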

◆ s_mpi_gather_spatial_extents()

subroutine m_mpi_proxy::s_mpi_gather_spatial_extents ( real(kind(0d0)), dimension(1:,0:), intent(inout)  spatial_extents)

This subroutine gathers the Silo database metadata for the spatial extents in order to boost the performance of the multidimensional visualization.

Parameters
spatial_extents: Spatial extents of each processor's sub-domain. The first dimension corresponds to the minimum and maximum values, respectively, while the second dimension corresponds to the processor rank.

Definition at line 1661 of file m_mpi_proxy.f90.

◆ s_mpi_initialize()

subroutine m_mpi_proxy::s_mpi_initialize ( )

The subroutine initializes the MPI environment and queries both the number of processors available for the job and the local processor rank.

Definition at line 84 of file m_mpi_proxy.f90.
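
A minimal sketch of how such an initialization might look with the standard MPI Fortran bindings; the variable names num_procs and proc_rank are assumptions and may differ from the module's actual internals:

    subroutine s_mpi_initialize_sketch()   ! hypothetical stand-in, not the module's routine

        use mpi

        implicit none

        integer :: num_procs   ! number of processors available for the job (assumed name)
        integer :: proc_rank   ! rank of the local processor (assumed name)
        integer :: ierr        ! generic MPI error flag

        ! Establish the MPI execution environment
        call MPI_INIT(ierr)

        ! Query the total number of processors and the local processor rank
        call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, proc_rank, ierr)

    end subroutine s_mpi_initialize_sketch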

◆ s_mpi_reduce_maxloc()

subroutine m_mpi_proxy::s_mpi_reduce_maxloc ( real(kind(0d0)), dimension(2), intent(inout)  var_loc)

This subroutine takes the first element of the two-element input variable and determines its maximum value over the entire computational domain. The result is stored back into the first element of the variable, while the rank of the processor in charge of the sub-domain containing the maximum is stored into the second element.

Parameters
var_loc: On input, this variable holds the local value and the processor rank, which are to be reduced among all the processors in the communicator. On output, it holds the maximum value, reduced over all of the local values, and the rank of the process to which that value belongs.

Definition at line 1631 of file m_mpi_proxy.f90.
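
A minimal sketch using MPI's MAXLOC reduction on (value, rank) pairs, which avoids a second communication step to identify the owning processor; whether the module uses an all-reduce or a rooted reduce is not specified here:

    subroutine s_reduce_maxloc_sketch(var_loc)   ! hypothetical stand-in

        use mpi

        implicit none

        real(kind(0d0)), dimension(2), intent(inout) :: var_loc   ! (local value, local rank)

        real(kind(0d0)), dimension(2) :: var_glb
        integer :: ierr

        ! MPI_MAXLOC on MPI_2DOUBLE_PRECISION reduces (value, index) pairs: the result
        ! carries the global maximum and the rank (stored as a real) that owns it
        call MPI_ALLREDUCE(var_loc, var_glb, 1, MPI_2DOUBLE_PRECISION, &
                           MPI_MAXLOC, MPI_COMM_WORLD, ierr)

        var_loc = var_glb

    end subroutine s_reduce_maxloc_sketch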

◆ s_mpi_sendrecv_cons_vars_buffer_regions()

subroutine m_mpi_proxy::s_mpi_sendrecv_cons_vars_buffer_regions ( type(scalar_field), dimension(sys_size), intent(inout)  q_cons_vf,
character(len = 3), intent(in)  pbc_loc,
character, intent(in)  sweep_coord 
)

Communicates the buffer regions associated with the conservative variables with the processors in charge of the neighboring sub-domains.

Parameters
q_cons_vf: Conservative variables
pbc_loc: Processor boundary condition (PBC) location
sweep_coord: Coordinate direction normal to the processor boundary

Definition at line 1094 of file m_mpi_proxy.f90.
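
A minimal sketch of the exchange pattern for one processor boundary, using MPI_SENDRECV on packed 1D buffers; the neighbor ranks, buffer length and packing are illustrative assumptions rather than the module's actual logic:

    subroutine s_sendrecv_buffer_sketch(q_cons_buffer_out, q_cons_buffer_in, &
                                        buff_len, nbr_beg, nbr_end)   ! hypothetical

        use mpi

        implicit none

        integer, intent(in) :: buff_len          ! packed length of one buffer region
        integer, intent(in) :: nbr_beg, nbr_end  ! neighbor ranks at the two boundaries
        real(kind(0d0)), dimension(0:buff_len - 1), intent(in) :: q_cons_buffer_out
        real(kind(0d0)), dimension(0:buff_len - 1), intent(inout) :: q_cons_buffer_in

        integer :: ierr

        ! Send the packed buffer region toward the neighbor at the end of the sub-domain
        ! while receiving the region arriving from the neighbor at its beginning
        call MPI_SENDRECV(q_cons_buffer_out, buff_len, MPI_DOUBLE_PRECISION, nbr_end, 0, &
                          q_cons_buffer_in, buff_len, MPI_DOUBLE_PRECISION, nbr_beg, 0, &
                          MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)

    end subroutine s_sendrecv_buffer_sketch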

◆ s_mpi_sendrecv_grid_vars_buffer_regions()

subroutine m_mpi_proxy::s_mpi_sendrecv_grid_vars_buffer_regions ( character(len = 3), intent(in)  pbc_loc,
character, intent(in)  sweep_coord 
)

Communicates the buffer regions associated with the grid variables with the processors in charge of the neighboring sub-domains. Note that only cell-width spacings feature buffer regions, so no information relating to the cell-boundary locations is communicated.

Parameters
pbc_loc: Processor boundary condition (PBC) location
sweep_coord: Coordinate direction normal to the processor boundary

Definition at line 892 of file m_mpi_proxy.f90.
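
A corresponding minimal fragment for the cell-width spacings, reusing the MPI_SENDRECV pattern sketched above; here dx is assumed to carry buff_size ghost cells on either side of the 0..m range, and nbr_beg/nbr_end are assumed neighbor ranks:

    ! Hypothetical fragment: exchange buff_size cell widths across the x-direction
    ! processor boundaries (send the last interior widths, receive into the ghost cells)
    call MPI_SENDRECV(dx(m - buff_size + 1), buff_size, MPI_DOUBLE_PRECISION, nbr_end, 0, &
                      dx(-buff_size), buff_size, MPI_DOUBLE_PRECISION, nbr_beg, 0, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)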

Variable Documentation

◆ displs

integer, dimension(:), allocatable m_mpi_proxy::displs

Definition at line 67 of file m_mpi_proxy.f90.

◆ err_code

integer, private m_mpi_proxy::err_code
private

Definition at line 72 of file m_mpi_proxy.f90.

◆ ierr

integer, private m_mpi_proxy::ierr
private

Definition at line 72 of file m_mpi_proxy.f90.

◆ q_cons_buffer_in

real(kind(0d0)), dimension(:), allocatable m_mpi_proxy::q_cons_buffer_in

Definition at line 58 of file m_mpi_proxy.f90.

◆ q_cons_buffer_out

real(kind(0d0)), dimension(:), allocatable m_mpi_proxy::q_cons_buffer_out

Definition at line 59 of file m_mpi_proxy.f90.

◆ recvcounts

integer, dimension(:), allocatable m_mpi_proxy::recvcounts

Definition at line 66 of file m_mpi_proxy.f90.