This module serves as a proxy to the parameters and subroutines available in the MPI implementation's MPI module. Specifically, the role of the proxy is to harness basic MPI commands into more complex procedures so as to achieve the required communication goals for the post-process.
More...
|
subroutine | s_initialize_mpi_proxy_module |
| Computation of parameters, allocation procedures, and/or any other tasks needed to properly set up the module.
|
|
subroutine | s_mpi_bcast_user_inputs |
| Since only the processor with rank 0 is in charge of reading and checking the consistency of the user-provided inputs, these are not available to the remaining processors. This subroutine is therefore in charge of broadcasting the required information.
|
|
subroutine | s_mpi_decompose_computational_domain |
| This subroutine takes care of efficiently distributing the computational domain among the available processors, as well as recomputing some of the global parameters so that they reflect the configuration of the sub-domain overseen by the local processor.
|
|
subroutine | s_mpi_sendrecv_grid_vars_buffer_regions (pbc_loc, sweep_coord) |
| Communicates the buffer regions associated with the grid variables with the processors in charge of the neighboring sub-domains. Note that only the cell-width spacings feature buffer regions, so no information relating to the cell-boundary locations is communicated.
|
|
subroutine | s_mpi_sendrecv_cons_vars_buffer_regions (q_cons_vf, pbc_loc, sweep_coord, q_particle) |
| Communicates buffer regions associated with conservative variables with processors in charge of the neighboring sub-domains.
|
|
subroutine | s_mpi_gather_spatial_extents (spatial_extents) |
| This subroutine gathers the Silo database metadata for the spatial extents in order to boost the performance of the multidimensional visualization.
|
|
subroutine | s_mpi_defragment_1d_grid_variable |
| This subroutine collects the sub-domain cell-boundary or cell-center location data from all of the processors and reassembles the grid of the entire computational domain on the rank 0 processor. This is only done for 1D simulations.
|
|
subroutine | s_mpi_gather_data_extents (q_sf, data_extents) |
| This subroutine gathers the Silo database metadata for the flow variable's extents so as to boost the performance of the multidimensional visualization.
|
|
subroutine | s_mpi_defragment_1d_flow_variable (q_sf, q_root_sf) |
| This subroutine gathers the sub-domain flow variable data from all of the processors and puts it back together for the entire computational domain on the rank 0 processor. This is only done for 1D simulations.
|
|
subroutine | s_finalize_mpi_proxy_module |
| Deallocation procedures for the module.
|
|
Buffers used to communicate the conservative variables' buffer regions with the neighboring processors. Note that these variables are structured as vectors rather than arrays.
|
real(wp), dimension(:), allocatable | q_cons_buffer_in |
|
real(wp), dimension(:), allocatable | q_cons_buffer_out |
|
|
Receive counts and displacement vector variables, enabling MPI to gather varying amounts of data from all processes to the root process.
|
integer, dimension(:), allocatable | recvcounts |
|
integer, dimension(:), allocatable | displs |
|
|
integer, private | err_code |
|
integer, private | ierr |
|
This module serves as a proxy to the parameters and subroutines available in the MPI implementation's MPI module. Specifically, the role of the proxy is to harness basic MPI commands into more complex procedures so as to achieve the required communication goals for the post-process.
◆ s_finalize_mpi_proxy_module()
subroutine m_mpi_proxy::s_finalize_mpi_proxy_module |
Deallocation procedures for the module.
◆ s_initialize_mpi_proxy_module()
subroutine m_mpi_proxy::s_initialize_mpi_proxy_module |
Computation of parameters, allocation procedures, and/or any other tasks needed to properly set up the module.
◆ s_mpi_bcast_user_inputs()
subroutine m_mpi_proxy::s_mpi_bcast_user_inputs |
Since only the processor with rank 0 is in charge of reading and checking the consistency of the user-provided inputs, these are not available to the remaining processors. This subroutine is therefore in charge of broadcasting the required information.
◆ s_mpi_decompose_computational_domain()
subroutine m_mpi_proxy::s_mpi_decompose_computational_domain |
This subroutine takes care of efficiently distributing the computational domain among the available processors, as well as recomputing some of the global parameters so that they reflect the configuration of the sub-domain overseen by the local processor.
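The even distribution of cells among processors can be pictured with a minimal sketch along one coordinate direction. This is an illustration only, not the module's actual algorithm; the helper name `decompose_1d` and its arguments are hypothetical:

```python
# Hypothetical helper (not part of m_mpi_proxy): split n_cells cells of one
# coordinate direction among num_procs ranks as evenly as possible, handing
# the remainder cells to the lowest ranks.
def decompose_1d(n_cells, num_procs, rank):
    base, rem = divmod(n_cells, num_procs)
    count = base + (1 if rank < rem else 0)   # local cell count on this rank
    start = rank * base + min(rank, rem)      # global index of first local cell
    return start, count
```

Taken over all ranks, the returned ranges tile the global domain contiguously with no overlap, which is the property the recomputed local parameters must preserve.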
◆ s_mpi_defragment_1d_flow_variable()
subroutine m_mpi_proxy::s_mpi_defragment_1d_flow_variable |
( |
real(wp), dimension(0:m), intent(in) | q_sf, |
|
|
real(wp), dimension(0:m), intent(inout) | q_root_sf ) |
This subroutine gathers the sub-domain flow variable data from all of the processors and puts it back together for the entire computational domain on the rank 0 processor. This is only done for 1D simulations.
- Parameters
q_sf | Flow variable defined on a single computational sub-domain |
q_root_sf | Flow variable defined on the entire computational domain |
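The effect of the gather can be sketched without MPI: each rank contributes its local slice of the flow variable, and rank 0 receives them concatenated in rank order. The Python helper below is a stand-in for what an MPI_GATHERV call achieves here; the name `defragment_1d` mirrors the subroutine but the code is purely illustrative:

```python
# Illustrative stand-in (not the module's actual code): local_slices[r] holds
# rank r's sub-domain flow variable data; rank 0 ends up with the slices
# concatenated in rank order, reconstructing the whole 1D domain.
def defragment_1d(local_slices):
    q_root_sf = []
    for q_sf in local_slices:
        q_root_sf.extend(q_sf)
    return q_root_sf
```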
◆ s_mpi_defragment_1d_grid_variable()
subroutine m_mpi_proxy::s_mpi_defragment_1d_grid_variable |
This subroutine collects the sub-domain cell-boundary or cell-center location data from all of the processors and reassembles the grid of the entire computational domain on the rank 0 processor. This is only done for 1D simulations.
◆ s_mpi_gather_data_extents()
subroutine m_mpi_proxy::s_mpi_gather_data_extents |
( |
real(wp), dimension(:, :, :), intent(in) | q_sf, |
|
|
real(wp), dimension(1:2, 0:num_procs - 1), intent(inout) | data_extents ) |
This subroutine gathers the Silo database metadata for the flow variable's extents so as to boost the performance of the multidimensional visualization.
- Parameters
q_sf | Flow variable defined on a single computational sub-domain |
data_extents | The flow variable extents on each processor's sub-domain. The first dimension of the array corresponds to the flow variable's minimum and maximum values, respectively, while the second dimension corresponds to each processor's rank. |
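The data gathered into `data_extents` amounts to one (min, max) pair per rank. A minimal sketch, assuming each rank's flow variable has already been flattened to a list of values (names hypothetical, not the module's code):

```python
# Sketch: each rank reduces its sub-domain flow variable to a (min, max)
# pair; gathering the pairs yields the data_extents array, indexed first by
# min/max and then by rank. local_fields[r] stands in for rank r's q_sf.
def gather_data_extents(local_fields):
    return [(min(q_sf), max(q_sf)) for q_sf in local_fields]
```

The visualization tool can then skip any sub-domain whose extents fall outside the plotted range, which is where the performance gain comes from.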
◆ s_mpi_gather_spatial_extents()
subroutine m_mpi_proxy::s_mpi_gather_spatial_extents |
( |
real(wp), dimension(1:, 0:), intent(inout) | spatial_extents | ) |
|
This subroutine gathers the Silo database metadata for the spatial extents in order to boost the performance of the multidimensional visualization.
- Parameters
spatial_extents | Spatial extents for each processor's sub-domain. First dimension corresponds to the minimum and maximum values, respectively, while the second dimension corresponds to the processor rank. |
◆ s_mpi_sendrecv_cons_vars_buffer_regions()
subroutine m_mpi_proxy::s_mpi_sendrecv_cons_vars_buffer_regions |
( |
type(scalar_field), dimension(sys_size), intent(inout) | q_cons_vf, |
|
|
character(len=3), intent(in) | pbc_loc, |
|
|
character, intent(in) | sweep_coord, |
|
|
type(scalar_field), intent(inout), optional | q_particle ) |
Communicates buffer regions associated with conservative variables with processors in charge of the neighboring sub-domains.
- Parameters
q_cons_vf | Conservative variables |
pbc_loc | Processor boundary condition (PBC) location |
sweep_coord | Coordinate direction normal to the processor boundary |
q_particle | Projection of the Lagrangian particles in the Eulerian framework |
◆ s_mpi_sendrecv_grid_vars_buffer_regions()
subroutine m_mpi_proxy::s_mpi_sendrecv_grid_vars_buffer_regions |
( |
character(len=3), intent(in) | pbc_loc, |
|
|
character, intent(in) | sweep_coord ) |
Communicates the buffer regions associated with the grid variables with the processors in charge of the neighboring sub-domains. Note that only the cell-width spacings feature buffer regions, so no information relating to the cell-boundary locations is communicated.
- Parameters
pbc_loc | Processor boundary condition (PBC) location |
sweep_coord | Coordinate direction normal to the processor boundary |
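The exchange pattern is a halo exchange along the sweep coordinate: each rank fills its buffer regions from the interiors of its two neighbors. The sketch below illustrates the idea for a periodic arrangement of ranks; the helper name, the use of Python lists, and the assumption of periodicity are all hypothetical and not taken from the module:

```python
# Purely illustrative halo exchange: domains[r] holds rank r's interior
# cell-width spacings along one coordinate direction. Each rank's left
# buffer comes from the tail of its left neighbor's interior, and its right
# buffer from the head of its right neighbor's interior (periodic here).
def exchange_buffer_regions(domains, buff_size):
    n = len(domains)
    halos = []
    for r in range(n):
        left = domains[(r - 1) % n][-buff_size:]   # from left neighbor's end
        right = domains[(r + 1) % n][:buff_size]   # from right neighbor's start
        halos.append((left, right))
    return halos
```

In the actual module, MPI send/receive pairs carry these buffers between processors, and `pbc_loc` selects which boundary of the sub-domain is being exchanged.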
◆ displs
integer, dimension(:), allocatable m_mpi_proxy::displs |
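The displacement vector that MPI_GATHERV expects is the exclusive prefix sum of the receive counts: entry `i` gives the offset in the receive buffer at which rank `i`'s data begins. A sketch (not the module's actual code):

```python
# displs[i] is the offset at which rank i's contribution lands in the root's
# receive buffer: the exclusive prefix sum of recvcounts.
def compute_displs(recvcounts):
    displs = [0] * len(recvcounts)
    for i in range(1, len(recvcounts)):
        displs[i] = displs[i - 1] + recvcounts[i - 1]
    return displs
```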
◆ err_code
integer, private m_mpi_proxy::err_code |
|
private |
◆ ierr
integer, private m_mpi_proxy::ierr |
|
private |
◆ q_cons_buffer_in
real(wp), dimension(:), allocatable m_mpi_proxy::q_cons_buffer_in |
◆ q_cons_buffer_out
real(wp), dimension(:), allocatable m_mpi_proxy::q_cons_buffer_out |
◆ recvcounts
integer, dimension(:), allocatable m_mpi_proxy::recvcounts |