MFC: Post-Process
High-fidelity multiphase flow simulation
Modules
module | m_mpi_proxy |
This module serves as a proxy to the parameters and subroutines available in the MPI implementation's MPI module. Specifically, the role of the proxy is to harness basic MPI commands into more complex procedures so as to achieve the communication goals of the post-process.
Functions/Subroutines
subroutine | m_mpi_proxy::s_initialize_mpi_proxy_module |
Computation of parameters, allocation procedures, and any other tasks needed to properly set up the module.
subroutine | m_mpi_proxy::s_mpi_bcast_user_inputs |
Since only the processor with rank 0 reads and checks the consistency of the user-provided inputs, these are not available to the remaining processors. This subroutine is therefore in charge of broadcasting the required information (see the broadcast sketch after this list).
subroutine | m_mpi_proxy::s_mpi_decompose_computational_domain |
This subroutine distributes the computational domain among the available processors and recomputes some of the global parameters so that they reflect the configuration of the sub-domain overseen by the local processor (see the decomposition sketch after this list).
subroutine | m_mpi_proxy::s_mpi_sendrecv_grid_vars_buffer_regions (pbc_loc, sweep_coord) |
Communicates the buffer regions associated with the grid variables with the processors in charge of the neighboring sub-domains. Note that only the cell-width spacings feature buffer regions, so no information relating to the cell-boundary locations is communicated (see the exchange sketch after this list).
subroutine | m_mpi_proxy::s_mpi_sendrecv_cons_vars_buffer_regions (q_cons_vf, pbc_loc, sweep_coord, q_particle) |
Communicates the buffer regions associated with the conservative variables with the processors in charge of the neighboring sub-domains.
subroutine | m_mpi_proxy::s_mpi_gather_spatial_extents (spatial_extents) |
This subroutine gathers the Silo database metadata for the spatial extents in order to boost the performance of the multidimensional visualization.
subroutine | m_mpi_proxy::s_mpi_defragment_1d_grid_variable |
This subroutine collects the sub-domain cell-boundary or cell-center locations from all of the processors and reassembles the grid of the entire computational domain on the rank-0 processor. This is only done for 1D simulations.
subroutine | m_mpi_proxy::s_mpi_gather_data_extents (q_sf, data_extents) |
This subroutine gathers the Silo database metadata for the flow variable's extents so as to boost the performance of the multidimensional visualization.
subroutine | m_mpi_proxy::s_mpi_defragment_1d_flow_variable (q_sf, q_root_sf) |
This subroutine gathers the sub-domain flow variable data from all of the processors and reassembles it for the entire computational domain on the rank-0 processor. This is only done for 1D simulations (see the gather sketch at the end of this page).
subroutine | m_mpi_proxy::s_finalize_mpi_proxy_module |
Deallocation procedures for the module.
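
To illustrate the broadcasting step performed by s_mpi_bcast_user_inputs, the following is a minimal, self-contained Fortran sketch, not MFC's actual implementation: a few hypothetical input parameters (m, n, p, dt) stand in for the real input set, rank 0 assigns their values, and MPI_BCAST makes them available on every rank.

    program bcast_inputs_sketch
        use mpi
        implicit none

        integer :: rank, ierr
        integer :: m, n, p            ! hypothetical global cell counts
        real(kind(0d0)) :: dt         ! hypothetical time-step size

        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

        if (rank == 0) then
            ! In the post-process these values would come from the input
            ! file; here they are simply hard-coded placeholders.
            m = 100; n = 50; p = 25; dt = 1d-6
        end if

        ! Every rank participates in the broadcast; after these calls the
        ! values set on rank 0 are available everywhere.
        call MPI_BCAST(m, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
        call MPI_BCAST(n, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
        call MPI_BCAST(p, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
        call MPI_BCAST(dt, 1, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

        print '(a,i0,a,3i5,es12.4)', 'rank ', rank, ' received: ', m, n, p, dt

        call MPI_FINALIZE(ierr)
    end program bcast_inputs_sketch
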
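The distribution performed by s_mpi_decompose_computational_domain can be pictured with the following minimal sketch of a 1D block decomposition; the variable names (m_glb, m_loc, offset) are hypothetical and the arithmetic simply splits the cells as evenly as possible, handing the remainder to the lowest ranks.

    program decompose_sketch
        use mpi
        implicit none

        integer :: rank, num_procs, ierr
        integer, parameter :: m_glb = 299   ! global cells indexed 0..m_glb
        integer :: m_loc                    ! local cells indexed 0..m_loc
        integer :: offset, rem

        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
        call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)

        ! Even split of the (m_glb+1) cells; the remainder goes to the
        ! first ranks, one extra cell each.
        m_loc = (m_glb + 1)/num_procs - 1
        rem   = mod(m_glb + 1, num_procs)
        if (rank < rem) m_loc = m_loc + 1

        ! Global index of this rank's first cell
        offset = rank*((m_glb + 1)/num_procs) + min(rank, rem)

        print '(a,i0,a,i0,a,i0)', 'rank ', rank, ': first global cell ', &
            offset, ', local cell count ', m_loc + 1

        call MPI_FINALIZE(ierr)
    end program decompose_sketch
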
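The buffer-region exchange carried out by the s_mpi_sendrecv_* subroutines follows the usual MPI_SENDRECV pattern. The sketch below, assuming a hypothetical periodic 1D layout and a cell-width array dx, shows the idea: each rank ships its outermost interior cells to a neighbor while receiving that neighbor's outermost cells into its own buffer region.

    program sendrecv_buffers_sketch
        use mpi
        implicit none

        integer, parameter :: buff_size = 2   ! ghost cells per side
        integer, parameter :: m = 9           ! interior cells indexed 0..m
        integer :: rank, num_procs, ierr
        integer :: left, right
        integer :: istat(MPI_STATUS_SIZE)
        real(kind(0d0)) :: dx(-buff_size:m + buff_size)  ! widths + buffers

        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
        call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)

        ! Periodic neighbors in a simple 1D decomposition
        left  = mod(rank - 1 + num_procs, num_procs)
        right = mod(rank + 1, num_procs)

        dx = -1d0
        dx(0:m) = real(rank, kind(0d0))   ! interior values stamped with rank

        ! Send the last interior cells to the right neighbor while
        ! receiving the left neighbor's last cells into the left buffer ...
        call MPI_SENDRECV(dx(m - buff_size + 1), buff_size, MPI_DOUBLE_PRECISION, right, 0, &
                          dx(-buff_size),        buff_size, MPI_DOUBLE_PRECISION, left,  0, &
                          MPI_COMM_WORLD, istat, ierr)

        ! ... and the first interior cells to the left neighbor while
        ! receiving the right neighbor's first cells into the right buffer.
        call MPI_SENDRECV(dx(0),     buff_size, MPI_DOUBLE_PRECISION, left,  1, &
                          dx(m + 1), buff_size, MPI_DOUBLE_PRECISION, right, 1, &
                          MPI_COMM_WORLD, istat, ierr)

        print '(a,i0,a,2f5.1,a,2f5.1)', 'rank ', rank, ' left buffer ', &
            dx(-buff_size:-1), '  right buffer ', dx(m + 1:m + buff_size)

        call MPI_FINALIZE(ierr)
    end program sendrecv_buffers_sketch
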
Variables
Buffers for the conservative variables received from and sent to the processors in charge of the neighboring sub-domains. Note that these variables are structured as vectors rather than arrays.
real(wp), dimension(:), allocatable | m_mpi_proxy::q_cons_buffer_in |
real(wp), dimension(:), allocatable | m_mpi_proxy::q_cons_buffer_out |
Receive counts and displacement vector variables, respectively, used to enable MPI to gather varying amounts of data from all processes to the root process.
integer, dimension(:), allocatable | m_mpi_proxy::recvcounts |
integer, dimension(:), allocatable | m_mpi_proxy::displs |
Generic flags used to identify and report MPI errors.
integer, private | m_mpi_proxy::err_code |
integer, private | m_mpi_proxy::ierr |
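
The gather-type operations above (the extent gathering and the 1D defragmentation) collect varying amounts of data on the root process, which is exactly what the recvcounts and displs vectors enable. The following minimal sketch, assuming hypothetical local sizes rather than MFC's actual ones, rebuilds a 1D flow variable on rank 0 with MPI_GATHER followed by MPI_GATHERV.

    program gatherv_sketch
        use mpi
        implicit none

        integer :: rank, num_procs, ierr, i
        integer :: n_loc, n_glb
        integer, allocatable :: recvcounts(:), displs(:)
        real(kind(0d0)), allocatable :: q_sf(:), q_root_sf(:)

        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
        call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)

        ! Deliberately uneven local sizes so the gather must be "v"-shaped
        n_loc = 4 + rank
        allocate (q_sf(n_loc))
        q_sf = real(rank, kind(0d0))

        allocate (recvcounts(num_procs), displs(num_procs))

        ! Rank 0 learns how much data each rank will contribute ...
        call MPI_GATHER(n_loc, 1, MPI_INTEGER, recvcounts, 1, MPI_INTEGER, &
                        0, MPI_COMM_WORLD, ierr)

        ! ... and turns the counts into displacements into the global array.
        if (rank == 0) then
            displs(1) = 0
            do i = 2, num_procs
                displs(i) = displs(i - 1) + recvcounts(i - 1)
            end do
            n_glb = sum(recvcounts)
            allocate (q_root_sf(n_glb))
        else
            allocate (q_root_sf(1))   ! unused on non-root ranks
        end if

        ! Reassemble the entire 1D field on the root processor
        call MPI_GATHERV(q_sf, n_loc, MPI_DOUBLE_PRECISION, &
                         q_root_sf, recvcounts, displs, MPI_DOUBLE_PRECISION, &
                         0, MPI_COMM_WORLD, ierr)

        if (rank == 0) print '(a,i0,a)', 'root reassembled ', n_glb, ' cells'

        call MPI_FINALIZE(ierr)
    end program gatherv_sketch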