MFC: Simulation
High-fidelity multiphase flow simulation
m_mpi_proxy.fpp.f90 File Reference

Functions/Subroutines

program __m_mpi_proxy_fpp_f90__
 
subroutine s_initialize_mpi_proxy_module ()
 The computation of parameters, the allocation of memory, the association of pointers, and the execution of any other procedures necessary to set up the module.
 
subroutine s_mpi_bcast_user_inputs ()
 Since only the processor with rank 0 reads and verifies the consistency of the user inputs, these inputs are initially unavailable to the other processors. The purpose of this subroutine is therefore to distribute the user inputs to the remaining processors in the communicator.
 
subroutine s_mpi_decompose_computational_domain ()
 The purpose of this procedure is to optimally decompose the computational domain among the available processors. This is performed by attempting to award each processor, in each of the coordinate directions, approximately the same number of cells, and then recomputing the affected global parameters.
 
subroutine s_mpi_sendrecv_grid_variables_buffers (mpi_dir, pbc_loc)
 The goal of this procedure is to populate the buffers of the grid variables by communicating with the neighboring processors. Note that only the buffers of the cell-width distributions are handled in such a way. This is because the buffers of cell-boundary locations may be calculated directly from those of the cell-width distributions.
 
subroutine s_mpi_sendrecv_variables_buffers (q_cons_vf, pb, mv, mpi_dir, pbc_loc)
 The goal of this procedure is to populate the buffers of the cell-average conservative variables by communicating with the neighboring processors.
 
subroutine s_mpi_sendrecv_ib_buffers (ib_markers, gp_layers)
 The goal of this procedure is to populate the buffers of the immersed boundary (IB) markers by communicating with the neighboring processors.
 
subroutine s_finalize_mpi_proxy_module ()
 Module deallocation and/or disassociation procedures.
 

Function/Subroutine Documentation

◆ __m_mpi_proxy_fpp_f90__()

program __m_mpi_proxy_fpp_f90__

◆ s_finalize_mpi_proxy_module()

subroutine __m_mpi_proxy_fpp_f90__::s_finalize_mpi_proxy_module
private

Module deallocation and/or disassociation procedures.


◆ s_initialize_mpi_proxy_module()

subroutine __m_mpi_proxy_fpp_f90__::s_initialize_mpi_proxy_module
private

The computation of parameters, the allocation of memory, the association of pointers, and the execution of any other procedures necessary to set up the module.


◆ s_mpi_bcast_user_inputs()

subroutine __m_mpi_proxy_fpp_f90__::s_mpi_bcast_user_inputs
private

Since only the processor with rank 0 reads and verifies the consistency of the user inputs, these inputs are initially unavailable to the other processors. The purpose of this subroutine is therefore to distribute the user inputs to the remaining processors in the communicator.

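Below is a minimal, self-contained sketch of this broadcast pattern: rank 0 holds the values and every rank calls MPI_BCAST so that the inputs arrive on all processors. The variable names (m_glb, num_fluids, dt) are illustrative placeholders, not MFC's actual input list.

program bcast_inputs_sketch
    use mpi
    implicit none

    integer :: ierr, rank
    integer :: m_glb, num_fluids      ! illustrative integer inputs
    real(kind(0d0)) :: dt             ! illustrative real input

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

    if (rank == 0) then
        ! Only rank 0 would have read and verified these from the input file
        m_glb = 199; num_fluids = 2; dt = 1d-6
    end if

    ! Every rank participates in the broadcast; rank 0 sends, the rest receive
    call MPI_BCAST(m_glb, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
    call MPI_BCAST(num_fluids, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
    call MPI_BCAST(dt, 1, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

    call MPI_FINALIZE(ierr)
end program bcast_inputs_sketch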

◆ s_mpi_decompose_computational_domain()

subroutine __m_mpi_proxy_fpp_f90__::s_mpi_decompose_computational_domain
private

The purpose of this procedure is to optimally decompose the computational domain among the available processors. This is performed by attempting to award each processor, in each of the coordinate directions, approximately the same number of cells, and then recomputing the affected global parameters.

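The sketch below illustrates the "approximately the same number of cells" idea in a single coordinate direction, under simplifying assumptions: each rank receives floor(m_glb/num_procs) cells and the first mod(m_glb, num_procs) ranks take one extra. The names m_glb and num_procs are illustrative; the actual procedure balances several directions at once and then recomputes the affected global parameters.

program decompose_sketch
    implicit none

    integer :: m_glb, num_procs, rank
    integer :: m_loc, offset, rem

    m_glb = 1000      ! total cells in this coordinate direction (illustrative)
    num_procs = 7     ! ranks along this direction (illustrative)

    rem = mod(m_glb, num_procs)

    do rank = 0, num_procs - 1
        ! Base share plus one extra cell for the first 'rem' ranks
        m_loc = m_glb/num_procs
        if (rank < rem) m_loc = m_loc + 1

        ! Global index of this rank's first cell
        offset = rank*(m_glb/num_procs) + min(rank, rem)

        print '(a,i0,a,i0,a,i0)', 'rank ', rank, ': cells ', offset, &
            ' to ', offset + m_loc - 1
    end do
end program decompose_sketch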

◆ s_mpi_sendrecv_grid_variables_buffers()

subroutine __m_mpi_proxy_fpp_f90__::s_mpi_sendrecv_grid_variables_buffers ( integer, intent(in) mpi_dir,
integer, intent(in) pbc_loc )
private

The goal of this procedure is to populate the buffers of the grid variables by communicating with the neighboring processors. Note that only the buffers of the cell-width distributions are handled in such a way. This is because the buffers of cell-boundary locations may be calculated directly from those of the cell-width distributions.

Parameters
    mpi_dir    MPI communication coordinate direction
    pbc_loc    Processor boundary condition (PBC) location
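
A minimal sketch of the underlying halo exchange for the cell-width distribution in one direction follows, assuming periodic neighbors and an illustrative buffer size; the handling of mpi_dir, pbc_loc, and non-periodic boundaries in the actual routine is not reproduced. Once the widths are in place, the cell-boundary locations in the buffer region can be recomputed locally, which is why only the widths need to be communicated.

program grid_halo_sketch
    use mpi
    implicit none

    integer, parameter :: buff_size = 2, m = 10
    real(kind(0d0)) :: dx(-buff_size:m + buff_size)
    integer :: ierr, rank, num_procs, left, right

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)

    ! Periodic neighbors in a 1D arrangement of ranks
    left  = mod(rank - 1 + num_procs, num_procs)
    right = mod(rank + 1, num_procs)

    dx = 0d0
    dx(0:m) = 1d0/real(m + 1, kind(0d0))   ! interior cell widths

    ! Send this rank's last interior widths to the right neighbor while
    ! receiving the left neighbor's last widths into the left buffer region
    ! (the mirror exchange toward the left neighbor is analogous and omitted)
    call MPI_SENDRECV(dx(m - buff_size + 1), buff_size, MPI_DOUBLE_PRECISION, right, 0, &
                      dx(-buff_size), buff_size, MPI_DOUBLE_PRECISION, left, 0, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)

    call MPI_FINALIZE(ierr)
end program grid_halo_sketch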

◆ s_mpi_sendrecv_ib_buffers()

subroutine __m_mpi_proxy_fpp_f90__::s_mpi_sendrecv_ib_buffers ( type(integer_field), intent(inout) ib_markers,
integer, intent(in) gp_layers )
private

The goal of this procedure is to populate the buffers of the immersed boundary (IB) markers by communicating with the neighboring processors.

Parameters
    ib_markers    Immersed boundary (IB) markers
    gp_layers     Number of ghost point layers to communicate

◆ s_mpi_sendrecv_variables_buffers()

subroutine __m_mpi_proxy_fpp_f90__::s_mpi_sendrecv_variables_buffers ( type(scalar_field), dimension(sys_size), intent(inout) q_cons_vf,
real(kind(0d0)), dimension(startx:, starty:, startz:, 1:, 1:), intent(inout) pb,
real(kind(0d0)), dimension(startx:, starty:, startz:, 1:, 1:), intent(inout) mv,
integer, intent(in) mpi_dir,
integer, intent(in) pbc_loc )
private

The goal of this procedure is to populate the buffers of the cell-average conservative variables by communicating with the neighboring processors.

Parameters
    q_cons_vf    Cell-average conservative variables
    mpi_dir      MPI communication coordinate direction
    pbc_loc      Processor boundary condition (PBC) location
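
The sketch below shows a hedged version of the pack, exchange, and unpack pattern for several cell-average fields in one direction: the boundary cells of every field are copied into a single contiguous send buffer, exchanged with MPI_SENDRECV, and unpacked into the buffer region on the receiving side. sys_size, buff_size, and the periodic neighbor ranks are illustrative placeholders; pb, mv, and the mpi_dir/pbc_loc logic of the actual routine are not reproduced.

program pack_exchange_sketch
    use mpi
    implicit none

    integer, parameter :: sys_size = 4, buff_size = 2, m = 10
    real(kind(0d0)) :: q(-buff_size:m + buff_size, sys_size)
    real(kind(0d0)) :: q_send(buff_size*sys_size), q_recv(buff_size*sys_size)
    integer :: ierr, rank, num_procs, left, right, i, j, r

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)

    ! Periodic neighbors in a 1D arrangement of ranks
    left  = mod(rank - 1 + num_procs, num_procs)
    right = mod(rank + 1, num_procs)

    q = real(rank, kind(0d0))   ! placeholder field data

    ! Pack the last buff_size interior cells of every field into one buffer
    r = 0
    do i = 1, sys_size
        do j = m - buff_size + 1, m
            r = r + 1
            q_send(r) = q(j, i)
        end do
    end do

    ! Exchange with the neighbors: send to the right, receive from the left
    call MPI_SENDRECV(q_send, buff_size*sys_size, MPI_DOUBLE_PRECISION, right, 0, &
                      q_recv, buff_size*sys_size, MPI_DOUBLE_PRECISION, left, 0, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)

    ! Unpack the received data into the left buffer region of each field
    r = 0
    do i = 1, sys_size
        do j = -buff_size, -1
            r = r + 1
            q(j, i) = q_recv(r)
        end do
    end do

    call MPI_FINALIZE(ierr)
end program pack_exchange_sketch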