|
module | m_mpi_proxy |
|
subroutine | s_initialize_mpi_proxy_module |
| Computes parameters, allocates memory, associates pointers, and performs any other procedures necessary to set up the module.
|
|
subroutine | s_mpi_bcast_user_inputs () |
| Since only the processor with rank 0 reads the user inputs and verifies their consistency, they are initially unavailable to the other processors. This subroutine therefore broadcasts the user inputs to the remaining processors in the communicator.
|
|
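In MPI terms this is a set of broadcast operations rooted at rank 0. A minimal sketch of the semantics in Python (the toy communicator and the input names are illustrative, not MFC's API):

```python
import copy

def bcast(per_rank_values, root=0):
    """Toy stand-in for MPI_Bcast: after the call, every 'rank' holds
    a copy of the root's value. per_rank_values is a list indexed by rank."""
    root_value = per_rank_values[root]
    return [copy.deepcopy(root_value) for _ in per_rank_values]

# Rank 0 has read and verified the inputs; ranks 1..3 have not.
user_inputs = {"m": 199, "n": 99, "p": 49, "dt": 1.0e-6}
per_rank = [user_inputs, None, None, None]

per_rank = bcast(per_rank, root=0)
print(per_rank[3]["m"])  # → 199; every rank now sees the same inputs
```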
subroutine | s_mpi_decompose_computational_domain |
| The purpose of this procedure is to decompose the computational domain among the available processors as evenly as possible. In each coordinate direction, each processor is assigned approximately the same number of cells, after which the affected global parameters are recomputed.
|
|
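The per-direction split amounts to distributing N cells over P ranks so that the counts differ by at most one. A sketch of that rule in Python (the remainder-handling convention shown here is one common choice, not necessarily the exact rule used in the module):

```python
def split_cells(num_cells, num_procs):
    """Distribute num_cells cells over num_procs ranks as evenly as
    possible: every rank gets the base count, and the first `rem`
    ranks each get one extra cell."""
    base, rem = divmod(num_cells, num_procs)
    return [base + (1 if r < rem else 0) for r in range(num_procs)]

# e.g. 1000 cells in one direction over 7 ranks
counts = split_cells(1000, 7)
print(counts)  # → [143, 143, 143, 143, 143, 143, 142]
assert sum(counts) == 1000 and max(counts) - min(counts) <= 1
```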
subroutine | s_mpi_sendrecv_grid_variables_buffers (mpi_dir, pbc_loc) |
| The goal of this procedure is to populate the buffers of the grid variables by communicating with the neighboring processors. Note that only the buffers of the cell-width distributions are communicated, since the buffers of the cell-boundary locations can be computed directly from those of the cell-widths.
|
|
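The exchange is a standard halo (ghost-cell) swap along one face: each rank sends the cell widths nearest its boundary and receives the corresponding widths of its neighbor. A toy version in Python with plain copies in place of the MPI send/receive (the function and variable names are illustrative):

```python
def exchange_dx_buffers(dx_left, dx_right, buff_size):
    """Toy halo exchange between two neighboring 'ranks' in one
    coordinate direction. Each rank stores its interior cell widths;
    the exchange fills buff_size ghost values on the shared face.
    (In the real code this is an MPI send/receive; here we copy.)"""
    # the right rank's ghosts mirror the last interior widths of its
    # left neighbor, and vice versa
    ghosts_for_left = dx_right[:buff_size]
    ghosts_for_right = dx_left[-buff_size:]
    return ghosts_for_left, ghosts_for_right

dx_a = [0.1, 0.1, 0.2, 0.2]  # interior widths on rank A
dx_b = [0.3, 0.3, 0.4, 0.4]  # interior widths on rank B, A's right neighbor
a_right_ghosts, b_left_ghosts = exchange_dx_buffers(dx_a, dx_b, 2)
print(a_right_ghosts, b_left_ghosts)  # → [0.3, 0.3] [0.2, 0.2]
```

Once the ghost widths are in place, the ghost cell-boundary locations follow from running sums of the widths outward from the known face location, which is why only the widths need to travel over the network.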
subroutine | s_mpi_sendrecv_ib_buffers (ib_markers, gp_layers) |
| The goal of this procedure is to populate the buffers of the immersed-boundary markers by communicating with the neighboring processors.
|
|
subroutine | s_mpi_sendrecv_capilary_variables_buffers (c_divs_vf, mpi_dir, pbc_loc) |
| The goal of this procedure is to populate the buffers of the variables associated with the capillary (surface-tension) model by communicating with the neighboring processors.
|
subroutine | s_mpi_send_random_number (phi_rn, num_freq) |
| Distributes the array phi_rn of num_freq random numbers to the other processors, so that all ranks use the same values.
|
subroutine | s_finalize_mpi_proxy_module |
| Module deallocation and/or disassociation procedures.
|
|