Contains module m_mpi_proxy.
Modules

module m_mpi_proxy
    MPI halo exchange, domain decomposition, and buffer packing/unpacking for the simulation solver.

Functions/Subroutines
subroutine m_mpi_proxy::s_initialize_mpi_proxy_module ()
    Allocates immersed boundary communication buffers for MPI halo exchanges.
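
A hedged sketch of this kind of buffer setup is given below: it sizes flat send/receive buffers from an assumed halo width and assumed local grid extents. The names and the sizing formula are illustrative only and do not reproduce the module's actual computation.

    ! Illustrative sketch only: size flat send/receive buffers so that
    ! halo_width layers of the widest domain face fit. All values are assumptions.
    program allocate_ib_buffers_sketch
        implicit none

        integer, dimension(:), allocatable :: ib_buff_send, ib_buff_recv
        integer :: i_halo_size
        integer, parameter :: halo_width = 3              ! assumed ghost-layer count
        integer, parameter :: nx = 64, ny = 64, nz = 64   ! assumed local cell counts

        ! The buffer must hold halo_width layers of the largest face.
        i_halo_size = halo_width*max(ny*nz, nx*nz, nx*ny)

        allocate (ib_buff_send(0:i_halo_size - 1))
        allocate (ib_buff_recv(0:i_halo_size - 1))

        ! ... halo exchanges would pack into ib_buff_send and unpack from ib_buff_recv ...

        deallocate (ib_buff_send, ib_buff_recv)
    end program allocate_ib_buffers_sketch
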
impure subroutine m_mpi_proxy::s_mpi_bcast_user_inputs ()
    Only the processor with rank 0 reads and verifies the user inputs, so they are initially unavailable to the other processors. This subroutine distributes the user inputs to the remaining processors in the communicator.
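
As a rough illustration of this broadcast pattern (not the actual MFC routine), the sketch below sends two made-up input values from rank 0 to every rank in the communicator; num_patches and cfl_target are placeholder names.

    ! Illustrative sketch only: rank 0 holds the values read from the input
    ! file, and every other rank receives a copy via MPI_BCAST.
    program bcast_inputs_sketch
        use mpi
        implicit none

        integer :: ierr, rank
        integer :: num_patches            ! placeholder integer input
        real(kind(0d0)) :: cfl_target     ! placeholder real input

        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

        if (rank == 0) then
            num_patches = 2               ! stands in for values read on rank 0
            cfl_target = 0.5d0
        end if

        ! Root rank 0 sends; all other ranks receive.
        call MPI_BCAST(num_patches, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
        call MPI_BCAST(cfl_target, 1, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

        call MPI_FINALIZE(ierr)
    end program bcast_inputs_sketch
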
subroutine m_mpi_proxy::s_mpi_sendrecv_ib_buffers (ib_markers, mpi_dir, pbc_loc)
    Packs, exchanges, and unpacks immersed boundary marker buffers between neighboring MPI ranks.
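
The general pack/exchange/unpack idea can be sketched as below with MPI_SENDRECV on a one-dimensional marker array and periodic neighbors; the array layout, neighbor arithmetic, and single-cell halo are assumptions for illustration, not the routine's actual logic.

    ! Illustrative sketch only: pack one interior cell of markers, exchange it
    ! with a neighboring rank via MPI_SENDRECV, and unpack into the halo cell.
    program sendrecv_markers_sketch
        use mpi
        implicit none

        integer, parameter :: n = 8            ! interior cells per rank (assumed)
        integer :: markers(0:n + 1)            ! interior cells plus one halo cell per side
        integer :: send_buf(1), recv_buf(1)
        integer :: ierr, rank, nprocs, left, right

        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
        call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

        left = mod(rank - 1 + nprocs, nprocs)  ! periodic neighbors (assumed)
        right = mod(rank + 1, nprocs)

        markers = rank                         ! stand-in marker data

        ! Pack the last interior cell, send it right, receive the left halo.
        send_buf(1) = markers(n)
        call MPI_SENDRECV(send_buf, 1, MPI_INTEGER, right, 0, &
                          recv_buf, 1, MPI_INTEGER, left, 0, &
                          MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
        markers(0) = recv_buf(1)               ! unpack into the halo cell

        call MPI_FINALIZE(ierr)
    end program sendrecv_markers_sketch

Using MPI_SENDRECV for such an exchange avoids the deadlock that paired blocking MPI_SEND/MPI_RECV calls could cause when every rank exchanges with its neighbors at the same time.
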
impure subroutine m_mpi_proxy::s_mpi_send_random_number (phi_rn, num_freq)
    Broadcasts random phase numbers from rank 0 to all MPI processes.
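
As another assumed sketch only: broadcasting an array of num_freq phases generated on rank 0 could look like the following; phi_rn and num_freq are taken from the argument list above, and everything else is made up for the example.

    ! Illustrative sketch only: rank 0 generates num_freq random phases and
    ! broadcasts them so that every rank works with the same sequence.
    program bcast_phases_sketch
        use mpi
        implicit none

        integer, parameter :: num_freq = 16
        real(kind(0d0)) :: phi_rn(num_freq)
        integer :: ierr, rank

        call MPI_INIT(ierr)
        call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

        if (rank == 0) call random_number(phi_rn)

        call MPI_BCAST(phi_rn, num_freq, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

        call MPI_FINALIZE(ierr)
    end program bcast_phases_sketch
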
subroutine m_mpi_proxy::s_finalize_mpi_proxy_module ()
    Deallocates the immersed boundary MPI communication buffers.

Variables
integer, dimension(:), allocatable, private m_mpi_proxy::ib_buff_send
    Used to pack and send the buffer of immersed boundary markers, for a single computational domain boundary at a time, to the relevant neighboring processor.
integer, dimension(:), allocatable, private m_mpi_proxy::ib_buff_recv
    Used to receive and unpack the buffer of immersed boundary markers, for a single computational domain boundary at a time, from the relevant neighboring processor.
integer m_mpi_proxy::i_halo_size
Definition in file m_mpi_proxy.fpp.f90.