MFC
Exascale flow solver
m_mpi_proxy Module Reference

MPI halo exchange, domain decomposition, and buffer packing/unpacking for the simulation solver. More...

Functions/Subroutines

subroutine s_initialize_mpi_proxy_module ()
 Allocates immersed boundary communication buffers for MPI halo exchanges.
impure subroutine s_mpi_bcast_user_inputs ()
 Since only the processor with rank 0 reads and verifies the consistency of the user inputs, they are initially unavailable to the other processors. The purpose of this subroutine is therefore to distribute the user inputs to the remaining processors in the communicator.
impure subroutine s_mpi_send_random_number (phi_rn, num_freq)
 Broadcasts random phase numbers from rank 0 to all MPI processes.
subroutine s_finalize_mpi_proxy_module ()
 Deallocates immersed boundary MPI communication buffers.

Variables

integer, dimension(:), allocatable, private ib_buff_send
 This variable is utilized to pack and send the buffer of immersed boundary markers, for a single computational domain boundary at a time, to the relevant neighboring processor.
integer, dimension(:), allocatable, private ib_buff_recv
 This variable is utilized to receive and unpack the buffer of immersed boundary markers, for a single computational domain boundary at a time, from the relevant neighboring processor.
integer i_halo_size

Detailed Description

MPI halo exchange, domain decomposition, and buffer packing/unpacking for the simulation solver.
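The general pattern the module implements for one boundary is: pack the markers into a flat send buffer, exchange with the neighboring rank, then unpack the received buffer into the halo cells. The sketch below illustrates that pattern only; the subroutine, argument names, and packing layout (s_exchange_one_boundary, neighbor_rank, etc.) are hypothetical stand-ins, not the module's actual internals.

    ! Illustrative halo exchange for one domain boundary (hypothetical
    ! names; the module's actual packing logic differs in detail).
    subroutine s_exchange_one_boundary(ib_markers, buff_send, buff_recv, &
                                       halo_size, neighbor_rank)
        use mpi
        integer, intent(in)    :: ib_markers(:)            ! markers to pack
        integer, intent(inout) :: buff_send(:), buff_recv(:)
        integer, intent(in)    :: halo_size, neighbor_rank
        integer :: ierr

        buff_send(1:halo_size) = ib_markers(1:halo_size)   ! pack
        call MPI_Sendrecv(buff_send, halo_size, MPI_INTEGER, neighbor_rank, 0, &
                          buff_recv, halo_size, MPI_INTEGER, neighbor_rank, 0, &
                          MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
        ! ... unpack buff_recv into the halo cells ...
    end subroutine s_exchange_one_boundary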

Function/Subroutine Documentation

◆ s_finalize_mpi_proxy_module()

subroutine m_mpi_proxy::s_finalize_mpi_proxy_module

Deallocates immersed boundary MPI communication buffers.

Definition at line 1081 of file m_mpi_proxy.fpp.f90.
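A minimal sketch of the cleanup this routine performs, assuming guarded deallocation of the two module buffers (the guards are an assumption, e.g. for runs without immersed boundaries):

    ! Plausible form of the cleanup (guards are assumed, not confirmed):
    if (allocated(ib_buff_send)) deallocate (ib_buff_send)
    if (allocated(ib_buff_recv)) deallocate (ib_buff_recv)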

◆ s_initialize_mpi_proxy_module()

subroutine m_mpi_proxy::s_initialize_mpi_proxy_module

Allocates immersed boundary communication buffers for MPI halo exchanges.

Definition at line 353 of file m_mpi_proxy.fpp.f90.
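A minimal sketch of the allocation, assuming i_halo_size holds the largest buffer extent needed for any single boundary (the bounds and sizing expression here are illustrative):

    ! Illustrative fragment; the actual value of i_halo_size is computed
    ! from the grid extents before this point.
    allocate (ib_buff_send(0:i_halo_size - 1))
    allocate (ib_buff_recv(0:i_halo_size - 1))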

◆ s_mpi_bcast_user_inputs()

impure subroutine m_mpi_proxy::s_mpi_bcast_user_inputs

Since only the processor with rank 0 reads and verifies the consistency of the user inputs, they are initially unavailable to the other processors. The purpose of this subroutine is therefore to distribute the user inputs to the remaining processors in the communicator.

Definition at line 428 of file m_mpi_proxy.fpp.f90.
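The distribution is a sequence of broadcasts rooted at rank 0, one per input variable. A minimal sketch, assuming 'use mpi' and an integer ierr are in scope; the variable names (t_step_stop, dt) are illustrative stand-ins for the solver's many user inputs:

    ! Illustrative fragment; the real routine broadcasts every
    ! user-input variable, not just these two.
    call MPI_Bcast(t_step_stop, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
    call MPI_Bcast(dt, 1, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)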

◆ s_mpi_send_random_number()

impure subroutine m_mpi_proxy::s_mpi_send_random_number (real(wp), dimension(1:num_freq), intent(inout) phi_rn, integer, intent(in) num_freq)

Broadcasts random phase numbers from rank 0 to all MPI processes.

Definition at line 1069 of file m_mpi_proxy.fpp.f90.
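Given the signature above, a plausible body is a single broadcast rooted at rank 0. This is a sketch under the assumption that the working-precision kind wp maps to MPI_DOUBLE_PRECISION; MFC's actual MPI type constant may differ:

    impure subroutine s_mpi_send_random_number(phi_rn, num_freq)
        use mpi
        integer, intent(in) :: num_freq
        real(wp), dimension(1:num_freq), intent(inout) :: phi_rn
        integer :: ierr
        ! Rank 0's phase values overwrite every other rank's copy.
        call MPI_Bcast(phi_rn, num_freq, MPI_DOUBLE_PRECISION, 0, &
                       MPI_COMM_WORLD, ierr)
    end subroutine s_mpi_send_random_number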

Variable Documentation

◆ i_halo_size

integer m_mpi_proxy::i_halo_size

Definition at line 337 of file m_mpi_proxy.fpp.f90.

◆ ib_buff_recv

integer, dimension(:), allocatable, private m_mpi_proxy::ib_buff_recv

This variable is utilized to receive and unpack the buffer of immersed boundary markers, for a single computational domain boundary at a time, from the relevant neighboring processor.

Definition at line 332 of file m_mpi_proxy.fpp.f90.

◆ ib_buff_send

integer, dimension(:), allocatable, private m_mpi_proxy::ib_buff_send

This variable is utilized to pack and send the buffer of immersed boundary markers, for a single computational domain boundary at a time, to the relevant neighboring processor.

Definition at line 327 of file m_mpi_proxy.fpp.f90.
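Packing flattens a boundary slab of the marker field into this 1D buffer before sending. The loop below is only a sketch of that idea: the array ib_markers, the extent names n, p, and gp_layers, and the traversal order are all hypothetical, not the module's actual layout.

    ! Illustrative packing of a boundary slab into the flat send buffer
    ! (names and bounds are hypothetical):
    count = 0
    do l = 0, p
        do k = 0, n
            do j = 1, gp_layers
                count = count + 1
                ib_buff_send(count) = ib_markers(j, k, l)
            end do
        end do
    end do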