MFC: Pre-Process
High-fidelity multiphase flow simulation
The module serves as a proxy to the parameters and subroutines available in the MPI implementation's MPI module. Specifically, the purpose of the proxy is to harness basic MPI commands into more complicated procedures so as to accomplish the communication goals of the simulation.
Functions/Subroutines | |
| impure subroutine | s_initialize_mpi_common_module |
| The computation of parameters, the allocation of memory, the association of pointers and/or the execution of any other procedures that are necessary to set up the module. | |
| impure subroutine | s_mpi_initialize |
| The subroutine initializes the MPI execution environment and queries both the number of processors which will be available for the job and the local processor rank. | |
| impure subroutine | s_initialize_mpi_data (q_cons_vf, ib_markers, levelset, levelset_norm, beta) |
| subroutine | s_initialize_mpi_data_ds (q_cons_vf) |
| impure subroutine | s_mpi_gather_data (my_vector, counts, gathered_vector, root) |
| impure subroutine | mpi_bcast_time_step_values (proc_time, time_avg) |
| impure subroutine | s_prohibit_abort (condition, message) |
| impure subroutine | s_mpi_reduce_stability_criteria_extrema (icfl_max_loc, vcfl_max_loc, rc_min_loc, icfl_max_glb, vcfl_max_glb, rc_min_glb) |
| The goal of this subroutine is to determine the global extrema of the stability criteria in the computational domain. This is performed by sifting through the local extrema of each stability criterion. Note that each of the local extrema is from a single process, within its assigned section of the computational domain. Finally, note that the global extrema values are only kept on the rank 0 processor. | |
| impure subroutine | s_mpi_allreduce_sum (var_loc, var_glb) |
| The following subroutine takes the input local variable from all processors and reduces it to the sum of all values. The reduced result is returned to all processors in the global output variable. | |
| impure subroutine | s_mpi_allreduce_integer_sum (var_loc, var_glb) |
| The following subroutine takes the input local variable from all processors and reduces it to the sum of all values. The reduced result is returned to all processors in the global output variable. | |
| impure subroutine | s_mpi_allreduce_min (var_loc, var_glb) |
| The following subroutine takes the input local variable from all processors and reduces it to the minimum of all values. The reduced result is returned to all processors in the global output variable. | |
| impure subroutine | s_mpi_allreduce_max (var_loc, var_glb) |
| The following subroutine takes the input local variable from all processors and reduces it to the maximum of all values. The reduced result is returned to all processors in the global output variable. | |
| impure subroutine | s_mpi_reduce_min (var_loc) |
| The following subroutine takes the input variable and determines its minimum value over the entire computational domain. The result is stored back into the input variable. | |
| impure subroutine | s_mpi_reduce_maxloc (var_loc) |
| The following subroutine takes the first element of the two-element input variable and determines its maximum value over the entire computational domain. The result is stored back into the first element of the variable, while the rank of the processor in charge of the subdomain containing the maximum is stored into the second element. | |
| impure subroutine | s_mpi_abort (prnt, code) |
| The subroutine terminates the MPI execution environment. | |
| impure subroutine | s_mpi_barrier |
| Halts all processes until all have reached the barrier. | |
| impure subroutine | s_mpi_finalize |
| The subroutine finalizes the MPI execution environment. | |
| subroutine | s_mpi_sendrecv_variables_buffers (q_comm, mpi_dir, pbc_loc, nvar, pb_in, mv_in) |
| The goal of this procedure is to populate the buffers of the cell-average conservative variables by communicating with the neighboring processors. | |
| subroutine | s_mpi_decompose_computational_domain |
| The purpose of this procedure is to optimally decompose the computational domain among the available processors. This is performed by attempting to award each processor, in each of the coordinate directions, approximately the same number of cells, and then recomputing the affected global parameters. | |
| subroutine | s_mpi_sendrecv_grid_variables_buffers (mpi_dir, pbc_loc) |
| The goal of this procedure is to populate the buffers of the grid variables by communicating with the neighboring processors. Note that only the buffers of the cell-width distributions are handled in such a way. This is because the buffers of cell-boundary locations may be calculated directly from those of the cell-width distributions. | |
| impure subroutine | s_finalize_mpi_common_module |
| Module deallocation and/or disassociation procedures. | |
Variables | |
| integer, private | v_size |
| real(wp), dimension(:), allocatable, private | buff_send |
| This variable is utilized to pack and send the buffer of the cell-average primitive variables, for a single computational domain boundary at a time, to the relevant neighboring processor. | |
| real(wp), dimension(:), allocatable, private | buff_recv |
| buff_recv is utilized to receive and unpack the buffer of the cell-average primitive variables, for a single computational domain boundary at a time, from the relevant neighboring processor. | |
| integer(kind=8) | halo_size |
The module serves as a proxy to the parameters and subroutines available in the MPI implementation's MPI module. Specifically, the purpose of the proxy is to harness basic MPI commands into more complicated procedures so as to accomplish the communication goals of the simulation.
| impure subroutine m_mpi_common::mpi_bcast_time_step_values | ( | real(wp), dimension(0:num_procs - 1), intent(inout) | proc_time, |
| real(wp), intent(inout) | time_avg ) |
| impure subroutine m_mpi_common::s_finalize_mpi_common_module |
Module deallocation and/or disassociation procedures.
| impure subroutine m_mpi_common::s_initialize_mpi_common_module |
The computation of parameters, the allocation of memory, the association of pointers and/or the execution of any other procedures that are necessary to set up the module.
| impure subroutine m_mpi_common::s_initialize_mpi_data | ( | type(scalar_field), dimension(sys_size), intent(in) | q_cons_vf, |
| type(integer_field), intent(in), optional | ib_markers, | ||
| type(levelset_field), intent(in), optional | levelset, | ||
| type(levelset_norm_field), intent(in), optional | levelset_norm, | ||
| type(scalar_field), intent(in), optional | beta ) |
| subroutine m_mpi_common::s_initialize_mpi_data_ds | ( | type(scalar_field), dimension(sys_size), intent(in) | q_cons_vf | ) |
| impure subroutine m_mpi_common::s_mpi_abort | ( | character(len=*), intent(in), optional | prnt, |
| integer, intent(in), optional | code ) |
The subroutine terminates the MPI execution environment.
| prnt | error message to be printed |
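A hedged sketch of the usual print-then-abort pattern behind a routine of this kind (illustrative only; MFC's handling of the optional arguments may differ):

```fortran
subroutine abort_sketch(prnt, code)
    use mpi
    implicit none
    character(len=*), intent(in), optional :: prnt
    integer, intent(in), optional :: code
    integer :: ierr, errcode

    ! Print the error message, if one was supplied
    if (present(prnt)) print '(A)', trim(prnt)

    ! Use the supplied error code, or a generic nonzero default
    errcode = 1
    if (present(code)) errcode = code

    ! Terminate the MPI execution environment across all ranks
    call MPI_ABORT(MPI_COMM_WORLD, errcode, ierr)
end subroutine abort_sketch
```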
| impure subroutine m_mpi_common::s_mpi_allreduce_integer_sum | ( | integer, intent(in) | var_loc, |
| integer, intent(out) | var_glb ) |
The following subroutine takes the input local variable from all processors and reduces it to the sum of all values. The reduced result is returned to all processors in the global output variable.
| var_loc | Some variable containing the local value which should be reduced amongst all the processors in the communicator. |
| var_glb | The globally reduced value |
| impure subroutine m_mpi_common::s_mpi_allreduce_max | ( | real(wp), intent(in) | var_loc, |
| real(wp), intent(out) | var_glb ) |
The following subroutine takes the input local variable from all processors and reduces it to the maximum of all values. The reduced result is returned to all processors in the global output variable.
| var_loc | Some variable containing the local value which should be reduced amongst all the processors in the communicator. |
| var_glb | The globally reduced value |
| impure subroutine m_mpi_common::s_mpi_allreduce_min | ( | real(wp), intent(in) | var_loc, |
| real(wp), intent(out) | var_glb ) |
The following subroutine takes the input local variable from all processors and reduces it to the minimum of all values. The reduced result is returned to all processors in the global output variable.
| var_loc | Some variable containing the local value which should be reduced amongst all the processors in the communicator. |
| var_glb | The globally reduced value |
| impure subroutine m_mpi_common::s_mpi_allreduce_sum | ( | real(wp), intent(in) | var_loc, |
| real(wp), intent(out) | var_glb ) |
The following subroutine takes the input local variable from all processors and reduces it to the sum of all values. The reduced result is returned to all processors in the global output variable.
| var_loc | Some variable containing the local value which should be reduced amongst all the processors in the communicator. |
| var_glb | The globally reduced value |
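For reference, a minimal sketch of the MPI_ALLREDUCE pattern that the s_mpi_allreduce_sum/min/max family wraps (illustrative raw MPI, not the MFC source; double precision stands in for real(wp)):

```fortran
program allreduce_sketch
    use mpi
    implicit none
    integer :: ierr, rank
    double precision :: var_loc, sum_glb, min_glb, max_glb

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

    ! Each rank contributes its local value
    var_loc = dble(rank + 1)

    ! The reduced result is returned to every rank in the output variable
    call MPI_ALLREDUCE(var_loc, sum_glb, 1, MPI_DOUBLE_PRECISION, MPI_SUM, MPI_COMM_WORLD, ierr)
    call MPI_ALLREDUCE(var_loc, min_glb, 1, MPI_DOUBLE_PRECISION, MPI_MIN, MPI_COMM_WORLD, ierr)
    call MPI_ALLREDUCE(var_loc, max_glb, 1, MPI_DOUBLE_PRECISION, MPI_MAX, MPI_COMM_WORLD, ierr)

    if (rank == 0) print '(3(A,F6.1))', 'sum=', sum_glb, ' min=', min_glb, ' max=', max_glb

    call MPI_FINALIZE(ierr)
end program allreduce_sketch
```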
| impure subroutine m_mpi_common::s_mpi_barrier |
Halts all processes until all have reached the barrier.
| subroutine m_mpi_common::s_mpi_decompose_computational_domain |
The purpose of this procedure is to optimally decompose the computational domain among the available processors. This is performed by attempting to award each processor, in each of the coordinate directions, approximately the same number of cells, and then recomputing the affected global parameters.
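A worked sketch of the even-split arithmetic this description implies, for a single coordinate direction (hypothetical variable names and cell count; the actual decomposition handles multiple directions and recomputes the affected global parameters):

```fortran
program decomp_sketch
    use mpi
    implicit none
    integer :: ierr, num_procs, rank
    integer :: m_glb, m_loc, rem

    call MPI_INIT(ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

    m_glb = 1000   ! global number of cells in this direction (example value)

    ! Award each processor approximately the same number of cells:
    ! every rank gets the base share, and the first 'rem' ranks take one extra cell
    m_loc = m_glb/num_procs
    rem   = mod(m_glb, num_procs)
    if (rank < rem) m_loc = m_loc + 1

    print '(A,I0,A,I0,A)', 'Rank ', rank, ' owns ', m_loc, ' cells'

    call MPI_FINALIZE(ierr)
end program decomp_sketch
```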
| impure subroutine m_mpi_common::s_mpi_finalize |
The subroutine finalizes the MPI execution environment.
| impure subroutine m_mpi_common::s_mpi_gather_data | ( | real(wp), dimension(counts), intent(in) | my_vector, |
| integer, intent(in) | counts, | ||
| real(wp), dimension(:), intent(out), allocatable | gathered_vector, | ||
| integer, intent(in) | root ) |
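Because each rank may contribute a different number of elements, a gather of this shape is commonly built from MPI_GATHER (to collect the per-rank counts) followed by MPI_GATHERV. The sketch below is illustrative and not the MFC implementation:

```fortran
program gather_sketch
    use mpi
    implicit none
    integer :: ierr, rank, num_procs, i, counts, root
    integer, allocatable :: recvcounts(:), displs(:)
    double precision, allocatable :: my_vector(:), gathered_vector(:)

    call MPI_INIT(ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    root = 0

    ! Each rank holds a locally sized vector
    counts = rank + 1
    allocate (my_vector(counts)); my_vector = dble(rank)

    ! Root collects every rank's count, then builds displacements
    allocate (recvcounts(num_procs), displs(num_procs))
    call MPI_GATHER(counts, 1, MPI_INTEGER, recvcounts, 1, MPI_INTEGER, root, MPI_COMM_WORLD, ierr)
    if (rank == root) then
        displs(1) = 0
        do i = 2, num_procs
            displs(i) = displs(i - 1) + recvcounts(i - 1)
        end do
        allocate (gathered_vector(sum(recvcounts)))
    else
        allocate (gathered_vector(1)) ! dummy buffer on non-root ranks
    end if

    ! Variable-count gather onto the root rank
    call MPI_GATHERV(my_vector, counts, MPI_DOUBLE_PRECISION, gathered_vector, &
                     recvcounts, displs, MPI_DOUBLE_PRECISION, root, MPI_COMM_WORLD, ierr)

    if (rank == root) print '(A,I0,A)', 'Gathered ', size(gathered_vector), ' elements'

    call MPI_FINALIZE(ierr)
end program gather_sketch
```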
| impure subroutine m_mpi_common::s_mpi_initialize |
The subroutine initializes the MPI execution environment and queries both the number of processors which will be available for the job and the local processor rank.
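For orientation, a minimal, self-contained sketch of the raw MPI calls an initialization wrapper of this kind issues (illustrative only, not the MFC source; num_procs and proc_rank are hypothetical local names):

```fortran
program init_sketch
    use mpi
    implicit none
    integer :: ierr, num_procs, proc_rank

    ! Start the MPI execution environment
    call MPI_INIT(ierr)
    ! Query the number of processors available for the job ...
    call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
    ! ... and the rank of the local processor
    call MPI_COMM_RANK(MPI_COMM_WORLD, proc_rank, ierr)

    print '(A,I0,A,I0)', 'Rank ', proc_rank, ' of ', num_procs

    call MPI_FINALIZE(ierr)
end program init_sketch
```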
| impure subroutine m_mpi_common::s_mpi_reduce_maxloc | ( | real(wp), dimension(2), intent(inout) | var_loc | ) |
The following subroutine takes the first element of the two-element input variable and determines its maximum value over the entire computational domain. The result is stored back into the first element of the variable, while the rank of the processor in charge of the subdomain containing the maximum is stored into the second element.
| var_loc | On input, this variable holds the local value and processor rank, which are to be reduced among all the processors in the communicator. On output, this variable holds the maximum value, reduced amongst all of the local values, and the process rank to which the value belongs. |
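The value-plus-rank reduction described above matches MPI's MAXLOC reduction; a hedged sketch (illustrative only) using MPI_2DOUBLE_PRECISION for the packed value/rank pair:

```fortran
program maxloc_sketch
    use mpi
    implicit none
    integer :: ierr, rank
    double precision :: var_loc(2)

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

    ! Element 1 holds the local value, element 2 holds the owning rank
    var_loc(1) = dble(mod(rank*7, 5))   ! some local value (example)
    var_loc(2) = dble(rank)

    ! In-place allreduce: on return, var_loc(1) is the global maximum and
    ! var_loc(2) is the rank of the processor whose subdomain contains it
    call MPI_ALLREDUCE(MPI_IN_PLACE, var_loc, 1, MPI_2DOUBLE_PRECISION, &
                       MPI_MAXLOC, MPI_COMM_WORLD, ierr)

    if (rank == 0) print '(A,F6.1,A,I0)', 'max=', var_loc(1), ' on rank ', int(var_loc(2))

    call MPI_FINALIZE(ierr)
end program maxloc_sketch
```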
| impure subroutine m_mpi_common::s_mpi_reduce_min | ( | real(wp), intent(inout) | var_loc | ) |
The following subroutine takes the input variable and determines its minimum value over the entire computational domain. The result is stored back into the input variable.
| var_loc | On input, holds the local value to be reduced among all the processors in the communicator. On output, the variable holds the minimum value, reduced amongst all of the local values. |
| impure subroutine m_mpi_common::s_mpi_reduce_stability_criteria_extrema | ( | real(wp), intent(in) | icfl_max_loc, |
| real(wp), intent(in) | vcfl_max_loc, | ||
| real(wp), intent(in) | rc_min_loc, | ||
| real(wp), intent(out) | icfl_max_glb, | ||
| real(wp), intent(out) | vcfl_max_glb, | ||
| real(wp), intent(out) | rc_min_glb ) |
The goal of this subroutine is to determine the global extrema of the stability criteria in the computational domain. This is performed by sifting through the local extrema of each stability criterion. Note that each of the local extrema is from a single process, within its assigned section of the computational domain. Finally, note that the global extrema values are only kept on the rank 0 processor.
| icfl_max_loc | Local maximum ICFL stability criterion |
| vcfl_max_loc | Local maximum VCFL stability criterion |
| rc_min_loc | Local minimum Rc stability criterion |
| icfl_max_glb | Global maximum ICFL stability criterion |
| vcfl_max_glb | Global maximum VCFL stability criterion |
| rc_min_glb | Global minimum Rc stability criterion |
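Since the global extrema are only kept on rank 0, the natural building block is MPI_REDUCE rather than MPI_ALLREDUCE. The sketch below is illustrative, uses example local values, and is not the MFC source:

```fortran
program extrema_sketch
    use mpi
    implicit none
    integer :: ierr, rank
    double precision :: icfl_max_loc, icfl_max_glb
    double precision :: rc_min_loc, rc_min_glb

    call MPI_INIT(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

    ! Local extrema from this rank's section of the domain (example values)
    icfl_max_loc = 0.1d0*dble(rank + 1)
    rc_min_loc   = 1.0d2/dble(rank + 1)

    ! Global extrema are kept on rank 0 only
    call MPI_REDUCE(icfl_max_loc, icfl_max_glb, 1, MPI_DOUBLE_PRECISION, &
                    MPI_MAX, 0, MPI_COMM_WORLD, ierr)
    call MPI_REDUCE(rc_min_loc, rc_min_glb, 1, MPI_DOUBLE_PRECISION, &
                    MPI_MIN, 0, MPI_COMM_WORLD, ierr)

    if (rank == 0) print '(2(A,ES10.3))', 'ICFL max: ', icfl_max_glb, '  Rc min: ', rc_min_glb

    call MPI_FINALIZE(ierr)
end program extrema_sketch
```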
| subroutine m_mpi_common::s_mpi_sendrecv_grid_variables_buffers | ( | integer, intent(in) | mpi_dir, |
| integer, intent(in) | pbc_loc ) |
The goal of this procedure is to populate the buffers of the grid variables by communicating with the neighboring processors. Note that only the buffers of the cell-width distributions are handled in such a way. This is because the buffers of cell-boundary locations may be calculated directly from those of the cell-width distributions.
| mpi_dir | MPI communication coordinate direction |
| pbc_loc | Processor boundary condition (PBC) location |
| subroutine m_mpi_common::s_mpi_sendrecv_variables_buffers | ( | type(scalar_field), dimension(1:), intent(inout) | q_comm, |
| integer, intent(in) | mpi_dir, | ||
| integer, intent(in) | pbc_loc, | ||
| integer, intent(in) | nvar, | ||
| real(stp), dimension(idwbuff(1)%beg:, idwbuff(2)%beg:, idwbuff(3)%beg:, 1:, 1:), intent(inout), optional | pb_in, | ||
| real(stp), dimension(idwbuff(1)%beg:, idwbuff(2)%beg:, idwbuff(3)%beg:, 1:, 1:), intent(inout), optional | mv_in ) |
The goal of this procedure is to populate the buffers of the cell-average conservative variables by communicating with the neighboring processors.
| q_comm | Cell-average conservative variables |
| mpi_dir | MPI communication coordinate direction |
| pbc_loc | Processor boundary condition (PBC) location |
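To illustrate the pack/exchange/unpack idea behind this routine (and the buff_send/buff_recv work arrays listed under Variables), here is a hedged one-dimensional halo exchange using MPI_SENDRECV; the neighbor logic, buffer sizing, and periodic-boundary handling in MFC are considerably more involved:

```fortran
program halo_sketch
    use mpi
    implicit none
    integer, parameter :: n = 8, buff = 2      ! interior cells and halo width (example)
    integer :: ierr, rank, num_procs, left, right
    double precision :: q(1 - buff:n + buff)   ! one field with halo cells
    double precision :: buff_send(buff), buff_recv(buff)

    call MPI_INIT(ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

    ! Neighbors for one coordinate direction; MPI_PROC_NULL marks a physical boundary
    left  = merge(rank - 1, MPI_PROC_NULL, rank > 0)
    right = merge(rank + 1, MPI_PROC_NULL, rank < num_procs - 1)

    q = dble(rank)

    ! Pack the rightmost interior cells, send them to the right neighbor, and
    ! receive the left neighbor's rightmost cells into the left halo region
    buff_send = q(n - buff + 1:n)
    call MPI_SENDRECV(buff_send, buff, MPI_DOUBLE_PRECISION, right, 0, &
                      buff_recv, buff, MPI_DOUBLE_PRECISION, left,  0, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
    if (left /= MPI_PROC_NULL) q(1 - buff:0) = buff_recv

    ! Pack the leftmost interior cells, send them to the left neighbor, and
    ! receive the right neighbor's leftmost cells into the right halo region
    buff_send = q(1:buff)
    call MPI_SENDRECV(buff_send, buff, MPI_DOUBLE_PRECISION, left,  1, &
                      buff_recv, buff, MPI_DOUBLE_PRECISION, right, 1, &
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
    if (right /= MPI_PROC_NULL) q(n + 1:n + buff) = buff_recv

    call MPI_FINALIZE(ierr)
end program halo_sketch
```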
| impure subroutine m_mpi_common::s_prohibit_abort | ( | character(len=*), intent(in) | condition, |
| character(len=*), intent(in) | message ) |
| real(wp), dimension(:), allocatable m_mpi_common::buff_recv |
private
buff_recv is utilized to receive and unpack the buffer of the cell-average primitive variables, for a single computational domain boundary at a time, from the relevant neighboring processor.
| real(wp), dimension(:), allocatable m_mpi_common::buff_send |
private
This variable is utilized to pack and send the buffer of the cell-average primitive variables, for a single computational domain boundary at a time, to the relevant neighboring processor.
| integer(kind=8) m_mpi_common::halo_size |
private