MFC
Exascale flow solver
Code Architecture

This page explains how MFC's source code is organized, how data flows through the solver, and where to find things. Read this before diving into the source.

Three-Phase Pipeline

MFC runs as three separate executables that communicate via binary files on disk:

pre_process (grid + initial conditions) ──> simulation (time advance) ──> post_process (derived quantities + visualization)
Phase Entry Point What It Does
Pre-Process src/pre_process/p_main.f90 Generates the computational grid and initial conditions from patch definitions. Writes binary grid and state files.
Simulation src/simulation/p_main.fpp Reads the initial state and advances the governing equations in time. Periodically writes solution snapshots.
Post-Process src/post_process/p_main.fpp Reads snapshots, computes derived quantities (vorticity, Schlieren, etc.), and writes Silo/HDF5 files for VisIt or ParaView.

Each phase is an independent MPI program. The simulation phase is where nearly all compute time is spent.

Directory Layout

src/
    common/         Shared modules used by all three phases
    pre_process/    Pre-process source (grid generation, patch construction)
    simulation/     Simulation source (solver core, physics models)
    post_process/   Post-process source (derived quantities, formatted I/O)

Shared modules in src/common/ include MPI communication, derived types, variable conversion, and utility functions. They are compiled into each phase.

Key Data Structures

Two arrays carry the solution through the entire simulation:

Variable Contents When Used
q_cons_vf Conservative variables: \(\alpha\rho\), \(\rho u\), \(\rho v\), \(\rho w\), \(E\), \(\alpha\) Storage, time integration, I/O
q_prim_vf Primitive variables: \(\rho\), \(u\), \(v\), \(w\), \(p\), \(\alpha\) Reconstruction, Riemann solving, physics

Both are vector_field types (defined in m_derived_types), which are arrays of scalar_field. Each scalar_field wraps a 3D real array sf(0:m, 0:n, 0:p) representing one variable on the grid.
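
A minimal sketch of this nesting is shown below; the authoritative definitions live in m_derived_types, and the vf component name and attributes here are assumptions for illustration:

    ! Sketch only; see m_derived_types for the real definitions.
    type scalar_field
        real, allocatable, dimension(:, :, :) :: sf          ! one variable, sf(0:m, 0:n, 0:p)
    end type scalar_field

    type vector_field
        type(scalar_field), allocatable, dimension(:) :: vf  ! one scalar_field per variable
    end type vector_field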

The index layout within q_cons_vf depends on the flow model:

For model_eqns == 2 (5-equation, multi-fluid):
Index:   1 .. num_fluids | num_fluids+1 .. num_fluids+num_vels | E_idx  | adv_idx
Meaning: alpha_k*rho_k   | momentum components                 | energy | volume fractions

Additional variables are appended for bubbles, elastic stress, magnetic fields, or chemistry species when those models are enabled. The total count is sys_size.
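
For illustration, reading the conservative state of a cell with indices (j, k, l) under model_eqns == 2 might look like the sketch below; the vf/sf access pattern follows the types described above, and the variable names on the left are hypothetical:

    alpha_rho_1 = q_cons_vf%vf(1)%sf(j, k, l)               ! alpha_1 * rho_1
    rho_u       = q_cons_vf%vf(num_fluids + 1)%sf(j, k, l)  ! x-momentum
    E_total     = q_cons_vf%vf(E_idx)%sf(j, k, l)           ! mixture total energy
    alpha_1     = q_cons_vf%vf(E_idx + 1)%sf(j, k, l)       ! first volume fraction (start of adv_idx)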

The Simulation Loop

The simulation advances the solution through this call chain each time step:

p_main (time-step loop)
└─ s_perform_time_step
   ├─ s_compute_dt                 [adaptive CFL time step]
   └─ s_tvd_rk                     [Runge-Kutta stages]
      ├─ s_compute_rhs             [assemble dq/dt]
      │  ├─ s_convert_conservative_to_primitive_variables
      │  ├─ s_populate_variables_buffers        [MPI halo exchange]
      │  └─ for each direction (x, y, z):
      │     ├─ s_reconstruct_cell_boundary_values   [WENO]
      │     ├─ s_riemann_solver                     [HLL/HLLC/HLLD]
      │     ├─ s_compute_advection_source_term      [flux divergence]
      │     └─ (physics source terms: viscous, bubbles, etc.)
      ├─ RK update: q_cons = weighted combination of stages
      ├─ s_apply_bodyforces        [if enabled]
      ├─ s_pressure_relaxation     [if 6-equation model]
      └─ s_ibm_correct_state       [if immersed boundaries]

What happens at each stage

  1. Conservative → Primitive: Convert stored q_cons_vf to q_prim_vf (density, velocity, pressure) using the equation of state. This is done by m_variables_conversion.
  2. MPI Halo Exchange: Ghost cells at subdomain boundaries are filled by communicating with neighbor ranks. Handled by m_mpi_proxy.
  3. WENO Reconstruction (m_weno): For each coordinate direction, reconstruct left and right states at cell faces from cell-average primitives using high-order weighted essentially non-oscillatory stencils.
  4. Riemann Solver (m_riemann_solvers): At each cell face, solve the Riemann problem between left and right states to compute intercell fluxes. Available solvers: HLL, HLLC, HLLD.
  5. Flux Differencing (m_rhs): Accumulate the RHS as \(\partial q / \partial t = -\frac{1}{\Delta x}(F_{j+1/2} - F_{j-1/2})\) plus source terms (viscous stress, surface tension, bubble dynamics, body forces, etc.). A one-dimensional sketch of this accumulation follows the list.
  6. Runge-Kutta Update (m_time_steppers): Combine the RHS with the current state using TVD Runge-Kutta coefficients (1st, 2nd, or 3rd order SSP).
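
As a one-dimensional illustration of step 5, the accumulation reduces to a loop like the sketch below; this is not MFC's actual code, flux(j) stands for the face flux \(F_{j+1/2}\) returned by the Riemann solver, and dx is the uniform cell width:

    ! Accumulate -dF/dx into the RHS for cells 0..m.
    ! flux(-1) is the left domain-boundary face flux filled from ghost cells.
    do j = 0, m
        rhs(j) = rhs(j) - (flux(j) - flux(j - 1)) / dx
    end do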

Module Map

MFC has ~80 Fortran modules organized by function. Here is where to look for what:

Solver Core

Module Role
m_rhs Assembles the right-hand side of the governing equations using finite-volume flux differencing, Riemann solvers, and physical source terms
m_time_steppers Total-variation-diminishing (TVD) Runge–Kutta time integrators (1st-, 2nd-, and 3rd-order SSP)
m_weno WENO/WENO-Z/TENO reconstruction with optional monotonicity-preserving bounds and mapped weights
m_riemann_solvers Approximate and exact Riemann solvers (HLL, HLLC, HLLD, exact) for the multicomponent Navier–Stokes equations
m_muscl MUSCL reconstruction with interface sharpening for contact-preserving advection
m_variables_conversion Conservative-to-primitive variable conversion, mixture property evaluation, and pressure computation

Physics Models

Module Role
m_viscous Computes viscous stress tensors and diffusive flux contributions for the Navier–Stokes equations
m_surface_tension Computes capillary source fluxes and color-function gradients for the diffuse-interface surface tension model
m_bubbles Shared bubble-dynamics procedures (radial acceleration, wall pressure, sound speed) for ensemble- and volume-averaged models
m_bubbles_EE Computes ensemble-averaged (Euler–Euler) bubble source terms for radius, velocity, pressure, and mass transfer
m_bubbles_EL Tracks Lagrangian bubbles and couples their dynamics to the Eulerian flow via volume averaging
m_bubbles_EL_kernels Kernel functions (Gaussian, delta) that smear Lagrangian bubble effects onto the Eulerian grid
m_qbmm Quadrature-based moment methods (QBMM) for polydisperse bubble moment inversion and transport
m_hyperelastic Computes the left Cauchy–Green deformation tensor and hyperelastic stress source terms
m_hypoelastic Computes hypoelastic stress-rate source terms and damage-state evolution
m_phase_change Phase transition relaxation solvers for liquid-vapor flows with cavitation and boiling
m_chemistry Multi-species chemistry interface for thermodynamic properties, reaction rates, and transport coefficients
m_acoustic_src Applies acoustic pressure source terms including focused, planar, and broadband transducers
m_body_forces Computes gravitational and user-defined body force source terms for the momentum equations
m_pressure_relaxation Pressure relaxation for the six-equation multi-component model via Newton–Raphson equilibration and volume-fraction correction

Boundary Conditions

Module Role
m_cbc Characteristic boundary conditions (CBC) for slip walls, non-reflecting subsonic inflow/outflow, and supersonic boundaries
m_compute_cbc Characteristic boundary condition (CBC) computations for subsonic inflow, outflow, and slip walls
m_boundary_common Noncharacteristic and processor boundary condition application for ghost cells and buffer regions
m_ibm Ghost-node immersed boundary method: locates ghost/image points, computes interpolation coefficients, and corrects the flow state
m_igr Information geometric regularization (IGR), an alternative to WENO reconstruction plus Riemann solvers for shock capturing
m_ib_patches Immersed boundary patch geometry constructors for 2D and 3D shapes
m_compute_levelset Computes signed-distance level-set fields and surface normals for immersed-boundary patch geometries

I/O and Startup

Module Role
m_start_up Reads and validates user inputs, loads existing grid/IC data, and initializes pre-process modules
m_data_output Writes grid and initial condition data to serial or parallel output files
m_data_input Reads raw simulation grid and conservative-variable data for a given time-step and fills buffer regions
m_delay_file_access Rank-staggered file access delays to prevent I/O contention on parallel file systems

Infrastructure

Module Role
m_derived_types Shared derived types for field data, patch geometry, bubble dynamics, and MPI I/O structures
m_global_parameters Defines global parameters for the computational domain, simulation algorithm, and initial conditions
m_mpi_common MPI communication layer: domain decomposition, halo exchange, reductions, and parallel I/O setup
m_mpi_proxy Broadcasts user inputs and decomposes the domain across MPI ranks for pre-processing
m_constants Compile-time constant parameters: default values, tolerances, and physical constants
m_precision_select Working-precision kind selection (half/single/double) and corresponding MPI datatype parameters
m_helper Utility routines for bubble model setup, coordinate transforms, array sampling, and special functions
m_helper_basic Basic floating-point utilities: approximate equality, default detection, and coordinate bounds
m_compile_specific Platform-specific file and directory operations: create, delete, inquire, getcwd, and basename
m_fftw Forward and inverse FFT wrappers (FFTW/cuFFT/hipFFT) for azimuthal Fourier filtering in cylindrical geometries
m_nvtx NVIDIA NVTX profiling API bindings for GPU performance instrumentation
m_model Binary STL file reader and processor for immersed boundary geometry
m_finite_differences Finite difference operators for computing divergence of velocity fields
m_checker Checks pre-process input file parameters for compatibility and correctness
m_checker_common Shared input validation checks for grid dimensions and AMD GPU compiler limits
m_sim_helpers Simulation helper routines for enthalpy computation, CFL calculation, and stability checks
m_derived_variables Derives diagnostic flow quantities (vorticity, speed of sound, numerical Schlieren, etc.) from conservative and primitive variables

Pre-Process

Module Role
m_grid Generates uniform or stretched rectilinear grids with hyperbolic-tangent spacing
m_icpp_patches Constructs initial condition patch geometries (lines, circles, rectangles, spheres, etc.) on the grid
m_initial_condition Assembles initial conditions by layering prioritized patches via constructive solid geometry
m_assign_variables Assigns initial primitive variables to computational cells based on patch geometry
m_check_patches Validates geometry parameters and constraints for initial condition patches
m_check_ib_patches Validates geometry parameters and constraints for immersed boundary patches
m_perturbation Perturbs initial mean flow fields with random noise, mixing-layer instabilities, or simplex noise
m_simplex_noise 2D and 3D simplex noise generation for procedural initial condition perturbations
m_boundary_conditions Applies spatially varying boundary condition patches along domain edges and faces

MPI Parallelization

The computational domain is decomposed into subdomains via MPI_Cart_create. Each rank owns a contiguous block of cells in (x, y, z). Ghost cells of width buff_size surround each subdomain and are filled by halo exchange before each RHS evaluation.
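
For orientation, a minimal self-contained sketch of this decomposition pattern is shown below; it is not MFC's actual routines (those live in m_mpi_common and m_mpi_proxy), and only the x-direction neighbor lookup is included:

    program cart_decomp_sketch
        use mpi
        implicit none
        integer :: ierr, num_ranks, comm_cart, left, right
        integer :: dims(3)
        logical :: periods(3)

        call MPI_Init(ierr)
        call MPI_Comm_size(MPI_COMM_WORLD, num_ranks, ierr)

        dims    = 0          ! let MPI pick the rank factorization
        periods = .false.    ! non-periodic domain boundaries
        call MPI_Dims_create(num_ranks, 3, dims, ierr)
        call MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, .true., comm_cart, ierr)

        ! Neighbor ranks in x; buff_size ghost layers would be exchanged with
        ! these (e.g. via MPI_Sendrecv) before each RHS evaluation.
        call MPI_Cart_shift(comm_cart, 0, 1, left, right, ierr)

        call MPI_Finalize(ierr)
    end program cart_decomp_sketch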

On GPUs, the same domain decomposition applies. GPU kernels operate on the local subdomain, with explicit host-device transfers for MPI communication (unless GPU-aware MPI / RDMA is available).

Adding New Physics

To add a new source term or physics model:

  1. Create a module in src/simulation/ (e.g., m_my_model.fpp)
  2. Add initialization/finalization subroutines called from m_start_up
  3. Add RHS contributions called from the dimensional loop in m_rhs:s_compute_rhs
  4. Add parameters to m_global_parameters and input validation to m_checker
  5. Add a module-level brief (enforced by the linter in lint_docs.py)
  6. Add the module to docs/module_categories.json so it appears in this page

Follow the pattern of existing modules like m_body_forces (simple) or m_viscous (more involved) as a template.
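
A hypothetical skeleton of such a module is sketched below; the subroutine names and argument lists are illustrative only, and the GPU directives and Fypp macros used by MFC's real modules are omitted:

    module m_my_model

        use m_derived_types      ! scalar_field
        use m_global_parameters  ! sys_size and model flags

        implicit none
        private
        public :: s_initialize_my_model_module, s_compute_my_model_rhs, &
                  s_finalize_my_model_module

    contains

        subroutine s_initialize_my_model_module()
            ! Allocate work arrays; read any inputs added to m_global_parameters.
        end subroutine s_initialize_my_model_module

        subroutine s_compute_my_model_rhs(q_prim_vf, rhs_vf)
            ! Add this model's source term into rhs_vf; called once per
            ! Runge-Kutta stage from the RHS assembly in m_rhs.
            type(scalar_field), dimension(sys_size), intent(in)    :: q_prim_vf
            type(scalar_field), dimension(sys_size), intent(inout) :: rhs_vf
        end subroutine s_compute_my_model_rhs

        subroutine s_finalize_my_model_module()
            ! Release work arrays.
        end subroutine s_finalize_my_model_module

    end module m_my_model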

Page last updated: 2026-02-22