Geological carbon sequestration will play a critical role in decarbonization and in facilitating the transition to clean energy systems. Because CO2 is highly mobile, ensuring its safe and permanent injection into subsurface geological formations involves monitoring over larger spatial domains and longer time periods than is typical for hydrocarbon reservoirs. Such efforts can benefit from simulation tools capable of modeling key trapping mechanisms, particularly tools optimized for speed and scalability on high-performance computing systems. Using isothermal versions of the SPE11B and SPE11C benchmark cases, we conduct a mesh refinement study simulating CO2 injection into kilometer-scale rock formations at centimeter resolution with the GEOS open-source simulation framework. We focus on how mesh refinement improves the accuracy of convective mixing in both two- and three-dimensional simulations. The computational costs associated with achieving a converged solution highlight the need for predictive upscaling techniques. A systematic performance scaling analysis, covering both CPU and GPU architectures, complements the results section.
Geological carbon sequestration (GCS) consists of injecting carbon dioxide into deep geological formations for permanent storage. GCS has the potential to offer a scalable decarbonization solution for our energy system (Krevor et al. 2023). It involves capturing carbon dioxide (CO2) emissions from concentrated sources (power plants and industrial facilities) or from the atmosphere (direct air capture), and injecting it into underground geological formations such as saline aquifers or depleted hydrocarbon reservoirs. If deployed at scale, GCS can dispose of large amounts of CO2 safely and permanently, preventing its release into the atmosphere. GCS is an enabling technology in the decarbonization of electricity production, helping to meet the increasing energy needs of the world and of sectors with hard-to-abate emissions, such as the steel or cement industries (International Energy Agency 2025).
To evaluate and deploy GCS projects, engineers and architects must quantify the impact of injecting large amounts of CO2 into deep porous formations during and after operations. One of the most important trapping mechanisms to evaluate the storage capacity of deep saline aquifers is the convective dissolution of the injected CO2 in the resident brine. When CO2 and brine mix, CO2 dissolves and increases the brine density, thereby triggering gravitational instabilities and the formation of CO2-rich brine fingers that drive convective flow. These convective movements help to further dissolve the CO2 plume, with CO2-rich brine slowly sinking toward the base rock, while the lighter CO2-poor brine rises to allow more dissolution to occur. High-fidelity numerical simulations, which represent one of several useful tools in assessing the impact of CO2 injection, must account for these convective dissolution phenomena that continue to occur long after the end of injection operations. Previous studies have demonstrated that fine grid resolution is required to minimize numerical dispersion errors and capture small-scale convective dissolution dynamics that influence large-scale fingering structures and late-time behavior (Riaz et al. 2006; Xu et al. 2006; Pau et al. 2010; Elenius and Johannsen 2012). Simulating the evolution of CO2 over centuries in basin-scale geological formations therefore poses a computational challenge due to the complexity of plume dynamics and the high grid resolution required for accuracy.
In this study, we investigate the impact of mesh resolution on convective mixing dynamics in isothermal simulations of geological CO2 storage. Our analysis is based on the 11th Society of Petroleum Engineers Comparative Solution Project (SPE11-CSP), with a focus on two models: SPE11B, representing a vertical two-dimensional (2D) field-scale transect, and SPE11C, a full three-dimensional (3D) field-scale model (Nordbotten et al. 2023). The SPE11-CSP provided an opportunity to collegially exchange ideas about GCS modeling and evaluate the quality and performance of numerical simulations. Recent studies have highlighted the significant effect of both grid resolution and grid orientation (Holme et al. 2025; Hadjisotiriou et al. 2025). To complement the mesh convergence analysis of finger development conducted for SPE11B, a further 2D study is also discussed. Note that we adhere to the benchmark setup outlined in SPE11, except for the assumption of isothermal conditions. Although thermal effects, particularly those arising from cold CO2 injection, can be significant, we adopted an isothermal formulation to reduce computational cost and allow for the exploration of much finer spatial resolutions. Thermal effects are often localized and may not substantially influence bulk system behavior. Therefore, we believe the isothermal results provide valuable and complementary insights into the overall dynamics. Our findings demonstrate that extreme mesh refinement alone is insufficient to achieve convergence in the SPE11B problem. The idealized case highlights the importance of adequate diffusion or dispersion mechanisms to resolve convective mixing dynamics. These results underscore the need for upscaling techniques to bridge the gap between computational feasibility and physical accuracy in reservoir-scale simulations.
For this work we utilize GEOS, a performance-portable, open-source simulation framework focused on subsurface energy and storage applications (Settgast et al. 2024). The framework, designed to facilitate the modeling of tightly coupled multi-physics processes relevant to subsurface reservoir systems, was initially developed to target hydraulic fracture propagation in enhanced geothermal systems (Settgast et al. 2017). The platform has undergone major redevelopment through large-scale computing initiatives and expanded collaborations (Settgast et al. 2017). Recent efforts focus on simulating tightly coupled thermal, hydraulic, and mechanical processes for applications such as geological carbon storage, supported by advanced numerical methods and scalable solvers (Bui et al. 2021; Magri et al. 2025). A performance portability model built on modern programming abstractions enables efficient execution across diverse computing platforms, from CPUs to exascale GPU systems (Beckingsale et al. 2019, 2020).
In this paper, we begin by presenting the mathematical model employed here, followed by its discretization into a numerical form. The model is then applied to the 2D SPE11B case to simulate, visualize and quantify the role of convective processes in determining storage capacity. To emphasize the impact of fine-scale dynamics, we compare eleven mesh resolutions, down to 21 cm grid blocks and 696.7 million degrees of freedom (DoF), and show how storage capacity evolves across these scales. This case also allows us to evaluate the scaling performance of the implementation. Next, we extend the analysis to the full 3D SPE11C case using three resolution models. We examine how storage capacity responds to model refinement and provide a detailed assessment of the numerical behavior of the formulation and implementation. We conclude by discussing the importance of capturing three-dimensional convective finger developments at a fine resolution to estimate plume extension and storage potential with accuracy, during and after injection.
In this section, we present the formulation implemented in GEOS to model compositional multiphase flow and transport of a two-phase, two-component system in porous media. The fluid is modeled as a CO2-brine mixture that includes two components, H2O and CO2, transported by one or two fluid phases: the water-rich wetting liquid phase (brine) and the CO2-rich nonwetting gas phase. Note that the CO2-rich "gas" phase can be in a supercritical, liquid, or gaseous state. We use the subscripts $\ell$ and $g$ to denote the liquid and gas phase, respectively. In this work, the water component is present only in the liquid phase, while the CO2 component can be present in the gas phase and be dissolved in the brine phase. In contrast to the original definition of the SPE11-CSP benchmark, two simplifications are introduced: (i) thermal effects are neglected, that is, the temperature, although varying spatially with depth, does not change with time; and (ii) dispersion effects are not accounted for. Although thermal effects, particularly those associated with cold CO2 injection, are important, isothermal simulations remain valuable for several reasons. They adequately capture key dissolution mechanisms, provide a reasonable approximation of plume migration, and allow for robust assessment of pressure changes associated with injection. This approach significantly reduces computational cost while preserving the essential physics needed for scenario evaluation.
We consider a nonwetting gas phase pressure ($p_g$), global component densities ($\rho_c$) formulation. Here, $\boldsymbol{\rho} = (\rho_{\mathrm{H_2O}}, \rho_{\mathrm{CO_2}})$ is the vector that collects the global density of each mobile component $c$, with $\rho_c = \sum_{m \in \{\ell, g\}} x_{cm} \rho_m s_m$, and $x_{cm}$, $\rho_m$, $s_m$ the phase component fraction, mass density, and saturation, respectively.
Denoting by $\Omega$ and $\mathbb{T} = (0, t_{\max}]$ the domain and the time interval of interest, the initial-boundary value problem of interest reads as follows: find $p_g$ and $\boldsymbol{\rho}$ such that:

$$\frac{\partial (\phi \rho_c)}{\partial t} + \nabla \cdot \boldsymbol{f}_c = q_c \quad \text{in } \Omega \times \mathbb{T}, \qquad c \in \{\mathrm{H_2O}, \mathrm{CO_2}\}, \tag{1a}$$

$$s_\ell + s_g = 1 \quad \text{in } \Omega \times \mathbb{T}, \tag{1b}$$
subject to suitable boundary and initial conditions.
In Eqs. (1a) and (1b), the following quantities are introduced:
$\phi$ is the porosity, $q_c$ is the source/sink term for component $c$, and $\boldsymbol{f}_c$ is the mass flux for component $c$, which consists of an advective part, $\boldsymbol{f}_c^{a}$, and a diffusive part, $\boldsymbol{f}_c^{d}$. In particular, $\boldsymbol{f}_c^{a}$ accounts for all the phase flux contributions based on the Darcy-Muskat law, namely

$$\boldsymbol{f}_c^{a} = -\sum_{m \in \{\ell, g\}} x_{cm} \rho_m \frac{k_{rm}}{\mu_m} \boldsymbol{K} \left( \nabla p_m - \rho_m \boldsymbol{g} \right),$$
with $k_{rm}$, $\mu_m$, and $p_m$ the relative permeability, viscosity, and pressure of phase $m$, respectively. The wetting liquid phase pressure is calculated from the relationship $p_\ell = p_g - p_c$, where $p_c$ is the capillary pressure, a given function of the saturation. $\boldsymbol{K}$ is the second-order permeability tensor, and $\boldsymbol{g}$ represents the gravity vector. The diffusive flux is defined as:

$$\boldsymbol{f}_c^{d} = -\sum_{m \in \{\ell, g\}} \phi \, s_m \rho_m D_{cm} \nabla x_{cm},$$
where $D_{cm}$ is the diffusivity of component $c$ in phase $m$.
In addition to Eq. (1b), the fluid properties are updated using the additional closure relation

$$\sum_{c} x_{cm} = 1, \qquad m \in \{\ell, g\},$$
and thermodynamic equilibrium constraints based on the fugacity of each component in each of the phases, which determine how the different components partition into the different phases.
Assuming a constant known temperature, the update procedure involves the following steps. First, the phase volume fractions ($\nu_m$) and phase component fractions ($x_{cm}$) are computed as a function of pressure ($p_g$) and total component fractions ($z_c$) using flash calculations. Then, phase densities ($\rho_m$) and phase viscosities ($\mu_m$) are computed as a function of pressure and the updated phase component fractions. Following this, the phase relative permeabilities ($k_{rm}$) are computed as a function of the updated phase volume fractions ($\nu_m$). More detail can be found in the online documentation of GEOS (Settgast et al. 2024).
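To make the sequencing concrete, the following Python sketch walks a single cell through that update order. The constant-solubility flash, linearized densities, and constant viscosities are deliberately toy placeholders, not the Duan-Sun, Span-Wagner, Fenghour, or NIST models cited below.

```python
import numpy as np

# Toy two-component (H2O, CO2), two-phase flash: the gas phase is pure CO2
# and the brine holds at most X_CO2_SAT of CO2 (a placeholder constant,
# not a benchmark value).
X_CO2_SAT = 0.05

def flash(z_co2):
    """Return the gas phase mass fraction and the liquid CO2 fraction."""
    if z_co2 <= X_CO2_SAT:                       # all CO2 dissolves
        return 0.0, z_co2
    nu_g = (z_co2 - X_CO2_SAT) / (1.0 - X_CO2_SAT)
    return nu_g, X_CO2_SAT

def update_fluid_state(p, z_co2):
    """Property-update sweep in the order described above; linearized
    densities and constant viscosities stand in for the real correlations."""
    nu_g, x_co2_l = flash(z_co2)                 # 1. flash
    rho_g = 1.8e-5 * p                           # 2. toy gas EOS [kg/m^3]
    rho_l = 1000.0 * (1.0 + 0.3 * x_co2_l)       #    dissolved CO2 densifies brine
    mu_g, mu_l = 4.0e-5, 6.0e-4                  #    viscosities [Pa s]
    v_g, v_l = nu_g / rho_g, (1.0 - nu_g) / rho_l
    s_g = v_g / (v_g + v_l)                      #    mass -> volume fraction
    kr_g, kr_l = s_g**2, (1.0 - s_g)**2          # 3. simple Corey relperms
    return s_g, rho_g, rho_l, mu_g, mu_l, kr_g, kr_l

print(update_fluid_state(p=2.0e7, z_co2=0.10))
```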
For the partitioning of the CO2 component into the wetting (water) and nonwetting (gas) phases, we follow Duan and Sun (2003), who present a thermodynamic model for the solubility of CO2 in brine. In this case, the salinity of the water is assumed to be zero. Density of the gas phase follows Span and Wagner (1996), whereas gas viscosity follows Fenghour et al. (1998). For an isothermal analysis, the former is essentially a function of pressure, while the latter is just a constant value. For the zero-salinity water phase, pure water properties obtained from the NIST database (Lemmon et al. 1998) are used for the liquid density and viscosity. The modification from García (2001) is applied to the pure water properties to account for the presence of dissolved CO2.
The constitutive relations for relative permeability and capillary pressure are modeled as proposed in Nordbotten et al. (2023):

$$k_{rm}(s_m) = k_{rm}^{\max} \left( \frac{s_m - s_m^{res}}{1 - s_\ell^{res} - s_g^{res}} \right)^{c_m}, \qquad p_c(s_\ell) = p_e \left( \frac{s_\ell - s_\ell^{res}}{1 - s_\ell^{res}} \right)^{-c_p},$$
in which $k_{rm}^{\max}$ is the maximum relative permeability, $p_e$ is the entry pressure, $s_m^{res}$ is the residual saturation of phase $m$, and $c_m$ and $c_p$ are the Brooks and Corey shape exponents (Brooks and Corey 1966), respectively. Note that these relations are evaluated via linear interpolation of tabulated values provided for each facies.
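Since the simulator evaluates these curves from tables, a minimal sketch of the lookup is a piecewise-linear interpolation over tabulated points; the table values below are illustrative placeholders, not the benchmark-provided data for any facies.

```python
import numpy as np

# Hypothetical per-facies tables (saturation, kr, pc); only the lookup
# mechanism mirrors the implementation described above.
s_tab  = np.linspace(0.0, 1.0, 11)                  # wetting saturation nodes
kr_tab = np.maximum((s_tab - 0.2) / 0.8, 0.0)**2    # Brooks-Corey-type shape
pc_tab = 5.0e3 * np.maximum((s_tab - 0.2) / 0.8, 1e-3)**-0.5  # entry pc ~5 kPa

def rel_perm(s_w):
    """Piecewise-linear interpolation of the tabulated relative permeability."""
    return np.interp(s_w, s_tab, kr_tab)

def cap_pressure(s_w):
    """Piecewise-linear interpolation of the tabulated capillary pressure."""
    return np.interp(s_w, s_tab, pc_tab)

print(rel_perm(0.6), cap_pressure(0.6))
```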
Initially, the domain is assumed to be fully saturated with pure water. As the simulations are considered isothermal, no thermo-hydrostatic equilibrium step is required, and the temperature field is initially imposed honoring a geothermal gradient of 25 K/km. The pressure distribution is initialized using a hydrostatic equilibrium procedure by solving a classical ODE system (see, e.g., Lie (2019)).
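The hydrostatic ODE is simply $dp/dz = \rho(p, T(z))\, g$ integrated from a datum. A minimal sketch follows; the datum pressure, surface temperature, and brine compressibility coefficients are illustrative assumptions, not benchmark values.

```python
import numpy as np

g = 9.81                        # gravity [m/s^2]
p_datum, z_datum = 2.0e7, 0.0   # assumed datum pressure [Pa] and depth [m]

def rho_brine(p, T):
    """Toy slightly-compressible brine density; a stand-in for the
    NIST-based correlation used in the simulator."""
    return 1000.0 * (1.0 + 4.5e-10 * (p - 1.0e5)) * (1.0 - 3.0e-4 * (T - 293.15))

def temperature(z):
    return 313.15 + 0.025 * z   # 25 K/km geothermal gradient, as in the setup

def hydrostatic_profile(z_cells, dz=1.0):
    """March dp/dz = rho(p, T(z)) * g downward from the datum (explicit
    Euler on a fine sub-grid), then sample at the cell centers."""
    z_fine = np.arange(z_datum, z_cells.max() + dz, dz)
    p = np.empty_like(z_fine)
    p[0] = p_datum
    for i in range(1, len(z_fine)):
        p[i] = p[i - 1] + rho_brine(p[i - 1], temperature(z_fine[i - 1])) * g * dz
    return np.interp(z_cells, z_fine, p)

print(hydrostatic_profile(np.array([0.0, 600.0, 1200.0])))
```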
The no-flow boundary conditions are complemented by imposing pore volume multipliers defined as

$$\Lambda(\boldsymbol{x}) = \begin{cases} 1 + \dfrac{w}{d} & \text{if } \|\boldsymbol{x} - \boldsymbol{x}_\partial(\boldsymbol{x})\| \le d, \\[4pt] 1 & \text{otherwise}, \end{cases} \tag{6}$$

where $w$ (with units of m, nonzero in facies 2 to 5) is a volume-per-area ratio, $d$ is a prescribed distance that controls the size of the buffer volumes, $\boldsymbol{x}$ is the spatial coordinate, and $\boldsymbol{x}_\partial(\boldsymbol{x})$ denotes the closest point on the boundary to $\boldsymbol{x}$ (Nordbotten et al. 2023). This formulation introduces a large control volume near the lateral boundaries, effectively mimicking the presence of a connected aquifer without requiring explicit modeling of a larger domain. In the refinement studies that follow, the discrete facies indicator function is either preserved from the coarsest level or re-discretized according to the current level of mesh refinement. To ensure consistency across mesh resolutions, a fixed value of $d = 10$ m is used in most cases. This results in a single-cell layer on each side of the domain for the coarsest mesh, while encompassing multiple cells as the mesh is refined.
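A short sketch shows how Eq. (6) plays out on the coarsest SPE11B grid; the value $w = 5 \times 10^4$ m used here is an illustrative placeholder (the benchmark prescribes the actual volume-per-area ratio per facies).

```python
import numpy as np

def pv_multiplier(xc, w, d, x_min=0.0, x_max=8400.0):
    """Cells whose centers lie within distance d of a lateral boundary get
    their pore volume scaled by 1 + w/d; all other cells are untouched."""
    dist = np.minimum(xc - x_min, x_max - xc)   # distance to nearest boundary
    return np.where(dist <= d, 1.0 + w / d, 1.0)

xc = np.arange(5.0, 8400.0, 10.0)          # coarse-mesh (10 m) cell centers
mult = pv_multiplier(xc, w=5.0e4, d=10.0)  # w is a hypothetical value
print(mult[:2], mult[-2:])                 # only the outermost columns change
```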
The nonlinear system of equations Eqs. (1a) and (1b) is discretized using a fully-implicit time-marching scheme (backward Euler). The classical finite-volume approach used in reservoir simulation is adopted to discretize in space both mass balance equations (1a) and the saturation constraint (1b). The discrete approximation of inter-cell component mass flux is based on a linear two-point flux approximation (TPFA) scheme combined with standard single-point upstream weighting (Aziz and Settari 1979).
Let $\mathcal{T}$ be the set of control volumes in the mesh used to discretize the domain of interest, i.e. $\overline{\Omega} = \bigcup_{K \in \mathcal{T}} \overline{K}$. Let $0 = t^0 < t^1 < \ldots < t^N = t_{\max}$ be a partition of the time domain $\mathbb{T}$, such that the finite-difference approximation to a time-dependent quantity $\chi$ evaluated at time $t^n$ is denoted by $\chi^n$. Introducing the following cell-centered, piecewise-constant approximations for $p_g$ and $\rho_c$:

$$p_g(\boldsymbol{x}, t^n) \approx \sum_{K \in \mathcal{T}} p_{K}^{n} \, \mathbb{1}_K(\boldsymbol{x}), \qquad \rho_c(\boldsymbol{x}, t^n) \approx \sum_{K \in \mathcal{T}} \rho_{c,K}^{n} \, \mathbb{1}_K(\boldsymbol{x}),$$

with $\mathsf{p}^n$, $\mathsf{\rho}_{\mathrm{H_2O}}^n$, and $\mathsf{\rho}_{\mathrm{CO_2}}^n$ the coefficient vectors containing the unknown degrees of freedom at time $t^n$, the fully discrete form of the initial boundary value problem (1) may be stated as: given the discrete solution $\mathsf{y}^n = (\mathsf{p}^n, \mathsf{\rho}_{\mathrm{H_2O}}^n, \mathsf{\rho}_{\mathrm{CO_2}}^n)$ at time $t^n$, find $\mathsf{y}^{n+1}$ such that:

$$\mathsf{r}(\mathsf{y}^{n+1}) = \mathsf{0}. \tag{8}$$
Here, $\mathsf{r}_T$ denotes the discrete total mass conservation residual function, defined as the sum of the H2O and CO2 mass balance residuals. The terms $\mathsf{r}_{\mathrm{CO_2}}$ and $\mathsf{r}_s$ represent the nonlinear discrete residual functions for the CO2 mass balance and the saturation constraint, respectively. A Newton-Krylov method is employed to solve the nonlinear system (8), incorporating a backtracking approach to enhance robustness. The solution at time $t^{n+1}$ is obtained as follows. Starting from an initial guess $\mathsf{y}^{n+1,0}$, for $k = 0, 1, \ldots$ until convergence:

$$\mathsf{J}^{k} \, \delta \mathsf{y}^{k+1} = -\mathsf{r}^{k}, \tag{9a}$$

$$\mathsf{y}^{n+1,k+1} = \mathsf{y}^{n+1,k} + \alpha^{k} \, \delta \mathsf{y}^{k+1}, \tag{9b}$$

with $\mathsf{J}^{k} = \left.\partial \mathsf{r} / \partial \mathsf{y}\right|_{\mathsf{y}^{n+1,k}}$ the Jacobian matrix associated with the nonlinear residual function $\mathsf{r}$, and $\alpha^{k} \in (0, 1]$ an appropriate line-search parameter. At each nonlinear iteration $k$, the method involves solving the linear system defined in Eq. (9a), where the system matrix is the block-structured Jacobian evaluated at the current iterate.
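A compact sketch of the damped Newton loop in Eqs. (9a)-(9b) follows; a dense direct solve stands in for the preconditioned GMRES iteration described next, and the halving-based backtracking rule is one simple choice of line search, not necessarily the exact strategy used in GEOS.

```python
import numpy as np

def newton_backtracking(residual, jacobian, y0, tol=1e-8, max_it=20):
    """Damped Newton: solve J dy = -r, then halve the step until the
    residual norm decreases (simple backtracking for robustness)."""
    y = y0.copy()
    for _ in range(max_it):
        r = residual(y)
        if np.linalg.norm(r) < tol:
            return y
        dy = np.linalg.solve(jacobian(y), -r)   # stands in for GMRES + MGR
        alpha = 1.0
        while (np.linalg.norm(residual(y + alpha * dy)) >= np.linalg.norm(r)
               and alpha > 1e-4):
            alpha *= 0.5                        # backtrack
        y = y + alpha * dy
    return y

# Smoke test on a small nonlinear system with roots at (1, 1) and (-1, -1).
f = lambda y: np.array([y[0]**2 + y[1]**2 - 2.0, y[0] - y[1]])
J = lambda y: np.array([[2 * y[0], 2 * y[1]], [1.0, -1.0]])
print(newton_backtracking(f, J, np.array([3.0, -1.0])))
```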
The linear system is solved using a restarted version of the Generalized Minimal Residual (GMRES) method with right-preconditioning (Saad and Schultz 1986), with a prescribed relative convergence tolerance. The preconditioning strategy relies on the MultiGrid Reduction (MGR) framework (Ries et al. 1983), as implemented in hypre, a high-performance linear solver and preconditioner package (Falgout and Yang 2002). MGR provides a flexible framework for designing physics-informed algebraic preconditioning algorithms and is employed in this work to realize the Constrained Pressure Residual (CPR) strategy (Wallis 1983; Wallis et al. 1985; Cao et al. 2005). By construction, MGR relies on sparse matrix-matrix multiplications during the coarse-grid reduction and interpolation phases. While these operations introduce additional communication costs, they are offset by the substantial gains in numerical scalability and robustness compared to alternative preconditioners such as single-level domain decomposition (e.g., additive Schwarz). Further details on the CPR implementation within hypre can be found in Wang et al. (2017), Bui et al. (2020, 2021), and Magri et al. (2025).
The SPE11B case is a re-scaling of the FluidFlower laboratory experiment (Flemisch et al. 2024). Spanning 8.4 km in lateral length and 1.2 km in thickness, it has a nominal depth of approximately 2 km. It consists of seven horizontal facies, which exhibit different intrinsic properties, residual water saturations, and entry pressures. It also features a slanted heterogeneous barrier, which spans multiple facies on its bottom half, and two symmetrical structures, either highly permeable (on the right side) or impermeable (on the left side), on its top half. A first injector, active for 50 years, is placed just below the first retention anticline, such that CO2 accumulates until it reaches the heterogeneous barrier. A second injector, with a 25-year delayed start, is placed just above the first anticline apex; it fills multiple layers and eventually accumulates CO2 under the cap rock. Both injectors operate at a fixed injection rate of 0.035 kg/(s·m), equivalent to around 1,103 tons of injected CO2 per year, and are shut down at the end of the 50-year injection period. The field is monitored up to 1,000 years. A detailed description and values for all heterogeneous parameters can be found in Nordbotten et al. (2023).
Due to buoyancy, the injected CO2 migrates upward and accumulates at the top of the target reservoir. Instability at the interface between the CO2-rich gas phase and the resident brine leads to the formation of convective fingers of dense CO2-rich brine. Capturing this density-driven convection in our simulations is important, as dissolution is one of the primary trapping mechanisms. Theoretical estimates for the grid resolution necessary to resolve gravity-induced fingers have been reported in the literature (Riaz et al. 2006; Xu et al. 2006; Elenius and Johannsen 2012; Kovscek et al. 2024). In Table 1, we provide estimates of the critical wavelength $\lambda_c$ using expressions from Xu et al. (2006), which scale with the characteristic convective length as

$$\lambda_c \propto \frac{\mu D}{k \, \Delta\rho \, g}. \tag{10}$$

This estimation assumes a permeability anisotropy ratio of 0.1. The diffusion coefficient is $10^{-9}$ m²/s, the density difference is 10 kg/m³, and a representative brine viscosity (in Pa·s) is used. The values obtained for the critical wavelength vary by facies. For Facies 5, where significant plume migration takes place, this computation suggests a resolution of the order of 0.1 m (Pruess and Zhang 2008; Pau et al. 2010).
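For intuition about the magnitudes involved, the snippet below evaluates the characteristic length in Eq. (10) for properties loosely representative of Facies 5; the permeability, viscosity, and O(100) stability prefactor are illustrative assumptions, not the exact coefficients of Xu et al. (2006).

```python
# Back-of-the-envelope evaluation of the convective length scale.
k = 1.0e-12           # permeability [m^2] (assumed ~1 Darcy)
mu = 1.0e-3           # brine viscosity [Pa s] (assumed)
D = 1.0e-9            # diffusion coefficient [m^2/s]
drho, g = 10.0, 9.81  # density difference [kg/m^3], gravity [m/s^2]
c0 = 100.0            # O(100) prefactor from linear stability theory (assumed)

L = mu * D / (k * drho * g)          # characteristic length [m]
print(f"L = {L:.2e} m, lambda_c ~ {c0 * L:.2f} m, "
      f"grid ~ {c0 * L / 10:.2f} m for ~10 cells per wavelength")
```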
We study the convergence with respect to the spatial discretization. In this work, we employed regular Cartesian grids without local refinement to avoid assumptions about possible migration paths and ensure an unbiased representation of flow and transport. Adaptive mesh refinement was not employed, as our framework, while optimized for multiphysics coupling and scalable high-performance computing, does not currently offer native support for this capability. We nevertheless acknowledge the potential benefits of such approaches for improving efficiency and local resolution (for example, Faigle et al. (2014); Solovský and Firoozabadi (2025)), but their integration is beyond the scope of this study. As a first experiment, we tested the effect of refining the mesh in the lateral and vertical directions in isolation against uniform grid refinement. To this end, we selected the coarse mesh (block size of 10 × 10 m) as the base resolution (Fig. 1) and adopted three strategies: refinement only in the lateral $x$ direction (with $\Delta z$ fixed at 10 m), only in the vertical $z$ direction (with $\Delta x$ fixed at 10 m), and in both, i.e., uniform refinement. The amount of dissolved CO2 as a fraction of the total mass of injected CO2 at 500 years is reported in Fig. 2. Evidently, the discrepancy between the predictions of the three approaches grows with refinement, and a significant difference is observed for the $x$-only and $z$-only refinement strategies with respect to uniform refinement. Therefore, we focus exclusively on uniform refinement for the rest of the convergence studies for SPE11B.
To conduct an extensive refinement study, we performed a series of uniform subdivisions from a coarsest gridblock size of 10 × 10 m down to approximately 20.8 × 20.8 cm (i.e., 40,320 × 5,760 cells), as shown in Table 2. Such aggressive refinement leads to extremely large meshes, which can be computationally demanding. While adaptive or static refinement techniques may not require such fine-grained models, our approach avoids the need for prior knowledge of where refinement is needed and is simpler to implement within the current framework. To highlight the important effect of facies discontinuities, two series of numerical experiments have been performed. In the first series, facies boundaries are considered to be a continuous function of space, and hence are redistributed onto each refined mesh. In the second series, facies are set from their discrete values on the coarsest mesh. The analysis of the injection dynamics is split into the usual injection and post-injection (migration) phases, as these simulation periods exhibit different dominant physical phenomena.
| Refinement | Cell dimensions [m] | Mesh size | Total number of cells | Degrees of freedom |
|---|---|---|---|---|
| base | 10.00 × 10.00 | 840 × 120 | 100,800 | 302,400 |
| 2×2 | 5.00 × 5.00 | 1,680 × 240 | 403,200 | 1,209,600 |
| 3×3 | 3.33 × 3.33 | 2,520 × 360 | 907,200 | 2,721,600 |
| 4×4 | 2.50 × 2.50 | 3,360 × 480 | 1,612,800 | 4,838,400 |
| 6×6 | 1.67 × 1.67 | 5,040 × 720 | 3,628,800 | 10,886,400 |
| 8×8 | 1.25 × 1.25 | 6,720 × 960 | 6,451,200 | 19,353,600 |
| 12×12 | 0.833 × 0.833 | 10,080 × 1,440 | 14,515,200 | 43,545,600 |
| 16×16 | 0.625 × 0.625 | 13,440 × 1,920 | 25,804,800 | 77,414,400 |
| 24×24 | 0.417 × 0.417 | 20,160 × 2,880 | 58,060,800 | 174,182,400 |
| 32×32 | 0.313 × 0.313 | 26,880 × 3,840 | 103,219,200 | 309,657,600 |
| 48×48 | 0.208 × 0.208 | 40,320 × 5,760 | 232,243,200 | 696,729,600 |
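As a consistency check on Table 2, the cell counts and DoF follow directly from uniform $n \times n$ subdivision of the 840 × 120 base grid with three unknowns per cell ($p_g$ and two component densities); a short script reproduces the table:

```python
# Cell counts and DoF for n x n refinement of the 840 x 120 base grid.
base_nx, base_nz, dof_per_cell = 840, 120, 3
for n in (1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48):
    nx, nz = base_nx * n, base_nz * n
    cells = nx * nz
    print(f"{n:>2}x{n:<2}  {8400 / nx:6.3f} m  {nx:>6} x {nz:<5} "
          f"{cells:>11,}  {cells * dof_per_cell:>13,}")
```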
A series of maps of the dissolved CO2 is reported in Fig. 3. With increasing refinement, we observe that the number of gravitational fingers grows while their width decreases. This effect is especially pronounced in Facies 5, where the bulk of the CO2 injected from Well 1 accumulates. We also observe that the main mechanism triggering the gravity fingers is the staircase-like boundary between the gas phase and the resident brine, a consequence of the geometry of the facies and the choice of a Cartesian mesh. These observations are consistent with recent studies on finger formation for SPE11B (Holme et al. 2025).
The convergence of the numerical simulation is assessed from the dissolved mass of CO2 integrated over the domain, as it is one of the key estimators of the storage process. The dissolved mass is monitored both at two characteristic times (Fig. 4) and over the simulation time span (Fig. 5). As shown in Fig. 4a, at 50 years (corresponding to the end of the injection period), there is negligible variation in the amount of dissolved CO2 for grid resolutions finer than 1 m, suggesting that a grid-converged solution has been achieved. However, at 500 years, well after the end of injection, further mesh refinement continues to yield increased dissolution (Fig. 4b). This is also evident from the time evolution of dissolution presented in Fig. 5, where coarse mesh simulations consistently underestimate dissolution during the late post-injection phase. Notably, the discrepancy between coarse and fine resolution predictions becomes more pronounced with increasing mesh refinement, indicating that numerical convergence has not yet been achieved for the fully developed convective fingers that predominantly govern the dissolution process.
To further study finger formation, we conducted separate experiments with a simpler geometry. A 2D square domain in the $x$–$z$ plane was considered for the convergence study. To mimic the conditions of SPE11B, constitutive relations for relative permeability and capillary pressure, and rock properties (porosity and permeability), from Facies 5 were adopted. Furthermore, the top 5 meters of the domain were initialized with a fixed overall CO2 component fraction, while the remainder of the domain was filled with pure water. To preserve similar finger formation across grid resolutions, a small perturbation in the CO2 component fraction, sampled from a uniform distribution, was applied just below the CO2-rich layer. This perturbation is kept fixed with refinement. All domain boundaries were subject to no-flow (Neumann) boundary conditions.
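A minimal sketch of such a perturbed initial condition follows. The domain size, layer thickness, saturation value, perturbation amplitude, and the 1 m perturbation lattice are all illustrative assumptions; the key point is that the random field is sampled once on a fixed lattice, so refining the grid does not change it.

```python
import numpy as np

rng = np.random.default_rng(0)
lattice = rng.uniform(0.0, 1e-3, size=(100, 100))    # fixed 1 m x 1 m field

def initial_z_co2(xc, zc, z_top=100.0, layer=5.0, z_sat=0.05):
    """Overall CO2 fraction at a cell center (xc, zc): saturated top layer,
    fixed perturbation just below it, pure water elsewhere."""
    if zc > z_top - layer:
        return z_sat
    if zc > z_top - 2.0 * layer:
        return lattice[min(int(xc), 99), min(int(zc), 99)]
    return 0.0

print(initial_z_co2(50.3, 97.0), initial_z_co2(50.3, 93.0), initial_z_co2(50.3, 10.0))
```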
We investigate the convergence behavior of gravity-driven fingering for three diffusion coefficients ($10^{-8}$, $5\times10^{-9}$, and $10^{-9}$ m²/s) across a range of grid block resolutions, from 100 cm × 100 cm down to 1 cm × 1 cm. Fig. 7 presents the spatial distribution of the CO2 component fraction in the aqueous phase for each case. As anticipated, lower diffusion coefficients result in a greater number of narrower gravity fingers, since there is less diffusive smoothing of the instability. Furthermore, visual inspection indicates that the fingering patterns converge at different spatial resolutions depending on the diffusion value: approximately 12.5 cm for $10^{-8}$ m²/s, 6.3 cm for $5\times10^{-9}$ m²/s, and 1.5 cm for $10^{-9}$ m²/s.
Convergence in terms of the global mass fraction of dissolved CO2, shown in Fig. 6, further supports these findings quantitatively. Table 3 compares the theoretical estimates of the critical wavelength derived from Eq. (10) (with estimates in both the $x$ and $z$ directions accounting for permeability anisotropy) with the observed resolutions required to capture the gravity fingers. Evidently, the observed convergence resolutions are approximately an order of magnitude smaller than the theoretical predictions.
Overall, this study, conducted in a simplified geometry, suggests that for subsurface properties similar to those of SPE11B Facies 5 and a diffusion coefficient of $10^{-9}$ m²/s, achieving accurate resolution of fingering dynamics would require a grid block size as fine as 1 cm.
| Diffusion [m²/s] | Critical wavelength (in $x$) [m] | Critical wavelength (in $z$) [m] | Observed convergence [m] |
|---|---|---|---|
| 1e-8 | 5.16 | 0.516 | 0.125 |
| 5e-9 | 2.58 | 0.258 | 0.063 |
| 1e-9 | 0.52 | 0.052 | 0.015 |
The simulation runs described in this section were performed on Microsoft® Azure's HBv3 virtual machines (Microsoft 2025), with each compute node consisting of 448 GB of DDR4-3200 DRAM and two 64-core AMD® EPYC™ 7V73X (Milan-X) processors, each equipped with 768 MB of L3 cache based on the 3D V-Cache™ technology. Each node provides 120 physical cores for user workloads, with an additional 8 reserved for the virtualization layer (hypervisor). The DRAM is evenly partitioned across four memory NUMA domains that support a peak bandwidth of 350 GB/s. The internode communication is handled by the NVIDIA® HDR InfiniBand fabric at 200 Gb/s of full-bisection bandwidth.
The results span problem sizes shown in Table 2, corresponding to core counts ranging from 1 to 2,304 and node counts ranging from 1 to 20. In particular, runs up to 64 cores were performed within a single physical compute node, and therefore benefit from fast intranode data movement, based on AMD’s Infinity Fabric™. These cases do not strictly fall under the traditional definition of weak scaling, but rather serve to characterize how the solver infrastructure behaves under increasing problem and resource loads. From 144 cores onward, the experiments scale across multiple nodes, introducing the full cost of distributed memory communication and data distribution. To better understand simulation run-time composition, we observe that most of the total execution time is spent in the linear solver phase, especially at larger scales. In fact, profiling data for representative problem sizes (not shown) confirm that linear solver calls account for more than 80% of the total timestep computation. This highlights the linear solver as the dominant performance bottleneck and strongly motivates our focus on analyzing and improving solver and preconditioner performance.
Fig. 8 illustrates the weak scaling behavior of the linear solver, revealing a distinct performance difference between the construction of the MGR preconditioner (setup phase) and the application of the Krylov solver (solve phase). Overall, the linear solver component demonstrates good scalability: the solver time increases only by about an order of magnitude (from 0.2 s to 2.6 s) despite a 2,304× increase in problem size (from 0.3M to 696.7M DoF). This reflects the high throughput and algorithmic scalability of the underlying Krylov solver and preconditioner infrastructure. In particular, the mild growth in solver time despite the introduction of global synchronization and communication in the inter-node regime (beyond 64 cores) confirms the suitability of this solver stack for large-scale simulations. Remarkably, the setup phase exhibits even more favorable scaling. Despite the inherent complexities of distributed matrix assembly and multilevel preconditioner construction, setup time increases only by a factor of about 6, from 0.18 s at 0.3M DoF to 1.06 s at 696.7M DoF, corresponding to the largest scale test on 2,304 CPU cores. This less pronounced growth across a 2,304× increase in problem size underscores the effectiveness of our parallel algorithms for the memory-intensive and communication-heavy setup phase, despite challenges arising from the increasing depth and complexity of the coarse grid hierarchy in the preconditioner (MGR) and more expensive communication patterns (e.g., sparse matrix-matrix multiplication across nodes). While superlinear scaling in setup is expected in multigrid-based solvers due to the increasing granularity of coarse-level operations, the absolute cost remains modest in comparison to the total time step runtime. Nonetheless, the results also highlight opportunities for further improvement through techniques such as setup reuse across time steps, communication-avoiding algorithms, and better overlapping between communication and computation.
Fig. 9 provides insight into convergence behavior. The average number of Newton iterations per timestep increases moderately with problem size, from 3.1 to 6.6, while the average number of linear iterations per Newton step increases from 11.1 to 16.3. This modest growth indicates stable nonlinear convergence and a well-conditioned linearized system across scales. However, the slow rise in both metrics does suggest increasing stiffness or slight deterioration in coarse-level effectiveness of the multilevel preconditioner, particularly in the high-core count, large-problem regime. In this context, solver iteration counts remain within acceptable bounds, but they motivate further exploration of even more scalable preconditioning strategies.
Fig. 10, on the other hand, illustrates the impact of mesh refinement on temporal resolution by presenting the average timestep size during and after the injection period across a range of grid resolutions. As the grid is refined from 10 m down to 20.8 cm blocks, a clear trend emerges: the average timestep size decreases progressively, primarily driven by convergence considerations of the nonlinear solver. Although not discussed in detail in the present manuscript, we note that in addition to spatial resolution, timestep size can also influence the accuracy of the solution through added numerical diffusion. This reflects the need for finer temporal resolution to accurately resolve the onset and evolution of gravity-driven fingering.
The SPE11C case is a 3D extrusion of SPE11B along a parabolic shape. The domain spans an area of 8.4 × 5 km with a uniform thickness of 1.2 km. The parabola has a maximum elevation at a depth of approximately 150 m. Well 1 is a straight well, whereas Well 2 follows the parabolic extrusion of the geometry and therefore remains within the same facies unit. The initial conditions, rock properties, and injection schedule are identical to those of SPE11B, with 50 years of active injection followed by 950 years of post-injection. The injection rate differs from SPE11B: to fill the 3D volume, the injection rate of SPE11C is 50 kg of CO2 per second, or approximately 1.58 megatonnes per annum (Mtpa). The formulation and governing equations are the same as those for SPE11B. A complete description of the geometric setup and the placement of the wells is given by Nordbotten et al. (2023). Similarly to SPE11B, the boundary conditions incorporate a buffer volume to simulate the inclusion of the domain as part of a larger aquifer structure. The implementation of the buffer volumes adheres to Eq. (6): a finite value of the pore volume multiplier, based on the distance of the cell center from the boundary, is applied to the boundary cells within the relevant facies.
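As a quick check of the quoted rate conversion:

```python
# 50 kg/s of CO2 expressed in megatonnes per annum (Mtpa).
rate_kg_s = 50.0
kg_per_year = rate_kg_s * 365.0 * 24 * 3600
print(f"{kg_per_year / 1.0e9:.2f} Mtpa")   # -> 1.58 Mtpa
```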
Simulations of SPE11C were performed on three mesh sizes: a "coarse" mesh and two refined "medium" and "fine" meshes. Rectilinear meshes were used in all cases, with parts of the mesh rendered inactive to follow the geometric structure. The wells of SPE11C are located in the 3,000 m interval in the middle of the domain, between $y$ = 1,000 m and $y$ = 4,000 m. This region was selected for fine gridding with 10 × 10 × 10 m grid blocks. The grid block size in the $y$ dimension outside this interval was geometrically increased to reduce the total number of grid blocks. The mesh sizes, number of cells, and number of degrees of freedom are reported in Table 4. Vertical-only refinement was chosen for the fine model to keep the overall mesh size manageable while improving resolution in the direction most relevant to the downward migration of fingers. This approach sacrifices lateral accuracy but was considered appropriate given the expected dominance of vertical dynamics.
| Mesh | Cell dimensions [m] | Total number of cells | Number of active cells | Degrees of freedom |
|---|---|---|---|---|
| Coarse | 20 × 20 × 20 | 4,633,200 | 3,955,011 | 11,865,033 |
| Medium | 10 × 10 × 10 | 37,094,400 | 29,790,400 | 89,371,200 |
| Fine | 10 × 10 × 1.667 | 222,566,400 | 178,742,400 | 536,227,200 |
Fig. 11 shows the spatial distribution of CO2 dissolved in water at 600 years using the fine-resolution mesh. The visualization captures the development of gravity-induced fingers that emerge due to density-driven instabilities as CO2-enriched water becomes denser than the surrounding brine. These fingers extend vertically downward from the main dissolution front, forming intricate and well-resolved patterns that reflect the enhanced resolution of the fine mesh. The image illustrates the importance of high spatial resolution in accurately capturing the onset and evolution of convective mixing processes even during the post-injection phase of CO2 storage. The spatial distribution of the dissolved CO2 can also be seen in Fig. 12 at the end of the injection period. Two orthogonal cross sections are presented to highlight the lateral extent of dissolution along both the $x$- and $y$-directions.
The 3D simulation shares several key features with its 2D counterpart. In both cases, gravity-driven fingering arises due to the density contrast between CO2-rich brine and the surrounding fluid. The onset time and vertical propagation of these fingers are qualitatively similar, and start after an initial diffusion-dominated phase. Both simulations require high spatial resolution to capture the fine-scale structure of the fingers accurately; coarser grids tend to suppress or smear these features.
However, notable differences emerge as a result of dimensionality. In 2D, fingers appear as narrow, vertically aligned streaks with limited lateral interactions. In contrast, 3D simulations reveal more complex, branching structures that exhibit lateral spreading, merging, and competition, resulting in more chaotic and intricate flow patterns. This added dimension enhances mixing and dissolution rates due to increased interfacial area and multidirectional flow. The 2D model underestimates these effects, leading to more conservative predictions of dissolution. To get a more detailed view of the fingering, Fig. 13 presents a 2D cross-section at a fixed $y$ location, comparing the coarse, medium and fine resolution models in terms of the mass fraction of CO2 dissolved in water at two time snapshots (50 years and 600 years). The plots illustrate the evolution of gravity-driven fingering resulting from the dissolution of CO2 in the aqueous phase.
In all the models, gravity-induced fingers emerge as denser CO2-rich water sinks through the brine. At 50 years, the dissolved CO2 maps for the three mesh resolutions are quite similar, with barely a hint of gravity finger nucleation. It should be noted that for the medium and fine models there is a spill of the gas to the left of the structure, which migrates upward through a fault to the upper reaches of the model domain. This spill is not evident in the coarse model, which captures the least detail in the structure. By 600 years, however, convective fingers have extended significantly and spread laterally, and more differences between resolutions appear. The spill is more pronounced in the fine model, particularly at 600 years, where the enhanced resolution allows for a better representation of the fault-guided migration pathway and associated dissolution patterns. The fine-resolution model reveals intricate and well-defined finger structures, capturing fine-scale instabilities and enhanced lateral variability. The medium model shows a similar overall pattern but with less spatial detail and more blurred transitions, indicating a significant impact of numerical diffusion due to the lower resolution.
To further assess the effect of the 3D formulation, Fig. 14 shows a vertical cross section through the location of Well 2. The figure shows the mass fraction of dissolved CO2 at 50 years (end of injection) and 600 years (post-injection period). These cross sections show the differences between the coarse, medium and fine model results and their evolution in time. In all models, the plume exhibits a finger-like pattern extending downward, characteristic of gravitational instabilities in a stratified fluid. Gaps appear in the central region of the plume at both time points. These are not artifacts, but are caused by complex 3D lateral flow dynamics. As CO2-rich water sinks, it displaces and mixes with surrounding water, allowing CO2-poor brine to intrude laterally into the plume, creating these low-concentration zones. This visualization underscores the importance of 3D flow processes in the long-term evolution of subsurface CO2 plumes and highlights how density-driven convection and lateral migration shape the distribution of dissolved gases over time.
Finally, we compare the evolution over time of the dissolved mass of CO2 for the coarse, medium and fine meshes of SPE11C. As shown in Fig. 15a, the convective dissolution phenomena are sustained for longer on the fine mesh, ultimately leading to a net 7.5% increase in dissolved mass over the medium mesh. The coarse model shows the lowest dissolution rates, consistent with its resolution, which limits its ability to capture detailed convective mixing. Fig. 15b contrasts SPE11B and SPE11C in terms of the fraction of injected CO2 dissolved in the water phase. Each model is simulated at different spatial resolutions, with the characteristic grid block sizes indicated. Despite using grids of comparable resolution, the 2D model consistently underestimates the fraction of dissolved CO2. This arises from the inherent limitations of 2D simulations in capturing the full complexity of convective dissolution. In 3D, the flow field can develop with more spatial variability in the fingering patterns, which enhances the interfacial area between CO2 and brine, thereby accelerating dissolution. The 2D model, constrained by its planar geometry, cannot replicate these 3D structures, leading to a less efficient representation of the dissolution process.
One of the key deviations from the original SPE11 description was the neglect of thermal effects. To explore this qualitatively, we performed a comparative analysis using the SPE11C model. Specifically, we examined the influence of injecting CO2 at a lower temperature, as was done in the official SPE11-CSP submission, and compared it with an isothermal case.
The comparison was conducted using an intermediate-resolution mesh, which was also used for a separate isothermal simulation included in the analysis, with the results shown in Fig. 16. There are some notable differences, but the overall migration patterns appear similar in both cases. In particular, the extent of fingering, upward migration, and the quantity of dissolved CO2 are comparable.
However, a notable difference arises in the density distribution of the water near the injection region. The injection of cold CO2 locally reduces the water density, which in turn enhances the downward movement of CO2-rich water compared to the isothermal scenario.
This effect is illustrated in Fig. 16, where the water phase mass density shows a distinct low-density plume near the well. As a result, more dissolved CO2 accumulates at lower depths in the thermal model, as shown in Fig. 16. This leads to a reduction in the amount of CO2 that escapes through the fault on the left.
In general, the thermal effect introduces a localized perturbation in the simulation results, mainly influencing the vertical distribution of dissolved CO2 near the injection site. However, the isothermal simulation remains valuable as it captures the broader migration trends and global behavior of the plume, offering a reliable baseline for large-scale assessments.
Results for the SPE11C model were obtained from experiments on two distinct HPC platforms: (i) TotalEnergies' Pangea 4, and (ii) the Frontier supercomputer at Oak Ridge National Laboratory. Only the medium and fine models were simulated on Frontier. On Pangea 4, simulations were carried out using a CPU-only configuration of GEOS, while on Frontier, we leveraged the GPU-accelerated version of GEOS to take full advantage of the system's heterogeneous architecture. Pangea 4 is an HPC cluster where each compute node is equipped with two 96-core AMD EPYC 9654 (Genoa) processors, providing a total of 192 physical cores per node, 768 GB of DDR5-4800 memory, and a peak memory bandwidth of 920 GB/s per node. The nodes are interconnected with a low-latency NVIDIA Quantum-2 InfiniBand network operating at 200 Gb/s (NDR200). Frontier is an HPE Cray EX system and the first exascale supercomputer. Each of its nodes contains a 64-core AMD EPYC 7A53 (Trento) CPU, optimized for HPC, and four AMD Instinct™ MI250X graphics processing units (GPUs), each containing 128 GB of HBM2e RAM with a peak bandwidth of 3.2 TB/s. The nodes are also equipped with 512 GB of DDR4 memory with a peak bandwidth of 205 GB/s, while interconnect is provided by four HPE Slingshot 200 Gb/s (25 GB/s) NICs for a total node-injection bandwidth of 800 Gb/s (100 GB/s).
The simulations were configured to assess weak scaling on both platforms, although with different workloads per rank. On Pangea 4, the coarse mesh simulation was executed on 12 MPI ranks on a single physical CPU, while the medium and fine mesh cases were mapped to full processors, totaling 96 MPI ranks (1 compute node) and 576 MPI ranks (3 compute nodes), respectively. This setup partitions the problem into approximately 0.9 million DoF per rank. On Frontier, the GPU-accelerated runs for the coarse, medium and fine meshes utilized 2, 16 and 96 AMD Instinct MI250X GPUs (2, 4 and 24 compute nodes), respectively. Since each MI250X card is treated as two logical GPU devices, this corresponds to 4, 32 and 192 MPI ranks, with each rank mapped to a single logical GPU. This configuration results in a constant workload of approximately 2.8 million DoF per rank for the Frontier simulations.
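The per-rank workloads quoted above follow directly from the DoF counts in Table 4 and the rank counts just described; a small bookkeeping script confirms them:

```python
# Workload-per-rank bookkeeping for the weak-scaling setup described above.
dof = {"coarse": 11_865_033, "medium": 89_371_200, "fine": 536_227_200}
pangea_ranks = {"coarse": 12, "medium": 96, "fine": 576}
frontier_ranks = {"coarse": 4, "medium": 32, "fine": 192}  # one rank per logical GPU

for mesh, n in pangea_ranks.items():
    print(f"Pangea 4  {mesh:>6}: {dof[mesh] / n / 1e6:5.2f} M DoF/rank")
for mesh, n in frontier_ranks.items():
    print(f"Frontier  {mesh:>6}: {dof[mesh] / n / 1e6:5.2f} M DoF/rank")
```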
Fig. 17 compares the time-stepping behavior of the coarse, medium and fine reservoir models across the Pangea 4 simulations. The illustrated simulation interval is divided into an injection period (0-50 years) and a subsequent post-injection period (50-1,000 years). The evolution of the time step sizes over the course of a simulation is shown in Fig. 17a for the coarse (green), medium (red) and fine (blue) models. During the injection period, all models exhibit a general trend of increasing time step sizes as the system stabilizes under the influence of continuous injection. In the post-injection period, the divergence between the models becomes more pronounced. In both periods the coarse model is able to reach the imposed time-step size limits. The medium model is also able to reach very large time steps in the post-injection period, whereas the fine model maintains significantly smaller and more variable time steps, particularly toward the end of the simulation. This information is also summarized in Fig. 17b, which shows the average time step sizes during the two intervals. During the injection phase, the coarse model achieves an average time step of 41.2 days and the medium model an average of 26.1 days, while the fine model requires roughly half of this, with an average step size of 13.8 days. In the post-injection phase, this difference is more pronounced, with the coarse model averaging 661.7 days, the medium model averaging 387.1 days, and the fine model 83.8 days. These results indicate that the fine model requires significantly smaller time steps to maintain numerical stability and convergence, particularly in the post-injection period. This is attributed to the need to resolve the finer-scale features of the convective fingers that emerge as the plume dissolves and migrates.
Fig. 18a and Fig. 18b illustrate the average number of nonlinear (Newton) iterations per time step and linear solver iterations per Newton iteration for the reservoir models on Pangea 4 and Frontier, respectively. The number of Newton iterations per time step remains nearly constant at four for all simulations, demonstrating the robustness of the nonlinear solution procedure and the effectiveness of the selected timestep sizes. In contrast, the number of linear iterations per Newton iteration increases moderately, and by a similar factor on both clusters, with refinement between the medium and fine models. This increase is attributed to the greater heterogeneity and contrast introduced by finer spatial resolution, particularly due to the emergence of more detailed convective fingering patterns. Despite this increase, the linear iteration count remains within the same order of magnitude, indicating that the solver maintains good scalability and efficiency even for high-resolution models.
Fig. 19a and Fig. 19b detail the computational performance of the linear solver. The vertical axis shows the average time in seconds with two key indicators: linear solver setup and solve phase times per nonlinear iteration. Despite the substantial growth in problem size, the solver demonstrates strong scalability across both systems. On Pangea 4, the average setup time shows only a modest increase when moving from the medium (89.4M DoF) to the fine (536.2M DoF) mesh, while the average solve time increases by about a factor of 1.7, primarily due to the higher number of iterations required at finer resolution. A similar trend is observed on Frontier, where setup times remain nearly flat, and solve times also increase by approximately 1.6x between medium and fine meshes. These results confirm that the solver maintains robust performance characteristics even at large scales, with iteration growth rather than setup overhead driving the overall increase in runtime. In addition, the increase in total simulation time observed for the fine model is not solely due to solver costs. The higher resolution also resolves more intricate dissolution structures and dynamic interactions, which demand smaller time steps to preserve numerical stability and accuracy. As a result, even though the solver itself scales efficiently, the overall runtime grows with the added physical complexity of the modeled process.
This study demonstrates the critical role of high-resolution numerical simulations in modeling geological carbon sequestration, particularly the convective dissolution of CO2 in saline aquifers. Based on the GEOS simulation framework, this research utilizes the SPE11B (2D) and SPE11C (3D) benchmark cases to evaluate the effects of mesh refinement on the accuracy of plume evolution and CO2 storage predictions.
The results show that mesh resolution significantly influences the ability to capture gravity-driven fingering, one of the primary mechanisms for long-term trapping. In both 2D and 3D simulations, finer meshes reveal more intricate and numerically refined finger structures, which enhance the predicted dissolution rate and storage efficiency of CO2. However, even with extreme refinement, full spatial convergence is not achieved in long-term simulations, highlighting the computational challenges of modeling such complex subsurface processes. Numerical experiments on a simpler geometry highlight the interaction between mesh size and physical diffusion and provide estimates of the resolution required to fully resolve the phenomenon of gravity fingering. The comparison between 2D and 3D models reveals that 3D simulations provide a more detailed representation of lateral flow dynamics and interfacial mixing. This underscores the limitations of 2D models in capturing the full complexity of subsurface flow and the necessity of 3D modeling for more complete assessments. Consequently, compared to their 3D counterparts, 2D simulations underestimate dissolution due to this inability to capture complex lateral flow interactions.
From a computational point of view, the results highlight the robust performance and versatility of the employed open-source framework. The simulations presented in this work executed successfully on a wide spectrum of HPC environments, from cloud-based virtual machines to on-premise CPU clusters and leadership-class, GPU-based supercomputers. The weak-scaling analysis, in particular, showcases the linear solver's efficiency. For the SPE11C problem, a six-fold increase in degrees of freedom led to only a marginal increase in the computational cost per iteration. This scalability is evident on both CPU and GPU architectures, with the GPU implementation delivering substantial performance improvements and demonstrating the framework's capability to effectively leverage modern heterogeneous systems.
To further advance the predictive capabilities and computational efficiency of CO2 storage simulations, several avenues of future research can be explored. Developing predictive upscaling techniques that preserve key physical phenomena, such as convective fingering and phase partitioning, will be essential to bridge the gap between fine-scale accuracy and field-scale feasibility. Incorporating thermal and dispersion effects into the modeling framework will allow for more realistic simulations, especially in heterogeneous formations where temperature gradients and molecular diffusion play a significant role. These enhancements will improve the accuracy of long-term basin-scale simulations for CO2 storage with both physical fidelity and computational tractability.
The method presented in this work was implemented in the GEOS open source simulation framework (Settgast et al. 2024). The simulations presented were performed using version 1.0.1 of the GEOS software, and examples of SPE11 input data are included as part of the GEOS documentation (GEOS Development Team 2025).
This work was funded by TotalEnergies and Chevron through the FC-MAELSTROM Project. Portions of this work were performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. We thank François Hamon for preliminary work on the SPE11B and SPE11C cases. We thank Olav Møyner for his insights and fruitful discussions.
Aziz, K. and Settari, A. 1979. Petroleum Reservoir Simulation. Elsevier Applied Science Publishers, London, UK.
Beckingsale, D. A., Burmark, J., Hornung, R., Jones, H., Killian, W., Kunen, A. J., Pearce, O., Robinson, P., Ryujin, B. S., and Scogland, T. R. W. 2019. RAJA: Portable performance for large-scale scientific applications. In 2019 IEEE/ACM International Workshop on Performance, Portability and Productivity in HPC (P3HPC), pages 71–81. doi: 10.1109/P3HPC49587.2019.00012.
Beckingsale, D. A., McFadden, M. J., Dahm, J. P. S., Pankajakshan, R., and Hornung, R. D. 2020. Umpire: Application-focused management and coordination of complex hierarchical memory. IBM Journal of Research and Development, 64(3/4):15:1–15:10. doi: 10.1147/JRD.2019.2954403.
Brooks, R. H. and Corey, A. T. 1966. Properties of porous media affecting fluid flow. Journal of the irrigation and drainage division, 92(2):61–88.
Bui, Q. M., Hamon, F. P., Castelletto, N., Osei-Kuffuor, D., Settgast, R. R., and White, J. A. 2021. Multigrid reduction preconditioning framework for coupled processes in porous and fractured media. Computer Methods in Applied Mechanics and Engineering, 387:114111. doi: 10.1016/j.cma.2021.114111.
Bui, Q. M., Osei-Kuffuor, D., Castelletto, N., and White, J. A. 2020. A scalable multigrid reduction framework for multiphase poromechanics of heterogeneous media. SIAM Journal on Scientific Computing, 42(2):B379–B396. doi: 10.1137/19M1256117.
Cao, H., Tchelepi, H. A., Wallis, J. R., and Yardumian, H. E. 2005. Parallel scalable unstructured CPR-type linear solver for reservoir simulation. In Proceedings - SPE Annual Technical Conference and Exhibition, Dallas, TX, USA. doi: 10.2118/96809-MS.
Duan, Z. and Sun, R. 2003. An improved model calculating CO2 solubility in pure water and aqueous NaCl solutions from 273 to 533 K and from 0 to 2000 bar. Chemical Geology, 193(3):257–271. doi: 10.1016/S0009-2541(02)00263-2.
Elenius, M. T. and Johannsen, K. 2012. On the time scales of nonlinear instability in miscible displacement porous media flow. Computational Geosciences, 16(4):901–911. doi: 10.1007/s10596-012-9294-2.
Faigle, B., Helmig, R., Aavatsmark, I., and Flemisch, B. 2014. Efficient multiphysics modelling with adaptive grid refinement using a MPFA method. Computational Geosciences, 18(5):625–636. doi: 10.1007/s10596-014-9407-1.
Falgout, R. D. and Yang, U. 2002. hypre: a library of high performance preconditioners. In Lecture Notes in Computer Science, pages 632–641. doi: 10.1007/3-540-47789-6_66.
Fenghour, A., Wakeham, W. A., and Vesovic, V. 1998. The viscosity of carbon dioxide. Journal of physical and chemical reference data, 27(1):31–44.
Flemisch, B., Nordbotten, J. M., Fernø, M., Juanes, R., Both, J. W., Class, H., Delshad, M., Doster, F., Ennis-King, J., Franc, J., et al. 2024. The FluidFlower validation benchmark study for the storage of CO2. Transport in Porous Media, 151(5):865–912.
García, J. E. 2001. Density of aqueous solutions of CO2. Earth Sciences Division, Lawrence Berkeley National Laboratory, Berkeley.
GEOS Development Team 2025. GEOS Documentation. https://geosx-geosx.readthedocs-hosted.com/en/latest/index.html. Accessed: 2025-09-03.
Hadjisotiriou, G., Sass, J., Wapperom, M., Novikov, A., and Voskov, D. V. 2025. SPE11: Convergence study and extension to realistic physics. In SPE Reservoir Simulation Conference, 25RSC. SPE. doi: 10.2118/223922-MS.
Holme, K., Lie, K.-A., Møyner, O., and Johansson, A. 2025. Grid-orientation effects in the 11th SPE comparative solution project using unstructured grids and consistent discretizations. In SPE Reservoir Simulation Conference, 25RSC. SPE. doi: 10.2118/223885-ms.
International Energy Agency 2025. Global Energy Review 2025. Accessed: 2025-05-28.
Kovscek, A. R., Nordbotten, J. M., and Fernø, M. A. 2024. Scaling up FluidFlower results for carbon dioxide storage in geological media. Transport in Porous Media, 151(5):975–1002. doi: 10.1007/s11242-023-02046-9.
Krevor, S., de Coninck, H., Gasda, S. E., Singh Ghaleigh, N., de Gooyert, V., Hajibeygi, H., Juanes, R., Neufeld, J., Roberts, J. J., and Swennenhuis, F. 2023. Subsurface carbon dioxide and hydrogen storage for a sustainable energy future. Nature Reviews Earth & Environment, 4:102–118. doi: 10.1038/s43017-022-00376-8.
Lemmon, E. W., McLinden, M. O., and Friend, D. G. 1998. Thermophysical properties of fluid systems. In Linstrom, P. J. and Mallard, W. G., editors, NIST Chemistry webbook, NIST Standard reference database, volume 69. National Institute of Standards and Technology, Gaithersburg MD, 20899, USA.
Lie, K.-A. 2019. An Introduction to Reservoir Simulation Using MATLAB/GNU Octave: User Guide for the MATLAB Reservoir Simulation Toolbox (MRST). Cambridge University Press. doi: 10.1017/9781108591416.
Magri, V., Castelletto, N., Osei-Kuffuor, D., Settgast, R., Cusini, M., Tobin, B., Aronson, R., and Tomin, P. 2025. A Multigrid Reduction Framework for Efficient and Scalable Multiphysics Simulations. In Proceedings - SPE Reservoir Simulation Conference, Galveston, TX, USA. doi: 10.2118/223898-MS.
Microsoft 2025. Hbv3-series - azure virtual machines. https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/high-performance-compute/hbv3-series. Accessed: June 22, 2025.
Nordbotten, J. M., Fernø, M., Flemisch, B., Kovscek, A. R., and Lie, K.-A. 2023. The 11th Society of Petroleum Engineers comparative solution project: Problem definition. SPE Journal, 29(5):2507–2524. doi: 10.2118/218015-PA.
Pau, G. S., Bell, J. B., Pruess, K., Almgren, A. S., Lijewski, M. J., and Zhang, K. 2010. High-resolution simulation and characterization of density-driven flow in CO2 storage in saline aquifers. Advances in Water Resources, 33(4):443–455.
Pruess, K. and Zhang, K. 2008. Numerical modeling studies of the dissolution-diffusion-convection process during CO2 storage in saline aquifers. Technical report, Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). doi: 10.2172/944124.
Riaz, A., Hesse, M., Tchelepi, H. A., and Orr, F. M. 2006. Onset of convection in a gravitationally unstable diffusive boundary layer in porous media. Journal of Fluid Mechanics, 548(1):87. doi: 10.1017/s0022112005007494.
Ries, M., Trottenberg, U., and Winter, G. 1983. A note on MGR methods. Linear Algebra and its Applications, 49:1 – 26. doi: 10.1016/0024-3795(83)90091-5.
Saad, Y. and Schultz, M. H. 1986. GMRES: A Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems. SIAM J. Sci. Stat. Comput., 7(3):856–869. doi: 10.1137/0907058.
Settgast, R. R., Aronson, R. M., Besset, J. R., Borio, A., Bui, Q. M., Byer, T. J., Castelletto, N., Citrain, A., Corbett, B. C., Corbett, J., Cordier, P., Cremon, M. A., Crook, C. M., Cusini, M., Fei, F., Frambati, S., Franc, J., Franceschini, A., Frigo, M., Fu, P., Gazzola, T., Gross, H., Hamon, F., Han, B. M., Hao, Y., Hasanzade, R., Homel, M., Huang, J., Jin, T., Ju, I., Kachuma, D., Karimi-Fard, M., Kim, T., Klevtsov, S., Lapene, A., Magri, V. A. P., Mazuyer, A., N’diaye, M., Osei-Kuffuor, D., Povolny, S., Ren, G., Semnani, S. J., Sherman, C. S., Rey, M., Tchelepi, H. A., Tobin, W. R., Tomin, P., Untereiner, L., Vargas, A., Waziri, S., Wen, X., White, J. A., and Wu, H. 2024. GEOS: A performance portable multi-physics simulation framework for subsurface applications. Journal of Open Source Software, 9(102):6973. doi: 10.21105/joss.06973.
Settgast, R. R., Fu, P., Walsh, S. D., White, J. A., Annavarapu, C., and Ryerson, F. J. 2017. A fully coupled method for massively parallel simulation of hydraulically driven fractures in 3-dimensions. International Journal for Numerical and Analytical Methods in Geomechanics, 41(5):627–653. doi: 10.1002/nag.2557.
Solovský, J. and Firoozabadi, A. 2025. Dynamic adaptive and fully unstructured tetrahedral gridding: Application to CO2 sequestration with consideration of full fluid compressibility. Journal of Computational Physics, 521:113556. doi: 10.1016/j.jcp.2024.113556.
Span, R. and Wagner, W. 1996. A new equation of state for carbon dioxide covering the fluid region from the triple-point temperature to 1100 K at pressures up to 800 MPa. Journal of physical and chemical reference data, 25(6):1509–1596.
Wallis, J. R. 1983. Incomplete Gaussian Elimination as a Preconditioning for Generalized Conjugate Gradient Acceleration. In Proceedings - SPE Reservoir Simulation Symposium, San Francisco, CA, USA. doi: 10.2118/12265-MS.
Wallis, J. R., Kendall, R. P., and Little, T. E. 1985. Constrained Residual Acceleration of Conjugate Residual Methods. In Proceedings - SPE Reservoir Simulation Symposium, Dallas, TX, USA. doi: 10.2118/13536-MS.
Wang, L., Osei-Kuffuor, D., Falgout, R., Mishev, I., and Li, J. 2017. Multigrid Reduction for Coupled Flow Problems with Application to Reservoir Simulation. In Proceedings - SPE Reservoir Simulation Conference, Montgomery, TX, USA. doi: 10.2118/182723-MS.
Xu, X., Chen, S., and Zhang, D. 2006. Convective stability analysis of the long-term storage of carbon dioxide in deep saline aquifers. Advances in Water Resources, 29(3):397–407. doi: 10.1016/j.advwatres.2005.05.008.