The process of mapping ocean bathymetry from measurements of surface waves is ill-posed: it is characterized by sensitivity to small amounts of noise in the system and is susceptible to the instability inherent in the inversion process (Ozisik and Orlande, 2000). Data assimilation is one such inversion process, where observations of a true state are combined with a mathematical model in order to recover missing data governing the system evolution. The verification of a variational adjoint-based scheme of the kind developed in this paper is challenging, especially since there is no general analytical solution available for the nonlinear shallow water equations, even in the case of flat bathymetry. Despite this, accurate representation of bathymetry data is essential when predicting tsunamis, since bathymetry variation modifies speed, direction, and stability of the propagating wave (Craig and Sulem, 2009). Bathymetry data can be either static or dynamic, where the latter includes shifts in the ocean floor due to seismic activity.
The high number of degrees of freedom in this problem makes it difficult to determine the criteria for optimal bathymetry reconstruction. We must consider the shallow water system governing tsunami propagation, as well as parameters such as the amplitude and shape of the bathymetry, and the initial conditions of the surface waves. This is in addition to calibration of the optimization scheme, and deriving optimal configurations of the observation network.
In this study we address two complementary questions:
- How accurately can bathymetry data be reconstructed from surface wave measurements, and what determines the accuracy?
- How accurate does the bathymetry data need to be in order to model the surface waves accurately?
We attempt to address these questions for an idealised 1-D case, as a first step towards more realistic models. We quantify the key relationships between the initial conditions and the bathymetry amplitude relative to the average fluid depth. We also analyse the effect of the number and locations of the observations on the convergence properties of the assimilation. We implement a Sobolev gradient smoothing technique (effectively a low-pass filter) within our optimization scheme, and illustrate its ability to reduce the small-scale noise present in the bathymetry reconstruction. We then investigate the consequences of errors in the bathymetry data on the resulting surface wave by observing trends in the surface wave propagation error as the amplitude of the initial condition, the amplitude of the bathymetry, and the number of observation points are varied.
Section 2 provides a review of the effect of bathymetry on surface waves and of efforts to map ocean bathymetry, highlighting empirical, numerical and theoretical approaches. In Sec. 3, we provide a concise overview of the shallow water system, derive the first order adjoint data assimilation scheme using optimal control theory, and summarise the algorithm. Section 4 presents preliminary results for different choices of initial conditions and exact bathymetry for the data assimilation scheme. This investigation reveals small-scale noise in the optimal reconstruction. To reduce this noise, Sec. 5 proposes a low-pass filter, which effectively removes higher frequencies in our reconstruction by increasing the regularity of our estimate at each iteration, raising it from the space $L^2$ to $H^2$. We discuss results of the smoothed optimisation scheme and illustrate the removal of noise in the reconstructed bathymetry in several test cases. Section 6 analyses the relationships between the amplitudes of the initial conditions and bathymetry relative to the average depth, and attempts to formulate a relationship summarising certain necessary conditions for convergence. We also analyse the effect of the number of observation points on the optimal reconstruction. Finally, we provide a sensitivity analysis of the surface wave to errors in the bathymetry reconstruction.
We observe that the low-pass filter effectively reduces small-scale noise in the bathymetry reconstruction for different bathymetry shapes. Additionally, a necessary condition for convergence is that the amplitude of the initial conditions be at least two orders of magnitude smaller than the amplitude of the bathymetry. Convergence is significantly improved by increasing the number of observation points; however, this is not a sufficient condition for convergence. Finally, we show qualitatively that the surface wave exhibits low sensitivity to bathymetry reconstruction error, motivating further insight into the sensitivity of the reconstruction error and resulting surface wave error to model parameters. In Sec. 7 we use global sensitivity analysis (GSA) to derive sensitivity indices quantifying the influence of (i) bathymetry position relative to the observations, and (ii) the amplitudes of the initial condition and bathymetry, on the error in bathymetry reconstruction and the surface wave. We focus our analysis on a localised surface wave propagating over a compact bathymetry. Using density-based GSA, we quantify and subsequently rank the influence of these input factors on the bathymetry and surface wave errors, respectively.
We summarise our conclusions in Sec. 8, and provide insights that motivate further analyses for the full 2-D model and more realistic settings for tsunami models.
Review of bathymetry effects and previous work
Bathymetry can have a significant impact on propagating shallow water waves by altering the depth-dependent wave speed $c = \sqrt{gh}$, where $h = H + \eta - \beta$ is the total fluid depth as a function of the mean depth $H$, free surface perturbation $\eta$, and bathymetry $\beta$. A demonstration of bathymetry effects is given in Fig. 1, where we show the surface wave with and without bathymetry. The speed and height of the surface wave are modified by the bathymetry. This is because a tsunami's energy flux remains relatively constant, and so as the tsunami's speed varies, so does its height (shoaling). In two dimensions the direction of propagation is also changed, since the characteristics are no longer straight lines. Thus bathymetry can significantly impact the arrival time of tsunami waves, and coastal communities can receive different amounts of tsunami energy based on local bathymetry effects. An accurate and detailed map of the bathymetry is therefore an essential component of accurate tsunami models.
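The scales involved can be illustrated with a short calculation. The numbers below (a 4000 m deep ocean, a 1 m wave shoaling to 10 m depth) are illustrative, and the $h^{-1/4}$ amplitude scaling is the classical Green's law consequence of the constant-energy-flux argument above, not a formula from this paper.

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def wave_speed(H, eta, beta):
    """Shallow water wave speed c = sqrt(g h), with h = H + eta - beta."""
    h = H + eta - beta
    return np.sqrt(g * h)

def shoaled_amplitude(a0, h0, h1):
    """Green's law: for constant energy flux over slowly varying depth,
    wave amplitude scales as h^(-1/4)."""
    return a0 * (h0 / h1) ** 0.25

# A 1 m wave in 4000 m of water slows and grows as it reaches 10 m depth
c_deep = wave_speed(4000.0, 0.0, 0.0)            # ~198 m/s
a_coast = shoaled_amplitude(1.0, 4000.0, 10.0)   # ~4.5 m
```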
Attempts to create an accurate map of oceanic bathymetry have been made by direct measurements, and indirectly by using information from propagating surface waves. Direct measurement includes platforms like ship-based high-frequency sonar. However, many of these methods are either too costly or have poor spatial resolution. It is often easier to measure waves propagating on the free surface, and to use this information to create a map of the bottom topography from classical wave theory. This is an inverse problem. With improvements in high-resolution satellite imagery in recent years, there is increasing interest in using inverse methods to provide accurate reconstructions of bathymetry information.
Wunsch (1996) provides a detailed review of inverse methods for ocean circulation models, describing methods for both deep and shallow water. In coastal regions, depth-inversion methods have been refined to account for observational data in areas with large interference from human activities and muddy water (Ge et al., 2020). Additionally, inverse methods are also routinely used in open channel flow modelling, where bed topography is approximated using surface measurements (Gessese and Sellier, 2012).
A longstanding approach to solving this inverse problem uses the dispersion relation of surface waves. Earlier works such as Lubard et al. (1980) used measurements of the frequency-wavenumber spectrum made via optical images, obtained using cameras mounted on an oceanographic research tower. Since then, various methods using dispersion relations have been investigated, where bathymetry is measured by fitting the theoretical dispersion relation for gravity waves (where depth is a system parameter) and derived using inversion formulas.
More recent works include Dugan (1997), extended by Piotrowski and Dugan (2002), where image sequences of shoaling ocean waves taken from an aircraft are used to retrieve maps of water depth via the linear dispersion relation. The accuracy of this method can be as high as 5% if the waves are reasonably linear. However, Grilli (1998) builds on Dugan (1997), arguing that the latter is limited due to its neglect of amplitude dispersion effects, which accumulate through increasing nonlinearity as waves approach breaking in shallow water. He compares the linear frequency dispersion to a third order polynomial relationship for the wave speed c as a function of wavenumber k and depth h, showing that due to amplitude dispersion effects, linear wave theory may greatly underestimate c, and lead to poorer bathymetry estimates from inversion formulas based on a linear dispersion relation.
These inversion algorithms are calibrated based on results of simulated periodic waves over mild slopes in a two-dimensional 'numerical wave tank'. This fully nonlinear potential flow (FNPF) numerical wave tank methodology was developed by Grilli and Subramanya (1996) with wave generation and absorption methods, to calculate speed and height variation for a number of shoaling waves over a range of slopes, as shown in Grilli (1998). It was demonstrated in Grilli (1998) to have higher accuracy in coastal simulations than the nonlinear shallow water equations.
Tsai and Yue (1996) also demonstrated how FNPF numerical tanks allow calculation of 'numerically exact' properties of shoaling waves up to the breaking point, and can provide an accurate representation of surface waves independent of nonlinearity parameters.
While these methods are based on an empirical formula for the 2-D nonlinear inversion problem in idealised cases, Nicholls and Taber (2009) derive an inversion formula for bathymetry analytically, using the nonlinearity of the 1-D governing Euler equations for ideal fluid flow to detect bathymetry information. The governing equations for the surface wave are expressed as a Hamiltonian system, and a Dirichlet-to-Neumann operator (DNO) is applied to the system in order to remove some implicit dependencies. The result is a single equation of the wave height at the surface in terms of the bathymetry, and subsequently an inversion is derived. However, because their inversion formula is linear, they are required to assume a small amplitude for the bathymetry and their numerical results only consider normalised bathymetry amplitudes of 0.07 and 0.025.
Additionally, surface waves can also be used to characterise a rapid dynamic change in bathymetry. Jang et al. (2010) take a similar approach to Grilli (1998) for the related problem of measuring a sudden shift in the sea floor, e.g. due to seismic activity, using measurements of the surface waves. Their inversion formula, however, is based on the same approach as Dugan (1997): using the linear dispersion relation with bathymetry as a parameter, and using transforms to show that the problem becomes one of solving an integral equation involving the known surface wave data. The uniqueness of this solution is demonstrated. However, analysis shows that there is a lack of stability in the measurement of the bottom displacement, and a question as to whether it depends continuously on the wave elevation. They overcome this by using regularisation methods iteratively as a stabilisation technique, and show numerical convergence to the integral solution.
Each of the aforementioned methodologies has its strengths and weaknesses. Piotrowski and Dugan (2002) and Grilli (1998) both discuss practical measurement techniques for surface waves, whereas theoretical approaches such as Nicholls and Taber (2009), and Jang et al. (2010) show a high degree of convergence, but assume full knowledge of the surface wave, and do not address the complexities involved in obtaining accurate measurements in a real-world scenario, such as noise or incomplete measurements. Nicholls and Taber (2009) state that their future objectives aim to find an effective way of extracting wave fields from full observational data.
In summary, while the theoretical results from such models are promising, their applicability to real-world measurements has not been established. In the present study we attempt to combine theoretical results with realistic assumptions. We consider 1-D geometry as a first step to validate the basic approach and investigate fundamental questions. Assuming a finite set of observations, we analyse the effects of different bathymetry features on convergence, as well as the effect of different amplitudes and shape of the initial condition. These effects have not been considered in detail in the reviewed works.
Another issue is the difference between the inversion formulas derived for linear and nonlinear systems. Grilli (1998), and Nicholls and Taber (2009) give results accounting for nonlinearity and the resulting dispersion effects, whereas Piotrowski and Dugan (2002), and Jang et al. (2010) are restricted to linear dispersion relations for gravity waves. However, there remains the question of whether the empirical formulation of the inversion as derived by Grilli (1998) is as rigorous as the analytically derived solution of Nicholls and Taber (2009). Both assume periodic waves, but the practical limitations on the accuracy of the free surface data make it difficult to assess the relative efficacy of these two methods. Ultimately, a complete evaluation of these approaches will depend on research which systematically compares inversion formulas based on the nonlinear governing equations for free surface wave propagation, as well as addresses the practical issues of collecting realistic field wave data.
In the present study, we approach the bathymetry estimation problem from a variational data assimilation perspective, with the goal of formulating an algorithm for the 1-D nonlinear shallow water system that predicts bathymetry from a small set of observations. We neglect rotational Coriolis effects and noise, and assume static bathymetry with periodic lateral boundary conditions. Our approach does not involve inversion of the dispersion relation. Instead, we formulate an optimisation problem, seeking to minimise the error between observations and forecasts of the free surface wave. The analytical derivation of the variational algorithm differs from the empirical techniques introduced by Grilli (1998) and is derived for the infinite-dimensional case. The use of sparse observations aims to provide a relatively more realistic set of assumptions than Nicholls and Taber (2009), who require complete observations of surface wave fields.
We use the conclusions of this idealised case to better understand the role of the model parameters and the observation operator in bathymetry reconstruction. Our goal is to take the qualitative results observed here, and use them as a benchmark for more rigorous sensitivity analyses based on second order adjoint and GSA approaches. Consequently, we focus on highlighting results and techniques that can improve existing methods for accurate tsunami prediction.
Derivation of the adjoint based data assimilation scheme
The nonlinear shallow water equations (SWE) are a coupled system of equations for travelling free surface waves. They are derived from the two-dimensional Euler equations, under the assumption that the wavelength λ of free surface waves is much larger than the total ocean depth h, allowing us to average the Euler equations over the vertical dimension. The fluid column height becomes $h(x,t) = H + \eta(x,t) - \beta(x)$, where H is the average depth, η is the perturbation of the free surface, and β is the time-independent sea floor perturbation from zero, i.e. the 'bathymetry'. Appropriate vertical averaging gives the irrotational, constant density, incompressible one-dimensional nonlinear shallow water equations,
$$\partial_t \eta + \partial_x\big[(H + \eta - \beta)\,u\big] = 0, \qquad \partial_t u + u\,\partial_x u + g\,\partial_x \eta = 0. \tag{3.1}$$
(Note that in the absence of the Coriolis effect the nonlinear waves in a shallow water system are non-dispersive.) We assume that the initial conditions are compactly supported, and that the boundary conditions are periodic on some domain Ω. Our objective is to implement a variational data assimilation scheme constrained by (3.1) in order to estimate the bathymetry β(x). We wish to derive an optimal estimate of the bottom topography using a finite number of observations of the free surface perturbation η, taken for all times $t \in [0, T]$. To simplify further, we normalise the system (3.1) by the average height H and the gravitational acceleration g, such that the reference wave speed $\sqrt{gH} = 1$.
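As an illustration of the structure of the normalised system, the sketch below evaluates a shallow-water right-hand side with periodic differences. It uses collocated central differences rather than the staggered grid of the paper's numerical implementation, and assumes the standard vertically averaged form of (3.1); it is a sketch, not the paper's solver.

```python
import numpy as np

def swe_rhs(eta, u, beta, dx):
    """Tendencies of the normalised 1-D nonlinear SWE (H = 1, g = 1):
         eta_t = -( (1 + eta - beta) u )_x
         u_t   = -u u_x - eta_x
    with periodicity handled by np.roll (collocated central differences)."""
    def ddx(f):
        return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)
    h = 1.0 + eta - beta            # total fluid depth
    return -ddx(h * u), -u * ddx(u) - ddx(eta)

# A lake at rest (eta = 0, u = 0) over arbitrary bathymetry is steady
x = np.linspace(-3.0, 3.0, 256, endpoint=False)
beta = 0.2 * np.exp(-x**2)
deta, du = swe_rhs(np.zeros_like(x), np.zeros_like(x), beta, x[1] - x[0])
```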
We quantify our objective as the partial differential equation (PDE) constrained minimisation of the cost function
$$\mathcal{J}(\beta) = \frac{1}{2} \int_0^T \sum_{i=1}^{N_{obs}} \big[\eta(x_i, t; \beta) - \eta_{obs}(x_i, t)\big]^2 \, dt, \tag{3.2}$$
where $\eta_{obs}(x_i, t)$ denotes the observation of the free surface at the point $x_i$.
This is equivalent to solving the gradient minimisation problem $\nabla_\beta \mathcal{J}(\beta) = 0$.
As direct computation of this optimization problem is too computationally expensive, we formulate an algorithm that allows us to extract an explicit expression for the gradient $\nabla_\beta \mathcal{J}$ and subsequently find the minimiser more efficiently. As a first step, we formulate the first variation of $\mathcal{J}$ given some arbitrary bathymetry perturbation $\beta'$ of size ε. This is given by the Gâteaux derivative,
$$\mathcal{J}'(\beta; \beta') = \lim_{\varepsilon \to 0} \frac{\mathcal{J}(\beta + \varepsilon \beta') - \mathcal{J}(\beta)}{\varepsilon}. \tag{3.5}$$
Using a perturbation expansion in ε about β of the term $\eta(\beta + \varepsilon\beta')$, and truncating at $O(\varepsilon)$, we can reformulate (3.5) as
As the Gâteaux derivative is a directional derivative in the direction of the perturbation $\beta'$, we can express (3.6) as the inner product between the gradient $\nabla_\beta \mathcal{J}$ and $\beta'$,
Then the following forms of the Gâteaux derivative $\mathcal{J}'(\beta; \beta')$ are equivalent,
Therefore, finding an equivalent expression for the last term in (3.10) will enable us to extract an expression for $\nabla_\beta \mathcal{J}$. This is achieved by formulating the Lagrangian associated with the linearised system, using some arbitrary adjoint variables:
Integrating by parts in time and space reduces (3.11) to
Due to periodicity, the boundary terms vanish. If we choose the adjoint variables as the solution to
(where the Dirac delta term in 3.13a is the observation operator mapping η onto the observation space), then (3.12) is reduced to
Losch and Wunsch (2003) use a similar adjoint-based minimisation for their bathymetry detection analysis; however, they do not consider the infinite-dimensional case as we have here. The benefit of our approach is that it is independent of the discretisation used in its numerical implementation.
To verify that our formulation for $\nabla_\beta \mathcal{J}$ is correct, we define the kappa test
$$\kappa(\varepsilon) = \frac{\mathcal{J}(\beta + \varepsilon \beta') - \mathcal{J}(\beta)}{\varepsilon \left\langle \nabla_\beta \mathcal{J}, \beta' \right\rangle}, \tag{3.17}$$
which should approach 1 as ε → 0 if the gradient is computed correctly.
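A minimal sketch of the kappa diagnostic, applied to a toy quadratic cost with a known gradient (the names `J` and `gradJ` are placeholders standing in for the adjoint-derived cost and gradient):

```python
import numpy as np

def kappa(J, gradJ, beta, beta_p, eps):
    """kappa(eps) = [J(beta + eps*beta') - J(beta)] / (eps <gradJ(beta), beta'>).
    Values approaching 1 as eps -> 0 indicate a consistent gradient."""
    num = J(beta + eps * beta_p) - J(beta)
    den = eps * np.dot(gradJ(beta), beta_p)
    return num / den

# Toy quadratic cost with exact gradient (stand-in for the adjoint gradient)
J = lambda b: 0.5 * np.dot(b, b)
gradJ = lambda b: b
beta = np.linspace(1.0, 2.0, 64)
beta_p = np.ones(64)
kappas = [kappa(J, gradJ, beta, beta_p, eps) for eps in (1e-1, 1e-3, 1e-6)]
# kappas approaches 1 as eps decreases
```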
The minimiser is computed using a steepest descent algorithm given some starting guess $\beta^{(0)}$. Using a line minimisation algorithm to find the optimal step size $\tau_n$ at each iteration, this can be summarised as
$$\beta^{(n+1)} = \beta^{(n)} - \tau_n \nabla_\beta \mathcal{J}\big(\beta^{(n)}\big).$$
The optimal bathymetry reconstruction is the fixed point of this iterative scheme. The steps for the process are outlined in Algorithm 1.
Data Assimilation Algorithm for Bathymetry Estimation
1: Pick an initial estimate $\beta^{(0)}$ for the bathymetry.
2: Solve the initial value problem (3.1) from t = 0 to t = T.
3: Solve the adjoint problem backwards in time from t = T to t = 0 to find the adjoint variables.
4: Approximate the gradient $\nabla_\beta \mathcal{J}$ at every point in the spatial domain Ω.
5: Compute the optimal step size τn through a line minimisation algorithm.
6: Use a gradient descent step to compute the updated estimate $\beta^{(n+1)}$.
7: Repeat until the convergence criterion is satisfied for some small tolerance ε.
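The descent loop can be sketched generically as below; the sampled line search and the toy quadratic cost are illustrative stand-ins for the paper's actual line minimisation and adjoint-based gradient.

```python
import numpy as np

def line_min(J, b, g, taus):
    """Crude line minimisation: pick the step from `taus` that most decreases J."""
    return taus[int(np.argmin([J(b - t * g) for t in taus]))]

def steepest_descent(J, gradJ, b0, n_iter=200):
    """beta^(n+1) = beta^(n) - tau_n * gradJ(beta^(n)), with tau_n from a
    line search, mirroring the descent steps of the algorithm above."""
    taus = np.logspace(-4, 0, 30)
    b = b0.copy()
    for _ in range(n_iter):
        g = gradJ(b)
        if np.linalg.norm(g) < 1e-12:   # converged
            break
        b = b - line_min(J, b, g, taus) * g
    return b

# Toy quadratic cost whose minimiser is known exactly
target = np.array([1.0, -2.0, 0.5])
J = lambda b: 0.5 * np.sum((b - target) ** 2)
gradJ = lambda b: b - target
b_opt = steepest_descent(J, gradJ, np.zeros(3))
```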
Initial results using L2 gradients
To verify the numerical implementation of Algorithm 1, we consider three cases, characterised by different true bathymetries and initial conditions. These cases are summarised in Table 2, and shown in Fig. 1. They were chosen to analyse convergence in scenarios where the supports of the initial conditions and of the bathymetry overlap or are disjoint. Additionally, we want to evaluate the effect of a surface wave with compactly supported initial conditions (Cases I and II) or periodic initial conditions (Case III). We consider Gaussian and sandbar profiles for the bathymetry, similar to Nicholls and Taber (2009), as a 1-D approximation for the peaks and ridges characterising ocean bathymetry. The main application of this study is tsunami modelling given some optimal reconstruction of missing bathymetry data, hence we are primarily interested in a non-periodic propagating surface wave, as in Case I and Case II. However, including the periodic initial conditions of Case III in our analysis helps us understand the effects of the observation operator and model parameters on the optimal reconstruction.
We implement these schemes using a standard second order finite difference–finite volume approximation on a staggered grid in space, and a four-stage third-order Runge–Kutta scheme (Spiteri and Ruuth, 2002) in time. The resolution of the spatial grid is N = 512 on the spatial domain $x \in [-L, L]$, where L = 3. Velocity u(x, t) is located at cell edges, and the bathymetry and free surface perturbation are located at cell nodes. Periodic boundary conditions are imposed at $x = \pm L$. We assume we have no background information for the bathymetry a priori, and set the initial estimate $\beta^{(0)} = 0$.
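The four-stage third-order scheme cited above can be written, in the SSP form given by Spiteri and Ruuth (2002), as the following sketch for a generic right-hand side `f`; this is an illustration of the time stepper, not the paper's full solver.

```python
import numpy as np

def ssprk43_step(f, u, dt):
    """One step of the four-stage, third-order SSP Runge-Kutta scheme
    of Spiteri and Ruuth (2002), applied to du/dt = f(u)."""
    u1 = u + 0.5 * dt * f(u)
    u2 = u1 + 0.5 * dt * f(u1)
    u3 = (2.0 * u + u2 + 0.5 * dt * f(u2)) / 3.0
    return u3 + 0.5 * dt * f(u3)

# Integrate du/dt = -u over t in [0, 1]; exact solution is exp(-1)
u, dt = 1.0, 0.01
for _ in range(100):
    u = ssprk43_step(lambda y: -y, u, dt)
```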
The system is integrated for $t \in [0, T]$. In Kevlahan et al. (2019), the variational data assimilation system for the initial conditions reconstruction is integrated to a final time T = 2, chosen such that the free surface wave does not reach the boundary. This was appropriate when the objective was to reconstruct the initial conditions of a tsunami wave in real time. For bathymetry assimilation we have no constraints on the assimilation time, so the final time is chosen large enough that boundary effects were present. Periodic boundary conditions are artificial in the sense that they are not a substitute for realistic ocean conditions and are not intended to simulate coastlines. However, they are appropriate in our idealised 1-D case as they simplify the dynamics and do not violate underlying principles. In our case we restrict the choice of free surface wave to (i) a travelling Gaussian, and (ii) a sinusoidal wave. The latter is periodic and is not significantly altered by the boundary conditions. For the Gaussian initial conditions, the wave is effectively reflected at the boundary, re-entering as a wave travelling in the opposite direction. We observed no errors generated at the boundaries that propagated inside the domain and impacted results. The main conclusions of this study were not altered by increasing the assimilation time beyond T = 2; however, convergence of the reconstructed bathymetry improved with the longer assimilation time.
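The relative L2 error used as a convergence diagnostic in the following results can be computed with a short utility; the Gaussian 'true' bathymetry and the noise level here are illustrative, not one of the actual test cases.

```python
import numpy as np

def relative_l2_error(beta_rec, beta_true, dx):
    """Relative L2 error ||beta_rec - beta_true|| / ||beta_true||."""
    diff = np.sqrt(np.sum((beta_rec - beta_true) ** 2) * dx)
    return diff / np.sqrt(np.sum(beta_true ** 2) * dx)

# Same grid as the experiments: N = 512 points on [-L, L) with L = 3
N, L = 512, 3.0
x = np.linspace(-L, L, N, endpoint=False)
dx = 2 * L / N
beta_true = 0.2 * np.exp(-(x / 0.5) ** 2)                   # illustrative shape
beta_rec = beta_true + 0.01 * np.sin(40 * np.pi * x / L)    # small-scale noise
err = relative_l2_error(beta_rec, beta_true, dx)
```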
The results in Fig. 2 illustrate the convergence of the data assimilation scheme for each case outlined in Table 2, as well as the convergence of the kappa test (3.17). In all cases we consider a relative L2 error of less than 10% as the threshold for 'converged' bathymetry. Let us first consider the convergence of the cost function (3.2). Ultimately, the purpose of the optimization scheme is to minimise the error between the observations of the true height perturbation and the approximate solution given the optimal bathymetry. Minimising the cost function is necessary for accurate reconstruction of the bathymetry. However, due to the ill-posed nature of the problem, convergence of the cost function is not sufficient to guarantee accurate bathymetry reconstruction. Indeed, the results highlight the difference between these two objectives. We see in Fig. 2b that the relative decrease in the cost function over 500 iterations of Algorithm 1 is greatest for Case I, and thus the assimilation algorithm converges successfully for this case. In contrast, the relative decreases in the cost function for Cases II and III are considerably smaller. However, the errors in the reconstructed bathymetry corresponding to these cost functions in Fig. 2c have very different behaviour.
It is clear that for each case convergence of the cost function does not necessarily correspond to convergence to the true bathymetry. Only in Case I does the reconstructed bathymetry converge to the true bathymetry, although even in this case the error is relatively large, at 0.04. Figure 2d–f shows the reconstructed bathymetry for each case. It is immediately clear that the primary source of error is small-scale noise. For Case I we see that, although the Gaussian bathymetry is well resolved, there is still small-scale noise present in the tails. However, for Case III, where we assume periodic initial conditions, the error in the reconstructed bathymetry is large, and we conclude that the observability of the bathymetry by sensors measuring a sinusoidal propagating surface wave is significantly lower than that of a travelling Gaussian wavefront, as in Case I.
For Case II, it is interesting to note that the noise is larger scale on the front slope of the bathymetry (x < 0) than on the back slope (x > 0). Since the observation points are placed to the right-hand side of the Gaussian initial conditions (as shown in Fig. 1), there are no observations of the left-propagating wave. Additionally, unlike Cases I and III, the observations are not positioned before the support of the bathymetry. This suggests that observations for x > 0 may not be able to sufficiently capture the bathymetry effects on the left-propagating wave. We provide a more quantitative analysis of the sensitivity to the placement of the observations and the resulting effect on reconstruction error in Sec. 7.
We expect that if the kappa test converges, the error of the associated variational algorithm should also converge (although this is not guaranteed). However, we see in Fig. 2a that the best convergence of the kappa test is for Case III, whereas Fig. 2c shows that Case III actually has the least accurate bathymetry reconstruction.
In the following section we show that an optimisation scheme where the gradient of the cost function (3.16) exists only in $L^2$ is not smooth enough to obtain classical solutions to (3.1), and thus in Sec. 5 we derive analytically a low-pass filter by smoothing our gradient such that $\nabla_\beta \mathcal{J} \in H^2$, and provide results of the numerical implementation. Using smoother H2 gradients largely eliminates the small-scale errors in the reconstructed bathymetry seen here with L2 gradients.
Smoothing using Sobolev gradients
In system (3.1), the bathymetry is incorporated via the flux term $(H + \eta - \beta)u$. Thus, a classical solution to this system requires smoothness not just of β, but also of its first derivative. Because of this, we require the gradient (3.16) to be in the Sobolev space $H^2(\Omega)$, which imposes smoothness conditions on the derivatives of the gradient as well as on β. The following derivation of the corresponding Sobolev gradient is adapted from Protas (2008).
$H^2(\Omega)$ is a Sobolev space equipped with the inner product
$$\langle z_1, z_2 \rangle_{H^2} = \int_\Omega z_1 z_2 + l_1^2 \,(\partial_x z_1)(\partial_x z_2) + l_2^4 \,(\partial_x^2 z_1)(\partial_x^2 z_2) \, dx, \tag{5.1}$$
where $l_1$ and $l_2$ are length-scale parameters.
In order to extract the gradient using the equivalence of inner products (as we did in 3.15), we need to isolate the perturbation $\beta'$ in (5.2). We use integration by parts as before on the second and third terms in (5.2).
We impose periodic conditions in space to eliminate the resulting boundary terms, and subsequently we have
Since this holds for every arbitrary perturbation $\beta'$, the process of smoothing the gradient from $L^2$ to $H^2$ is equivalent to solving an inhomogeneous boundary value problem, which in Fourier space simplifies to
$$\widehat{\nabla^{H^2} \mathcal{J}}(k) = \frac{\widehat{\nabla^{L^2} \mathcal{J}}(k)}{1 + l_1^2 k^2 + l_2^4 k^4}. \tag{5.4}$$
This is effectively a low-pass filter applied to the L2 gradient. We can make this filter as aggressive as needed by 'tuning' l1 and l2, where the case $l_1 = l_2 = 0$ is equivalent to the L2 gradient. For our smoothed data assimilation algorithm, we set $l_1 = 0$ and calibrate l2, as this gives us the desired regularity and reduces the number of degrees of freedom in the problem. We now consider the numerical implementation of this updated optimization scheme, summarised in Algorithm 2.
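The Fourier-space smoothing can be sketched as below, assuming the symbol $1 + l_1^2 k^2 + l_2^4 k^4$ (the powers follow a common choice of H2 inner product); the value l2 = 0.05 and the test signal are illustrative, not the calibrated settings of the paper.

```python
import numpy as np

def sobolev_smooth(grad_l2, length, l1=0.0, l2=0.05):
    """Smooth an L2 gradient into an H2 gradient with periodic BCs by
    dividing by the symbol 1 + l1^2 k^2 + l2^4 k^4 in Fourier space."""
    n = grad_l2.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)   # angular wavenumbers
    symbol = 1.0 + (l1 * k) ** 2 + (l2 * k) ** 4
    return np.real(np.fft.ifft(np.fft.fft(grad_l2) / symbol))

# Smooth signal plus grid-scale noise on a periodic domain of length 6
length, n = 6.0, 512
x = np.linspace(-length / 2, length / 2, n, endpoint=False)
clean = np.exp(-x ** 2)
noisy = clean + 0.05 * np.cos(2.0 * np.pi * 200 * x / length)
smoothed = sobolev_smooth(noisy, length, l1=0.0, l2=0.05)
```

Setting l1 = l2 = 0 reduces the symbol to 1 and returns the input unchanged, consistent with the limiting case noted above.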
Data Assimilation algorithm with low-pass filter for bathymetry estimation.
1: Pick an initial estimate $\beta^{(0)}$ for the bathymetry.
2: Solve the initial value problem (3.1) from t = 0 to t = T.
3: Solve the adjoint problem backwards in time from t = T to t = 0 to find the adjoint variables.
4: Approximate the L2 gradient at every point in the spatial domain Ω.
5: Apply the low-pass filter (5.4) to obtain the smoothed H2 gradient.
6: Compute the optimal step size τn through a line minimisation algorithm.
7: Use a gradient descent step to compute the updated estimate $\beta^{(n+1)}$.
8: Repeat until the convergence criterion is satisfied for some small tolerance ε.
Before we present the results of the data assimilation scheme, we illustrate the efficacy of the filtering in Fig. 3. We compare the gradient in L2 with the gradients in H1 and H2 obtained after smoothing for Case III. The purpose of this comparison is to illustrate that requiring $\nabla_\beta \mathcal{J} \in H^2$ is more effective at reducing noise than H1 smoothing, where the H1 inner product is equivalent to (5.1) with $l_2 = 0$. In both cases, we try to choose optimal values of l1 (for H1) and l2 (for H2) to filter out the higher frequencies that contribute to noise in the bathymetry reconstruction, without also removing the information about the bathymetry carried by the lower wavenumbers. We observe in Fig. 3 that the H1 gradient filters out the noise but also reduces the amplitude of the signal peaks, whereas the H2 gradient filters the noise while remaining closer to the original signal shape. We therefore conclude that H2 Sobolev smoothing is optimal.
The results of Algorithm 2 are given in Fig. 4. The plots of Fig. 4a–c show the results of the kappa test, the reduction in the cost function, and the relative error in the reconstruction for all cases, respectively. The first thing we note is that in each case the error is lower with H2 smoothing than for the results in Fig. 2. In particular, for Cases I and II the error decreases by at least an order of magnitude. The reconstructed bathymetry shown in (d) and (e) illustrates how the noise has been greatly reduced, and for Case I is almost negligible. For Case II some noise remains on the plateau of the sandbar; however, this is nevertheless a drastic improvement compared with the unfiltered result. This improvement is reflected in the kappa test results given in Fig. 4a for Cases I and II, although the improved convergence does not scale proportionally with the error, as observed previously in Fig. 2.
However, for Case III there is no improvement in convergence of the kappa test in Fig. 4a, and while the reconstruction has less small-scale noise, the amplitude of the Gaussian in Fig. 4f is significantly smaller than that of the exact bathymetry. These results did not improve even with a more restrictive choice of l2, leading us to conclude that additional factors, such as the system parameters and the placement of the sensors, may affect the observability of the non-localised bathymetry. In Sec. 6 we analyse some of these effects.
Necessary conditions on the model parameters and the observation operator
In the previous section we showed that using smoother H2 gradients is necessary to avoid small-scale errors in the reconstructed bathymetry. We now conduct a qualitative analysis to understand the effect of the model parameters and the observation operator on the convergence of the data assimilation. The qualitative results observed here motivate the more rigorous analyses in Khan and Kevlahan (2020) and Sec. 7, using second order analysis (SOA) and GSA, respectively. All subsequent results are obtained using the smoothed H2 gradients.
Necessary conditions on parameters
For the purposes of this study, we restrict our parameter analysis to understanding the relationship between the amplitude of the initial conditions, the amplitude of the true bathymetry, and the average depth H, which we have normalised to 1. The number of observation points is Nobs = 45, positioned as in Fig. 1. As this research is focused on improving tsunami prediction, our objective is to understand how surface waves propagate over bathymetry, and these factors play an important part when considering the scales involved. Tsunamis are characterised by their long wavelengths, often reaching 100–150 km in the deep ocean, and their relatively small amplitudes of 0.1–1 m, making them often imperceptible. When approaching coastlines, the amplitude of the surface wave can reach several metres, whereas the wavelength may still be up to 2 km. As the energy flux depends on the depth-dependent wave speed $\sqrt{gh}$, the amplitude and smoothness of the bathymetry can have a large impact on tsunami propagation (Damen et al., 2005). Additionally, in deeper water, where h is larger, the effects of bathymetry are smaller. Consequently, we also investigate the effectiveness of free surface observations in reconstructing bathymetries with amplitudes much smaller than the average depth H.
We therefore give special consideration to Case I, Gaussian initial conditions with Gaussian bathymetry, as shown in Fig. 1a. We wish to highlight how convergence is affected when (i) the ratio of the initial-condition amplitude to the bathymetry amplitude is increased, and (ii) the normalised bathymetry amplitude varies. As H = 1, for the latter we may consider values of the bathymetry amplitude directly.
We investigated convergence of the kappa test and the relative error in the reconstructed bathymetry for initial-condition amplitudes classed as ‘large’, ‘small’, and ‘very small’, with correspondingly varied bathymetry amplitudes. The results of the six configurations are presented in Table 3. For each case we provide the error in the kappa test and the relative L2 error in the reconstruction. We set a tolerance for the kappa test error, above which results are classified as non-convergent. The entries highlighted in red indicate cases where the optimisation algorithm became unstable and the gradient failed to converge to zero or became unbounded.
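The kappa test referenced above is the standard verification that an adjoint-based gradient matches a finite-difference Gateaux derivative, with κ(ε) → 1 as ε → 0. A minimal sketch on a toy quadratic functional (the functional, profile, and perturbation below are illustrative, not the paper's):

```python
import numpy as np

def kappa(J, gradJ, b, db, eps):
    # Ratio of the finite-difference directional derivative to the
    # gradient-based inner product; approaches 1 if the gradient is correct.
    return (J(b + eps * db) - J(b)) / (eps * np.dot(gradJ(b), db))

# Toy quadratic functional J(b) = 0.5 ||b||^2, whose exact gradient is b
J = lambda b: 0.5 * np.dot(b, b)
gradJ = lambda b: b
b = np.linspace(0.0, 1.0, 50)             # stand-in for a bathymetry profile
db = np.sin(np.linspace(0.0, np.pi, 50))  # arbitrary perturbation direction
k = kappa(J, gradJ, b, db, 1e-6)
```

In practice the test is repeated over a range of ε; a plateau of κ near 1 validates the adjoint gradient, while deviations at small ε reflect round-off.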
We observe that there are two cases where convergence was achieved (the only cells not highlighted in red): (i) when the initial-condition amplitude is ‘very small’ and the bathymetry amplitude is ‘large’, and (ii) when the initial-condition amplitude is ‘large’ and the bathymetry amplitude is ‘very small’. We note that for accurate and stable results we require either the surface wave amplitude to be small relative to the bathymetry amplitude, or the bathymetry amplitude to be very small relative to the depth. However, when both amplitudes are very small the results are non-convergent, and convergence also fails when both are large. Additionally, we see that in one configuration the bathymetry reconstruction error is relatively low despite the kappa test error being higher than the set tolerance, suggesting that the tolerance may be relaxed.
Consequently, the only admissible cases, in which the reconstruction error converges, are (i) when the surface waves are small in amplitude compared to the amplitude of the bathymetry, or (ii) when the bathymetry amplitude is small enough relative to the average depth H.
Having explored the relationship between the amplitudes and convergence, we now consider the effect of the bathymetry amplitude when the initial-condition amplitude is fixed. We analyse convergence of Algorithm 2 for relative bathymetry amplitudes ranging from 1% to 30% of the average depth. These results are summarised in Fig. 5, where we have included results for Cases II and III for more insight. In all cases we fix the initial-condition amplitude at 0.001 and Nobs = 45. For Case I, Fig. 5 indicates that the error is highest when the bathymetry amplitude is small, but shows a steady decrease even when the amplitude reaches 30% of H. Case III shows a similar trend, although the error is two orders of magnitude larger than for Case I. For Case II, the error remains approximately constant for all bathymetry amplitudes.
As bathymetry effects on the surface wave decrease when the fluid depth h is large, it is reasonable that higher bathymetry amplitudes lead to a slight decrease in the reconstruction error, as observability of the bathymetry by the surface wave may improve. However, to assess the effect of higher bathymetry amplitudes in tsunami models, it is natural to analyse the error in the surface wave, given the reconstructed bathymetry, as the bathymetry amplitude increases. Results for this analysis are given in Sec. 6.3.
Based on the results shown in Table 3, we suggest that for situations like Case I (i.e. a localised surface wave propagating over a compact bathymetry feature of a similar size), a necessary condition for the data assimilation scheme to recover the true bathymetry is that the ratio of the surface wave amplitude to the bathymetry amplitude be sufficiently small when bathymetry amplitudes are large. We observed in Fig. 5 that increasing the bathymetry amplitude (when the initial-condition amplitude is sufficiently small) does not affect the error in the bathymetry; however, further analysis of the resulting error in the surface wave, given the reconstructed bathymetry, is needed in order to quantify the effects of bathymetry on tsunami propagation.
The observation operator (i.e. the number and location of observations of sea surface height) used in our optimisation scheme is significant, as we have direct control over the number and location of sea surface sensors in real-world applications. We are therefore interested in configuring the observation points to optimise convergence to the exact bathymetry.
We consider the effect of the number of observation points on convergence. In Fig. 6 we present the results of Algorithm 2 for increasing numbers of observation points. Each configuration is a set of equidistant points; for Cases I and II the first point lies within the support of the Gaussian initial condition. We show the convergence of both the cost function and the relative L2 error. In all three test cases, the general conclusion is that more observation points produce better convergence. For Cases I and II the error decreases by approximately two orders of magnitude as Nobs is increased from 5 to 45. However, none of the results for Case III (periodic surface wave over Gaussian bathymetry) converge, and the error remains greater than 10% for all values of Nobs.
To better understand the effects of Nobs, in Fig. 7 we present the reconstructed bathymetry for each case, with Nobs = 5 and Nobs = 45, respectively. We observe in Fig. 7a that for Case I (Gaussian surface wave over Gaussian bathymetry) even with Nobs = 5, the bathymetry is well-resolved, and increasing the number of observations successfully eliminates some small-scale noise at the base of the Gaussian.
In Fig. 7b it is clear that for Case II (Gaussian surface wave over a sandbar) Nobs = 5 is not enough to resolve the bathymetry shape, especially for the bathymetry features in the region x < 0. However, increasing Nobs to 45 leads to an accurately reconstructed bathymetry. The only significant error is a small amount of noise at the plateau of the sandbar profile near x = 0. For Case III, we note that while neither of the reconstructed bathymetries in Fig. 7c converges to the true bathymetry in Fig. 1c, convergence is qualitatively much better for Nobs = 45. However, the peak of the Gaussian is not fully resolved, and the amplitude of the reconstruction with Nobs = 45 is smaller than that of the true bathymetry.
Based on these results, we conclude that a sufficiently large number of observations Nobs is a necessary condition for optimal convergence. However, as indicated by the reconstructions in Fig. 7c, it is not a sufficient condition in all configurations. Despite this, the results in Figs. 7a and 7b demonstrate that observations of a localised surface wave are able to reconstruct both a Gaussian bathymetry and a sandbar bathymetry with relatively small error. The following question remains: how does the size of the bathymetry reconstruction error affect prediction of the free surface wave?
In the context of tsunami modelling, our priority is accurate prediction of the free surface wave given the reconstructed bathymetry. Therefore, motivated by the results observed thus far, in Sec. 6.3 we analyse the L2 error in the surface wave given the reconstructed bathymetry as Nobs is increased. Additionally, a rigorous sensitivity analysis of the surface wave error to observations is presented in Khan and Kevlahan (2020) using second-order variational techniques, and in Sec. 7 using GSA. The objective of this analysis is to explore not only the effect of the number of observations, but also the position of the observations in the spatial domain.
Sensitivity of the propagating surface wave to bathymetry reconstruction error
Our final analysis concerns the sensitivity of the surface wave to the bathymetry, in particular to errors in the bathymetry data. Given some optimal reconstruction, we wish to gauge the sensitivity of the propagating surface wave to the errors in that reconstruction. Our objective is to address the following question: how does the reconstruction error in the bathymetry affect the propagating surface wave? If our goal is to use reconstructions of the bathymetry to generate more accurate predictions of tsunami waves, then the main consideration is not the error between the optimal bathymetry reconstruction and the true bathymetry, but the resulting error in the surface wave. In other words, we only need the reconstructed bathymetry to be ‘good enough’ to produce sufficiently accurate surface waves.
Thus far we have qualitatively assessed convergence to the exact bathymetry as we varied the observation operator and the amplitudes of the initial conditions and the bathymetry. We now address the corresponding error in the surface wave, given the reconstructed bathymetry, as a function of these amplitudes and of Nobs.
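The space-time relative L2 error used in these comparisons can be evaluated discretely as follows (a minimal sketch; the array layout and function name are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def relative_l2_error(eta_true, eta_recon, dx, dt):
    """Relative L2 error in space and time between two surface wave fields,
    stored as arrays of shape (n_times, n_points) on a uniform grid."""
    diff = np.sqrt(np.sum((eta_recon - eta_true) ** 2) * dx * dt)
    norm = np.sqrt(np.sum(eta_true ** 2) * dx * dt)
    return diff / norm

eta = np.outer(np.ones(5), np.linspace(1.0, 2.0, 10))    # synthetic field
err = relative_l2_error(eta, 1.1 * eta, dx=0.1, dt=0.01)  # 10% perturbation
```

A uniform 10% amplitude perturbation yields a relative error of exactly 0.1, which makes the normalisation easy to sanity-check.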
The results for each parameter are summarised in Figs. 8, 9, and 10, respectively. The L2 error in the bathymetry is plotted alongside the resulting L2 error (in space and time) in the surface wave. Figure 8 shows the respective errors as a function of the bathymetry amplitude. In each of the three cases, we observe that the error in the surface wave is almost two orders of magnitude lower than the error in the bathymetry estimation. As the relative amplitude of the bathymetry increases, so does the surface wave error, even though the bathymetry error remains relatively constant. This is intuitive: since the effect of bathymetry decreases as the fluid depth increases, the surface height is affected more by larger bathymetry amplitudes. As such, errors in the reconstruction of larger bathymetry perturbations are more likely to be observed in the resulting surface wave. Therefore, we consider this a necessary condition for optimal convergence of the surface wave. Overall, the results suggest low sensitivity of the surface wave to bathymetry reconstruction errors as the bathymetry amplitude varies.
Figure 9 presents the L2 error in the bathymetry alongside the resulting L2 error in the surface wave, as a function of the number of observation points. We plot the resulting errors for each of the three cases, with the bathymetry amplitude fixed at 0.1. We observe that for Cases I and III, the error in the surface wave is almost two orders of magnitude smaller than the bathymetry error, and both decrease proportionally as the number of observation points increases. For Case II, the difference between the errors is smaller for small values of Nobs; however, this difference increases as the number of observation points grows, suggesting that the sensitivity of the surface wave decreases with more observation points for a sandbar profile. A reason for this may be that for Case II there are no observation points located before the support of the bathymetry, so a smaller value of Nobs may not be sufficient to produce an accurate profile, resulting in increased reconstruction error (as observed in Fig. 7). Consequently, the error in the surface wave increases as well. However, as more observation points are added in Fig. 7 the noise in the bathymetry is smoothed, and the smoother profile may explain the relatively larger decrease in the surface wave error, up to two orders of magnitude smaller than the bathymetry error.
Finally, Fig. 10 shows the bathymetry reconstruction and surface wave errors as a function of the initial-condition amplitude. We have already observed a correlation between the amplitudes of the initial conditions and the bathymetry, and we concluded in Sec. 6.1 that optimal convergence requires their ratio to be sufficiently small. Therefore, in Fig. 10c we present results for small values of the initial-condition amplitude. For each case we observe that the error in the surface wave is orders of magnitude lower than the bathymetry error for most amplitude values. Both errors are relatively high for smaller values of the initial-condition amplitude, but both decrease as the amplitude increases. This suggests that the free surface wave is more sensitive to bathymetry reconstruction error when the initial-condition amplitude is very small.
To summarise, we observed that across variations in (i) the number of observation points and (ii) model parameters such as the amplitudes of the initial conditions and the bathymetry, the error in the surface wave was orders of magnitude lower than the bathymetry reconstruction error, suggesting low sensitivity of the surface wave to bathymetry reconstruction error. The consequences of this low sensitivity are significant. It is essential to understand the sensitivity of the surface wave to the bathymetry for two distinct but related reasons. First, the assimilation described in this study is contingent on the assumption that the surface wave (and observations) are affected by the bathymetry. We have demonstrated this is true in Fig. 1. Secondly, the variation in the surface wave error in Fig. 8 shows that changes in bathymetry amplitude and shape do significantly affect surface wave propagation. However, the sensitivity of the surface wave to the bathymetry reconstruction error is low. This means that the surface wave is not sensitive to the details of the bathymetry: a roughly accurate reconstruction should be sufficient in most cases.
We conclude that, although bathymetry does significantly modify surface wave propagation, for tsunami modelling we only need the bathymetry to be accurate enough that the surface wave is modelled correctly, and that we can derive criteria for when the bathymetry is reconstructed sufficiently well. Additionally, we do not require the optimal reconstruction to be unique. These implications motivate a more rigorous analysis of surface wave sensitivity, which we explore further in Sec. 7.
Global sensitivity analysis (GSA)
Based on the qualitative results observed in Sec. 6.3, in this section we use GSA to more rigorously quantify the influence of model parameters on the error in the bathymetry reconstruction and the resulting error in the surface wave, respectively. We focus our analysis on a localised surface wave propagating over a compact bathymetry, as in Case I. Using GSA we aim to quantify, and subsequently rank, the influence of these parameters on the bathymetry and surface wave errors.
Sensitivity analysis methods, derivation, and sampling considerations
Saltelli et al. (2010) define GSA as the set of mathematical techniques used to assess the propagation of uncertainty in a numerical model. In practice, a set of synthetic indices is derived that quantifies the relative contribution of different input parameters to the output variance. These are known as ‘sensitivity’ indices. Liu and Homma (2009) propose that good sensitivity indices should exhibit the following properties:
- They are global, and consider the influence of model inputs on the entire output range.
- They are quantifiable, and thus can be computed and reproduced numerically.
- They are not conditional on any assumed input values.
- The value of the sensitivity index for some input gives an easy interpretation of the sensitivity of the model to the parameter.
- They are consistent across multiple samples and simulations.
- Ideally, they are moment-independent, and do not rely on a specific quantitative measure of the output distribution.
GSA methods can be categorised as one-at-a-time methods (OAT), where output variations are induced by varying one input at a time, or all-at-a-time methods (AAT), where all input factors are varied simultaneously. GSA methods are widely used in earth systems modelling (ESM). An example is the work conducted by the Modelling, Observations, Identification for Environmental Sciences (MOISE, 2014) project on variance-based sensitivity analysis on a marine ecosystem model of the Ligurian sea.
A concise review of the contributions of GSA to the advancement of ESM is provided in Wagener and Pianosi (2019). They highlight the surge in computational capacity for Earth and climate models, and address the resulting problem of increased interaction between model components and parameters, even when representing a relatively low number of physical processes. This issue is particularly problematic in ESM, where incomplete knowledge and the lack of relevant exact solutions make model validation very difficult. In addition, such models are forced with noisy and inaccurate observations. Typical uses of GSA include:
- Ranking the influence of the parameters on model output from highest to lowest.
- Creating a threshold criterion for sensitivity, allowing us to determine parameters of negligible influence (screening).
- Finding thresholds in the input parameter values that map into specific output regions (factor mapping).
Thus, the primary objective here is to use GSA to analyse the bathymetry detection data assimilation results presented in this paper, in order to rank, screen, and factor map the influence of its parameters.
Our objective is to measure the relative L2 reconstruction error in the bathymetry, together with the corresponding relative L2 error in the surface wave, as functions of the input parameters.
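In the notation of this paper, the two outputs may plausibly be written as relative L2 errors (a reconstruction consistent with Sec. 6.3; the superscript (b) marking the optimal reconstruction is our own labelling):

```latex
\mathrm{Err}_{\beta} \;=\; \frac{\lVert \beta^{(b)} - \beta \rVert_{2}}{\lVert \beta \rVert_{2}},
\qquad
\mathrm{Err}_{\eta} \;=\; \frac{\bigl\lVert \eta\bigl(\beta^{(b)}\bigr) - \eta(\beta) \bigr\rVert_{2}}{\lVert \eta(\beta) \rVert_{2}},
```

where β denotes the true bathymetry, β⁽ᵇ⁾ its optimal reconstruction, and η the free surface wave, with the surface wave norm taken over space and time.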
In Sec. 6.3 we observed that the surface wave error was orders of magnitude lower than the bathymetry error, suggesting low sensitivity of the surface wave to the bathymetry error as a function of the bathymetry amplitude. Based on this initial observation, we wish to derive a more rigorous way to measure the sensitivity of the results to the parameters: the amplitude of the surface wave, and the amplitude and position ψ of the bathymetry.
We focus on Case I, where we have compactly supported Gaussian initial conditions centred at zero, and a localised Gaussian bathymetry centred at ψ.
The key property of the parameter ψ is that it determines the location of the bathymetry relative to the observation points. When ψ is small, a smaller proportion of observation points observes the surface wave before it interacts with the bathymetry, and for ψ approximately less than 1, the observations do not span the entire bathymetry support. We wish to discover whether the influence of shifting the bathymetry position ψ relative to the fixed observation points is significant.
Density-based sensitivity analysis (DBSA)
Since skewness in the output distribution is common in nonlinear and environmental models (Pianosi and Wagener, 2015), we use a subset of sensitivity analysis methods that do not use moments of the model output, such as the expected value, variance, or skewness, as a measure of uncertainty. Such methods are collectively known as ‘density-based sensitivity analysis’ (DBSA). The idea is to measure sensitivity through variations in the probability density function that occur when the influence of a certain input factor is removed. This is done by computing the difference between unconditional PDFs, generated by varying all parameters, and conditional PDFs, obtained by fixing individual input parameters at a particular nominal value (Pianosi et al., 2016). The sensitivity index is then computed as a statistic based on this divergence.
For the present analysis, we follow the method outlined in Pianosi and Wagener (2015), who improve on this idea by characterising output distributions by their cumulative distribution functions (CDFs) instead. In this approach, known as the PAWN method, density-based sensitivity indices are calculated using conditional and unconditional CDFs, exploiting the fact that these are more efficiently derived than PDFs. We give a brief overview of the PAWN method, the divergence statistic used to compute the indices, and the sampling strategy used in the numerical implementation.
Let us define the unconditional CDF of our output Y (where Y can be either the bathymetry reconstruction error or the surface wave error) as F_Y, and the conditional CDF when the input parameter x_i is fixed as F_{Y|x_i}. As F_{Y|x_i} represents the case where there is no variability resulting from x_i, the distance between F_Y and F_{Y|x_i} represents the variability in the output induced by x_i. This distance is proportional to the influence of x_i; i.e. if F_{Y|x_i} = F_Y then x_i has zero influence on the variability of the output Y. In the PAWN method, this distance is computed using the Kolmogorov–Smirnov (KS) statistic.
As KS depends on the value of x_i, the PAWN sensitivity index Ti is calculated using a statistic (such as the median or maximum) over all sampled values of x_i.
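Following Pianosi and Wagener (2015), the KS statistic and the resulting PAWN index take the form:

```latex
\mathrm{KS}(x_i) \;=\; \max_{y}\,\bigl|\,F_Y(y) \;-\; F_{Y\mid x_i}(y)\,\bigr|,
\qquad
T_i \;=\; \operatorname*{stat}_{x_i}\,\mathrm{KS}(x_i),
```

where ‘stat’ denotes the chosen statistic (e.g. the median or maximum) over the sampled conditioning values of x_i.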
Pianosi and Wagener (2015) observe that the sensitivity index Ti satisfies all of the properties of a ‘good’ sensitivity index, as it is global, quantitative and model-independent. It has the advantage that it is also moment-independent. By definition it is normalised to take values between 0 and 1.
In the numerical implementation, empirical CDFs are used to compute F_Y and F_{Y|x_i} using a sample of the parameter space. PAWN has a significant advantage over the sample size requirements of variance-based sensitivity analysis (VBSA), as a smaller sample size can effectively approximate the CDFs due to their regularity properties. The unconditional CDF is approximated using Nu model evaluations obtained by sampling over the entire parameter space, while the conditional CDF is approximated using Nc model evaluations derived by sampling over the non-fixed parameters only, while keeping x_i fixed. Additionally, the conditioning value x_i used to compute Ti in (7.6) is replaced by n randomly sampled values of the fixed input x_i. Thus, for M input parameters, the total number of model evaluations is Nu + M·n·Nc. Generally these values are chosen by trial and error; Pianosi and Wagener (2015) suggest a reasonable choice for n is between 10 and 50.
For our analysis, we set Nu = 200, Nc = 150, and n = 15, giving a total of 6950 model evaluations. The computation of the density-based indices is then a quick post-processing step. To verify whether the sample size is adequate, bootstrapping can be used to analyse the robustness of the sensitivity index estimates, as in the previous section. We present the results for the DBSA sensitivity indices approximated using the PAWN method in the following section.
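A minimal numerical sketch of this sampling strategy on a toy three-parameter model (the model, bounds, and seed are illustrative; only the sample sizes Nu, Nc, n and the evaluation count follow the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def ks_distance(sample_u, sample_c):
    # Maximum vertical distance between the two empirical CDFs
    grid = np.sort(np.concatenate([sample_u, sample_c]))
    F_u = np.searchsorted(np.sort(sample_u), grid, side="right") / len(sample_u)
    F_c = np.searchsorted(np.sort(sample_c), grid, side="right") / len(sample_c)
    return float(np.max(np.abs(F_u - F_c)))

def pawn_index(model, bounds, i, Nu=200, Nc=150, n=15):
    # Unconditional sample: all inputs vary over their bounds
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    Xu = lo + (hi - lo) * rng.random((Nu, len(bounds)))
    Yu = np.array([model(x) for x in Xu])
    ks = []
    for _ in range(n):
        # Conditional sample: x_i held at a randomly chosen fixed value
        Xc = lo + (hi - lo) * rng.random((Nc, len(bounds)))
        Xc[:, i] = lo[i] + (hi[i] - lo[i]) * rng.random()
        Yc = np.array([model(x) for x in Xc])
        ks.append(ks_distance(Yu, Yc))
    return max(ks)  # 'max' statistic, as used for the initial analysis in the text

# Toy model: strongly driven by x0, completely insensitive to x2
model = lambda x: x[0] ** 2 + 0.1 * x[1]
bounds = [(0.0, 1.0)] * 3
T0 = pawn_index(model, bounds, 0)  # influential input -> large index
T2 = pawn_index(model, bounds, 2)  # dummy input -> small index
total_evals = 200 + 3 * 15 * 150   # Nu + M*n*Nc = 6950, matching the text
```

Even with these modest sample sizes, the dummy input's index stays near the sampling-noise floor while the influential input's index is clearly larger, which is the screening behaviour exploited below.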
Results of DBSA
The number of evaluations used in the analysis was adjusted to exclude sample points where the data assimilation did not converge (i.e. where the necessary ratio between the amplitudes of the initial conditions and the bathymetry did not hold). Nc was adjusted to 145, and 6725 evaluations were used to approximate Ti. For the initial analysis we chose the maximum as the statistic in (7.6).
Figure 11 presents the initial results for the sensitivity indices Ti for each input parameter. Figure 11a gives the results for the bathymetry reconstruction error, and Fig. 11b for the surface wave error. The bathymetry reconstruction error has the greatest sensitivity to the position of the bathymetry ψ. Interestingly, we can see in Fig. 11b that the amplitude of the initial conditions and the position of the bathymetry ψ have relatively low influence on the error in the surface wave compared to β, at 13% and 22%, respectively. On the other hand, the amplitude of the bathymetry is highly influential for the surface wave error.
We need to check these results for robustness and convergence before we can accept them as conclusive. Bootstrapping is done over 700 re-samples, and confidence intervals are derived; these can be seen in Fig. 12. We present the distances between the upper and lower bounds for each parameter in Table 4, and subsequently compute the margin of error for each.
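A percentile bootstrap over the conditional KS values is one plausible way to obtain such confidence intervals (a sketch under that assumption; the resampling target, sample values, and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_ci(ks_values, n_resamples=700, alpha=0.05):
    """Percentile bootstrap confidence interval for a max-statistic index,
    resampling the n conditional KS values with replacement."""
    ks_values = np.asarray(ks_values)
    stats = np.empty(n_resamples)
    for j in range(n_resamples):
        resample = rng.choice(ks_values, size=len(ks_values), replace=True)
        stats[j] = resample.max()  # recompute the index on each resample
    lower, upper = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lower, upper

# Illustrative KS values for one input parameter (not data from the paper)
lower, upper = bootstrap_ci([0.10, 0.15, 0.20, 0.30, 0.25])
```

The width of the resulting interval is the margin-of-error diagnostic reported in Table 4: narrow intervals indicate the sample size is adequate for the ranking.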
We observe a great improvement in these statistics relative to the VBSA results. The largest margin of error from the mean across all input factors and both outputs is small: if we set a tolerance level of 0.05 for the error margin, then there is only a single instance where the error margin is higher. We hypothesise that this could be further improved by taking a slightly larger sample size.
The convergence of the sensitivity indices is given in Fig. 13. Across both errors, the most influential parameters (the position ψ and the amplitude of the bathymetry) appear to converge. For the other, less influential parameters in each case, convergence is still relatively smooth. However, we see that the index values are still decreasing slightly even at the largest sample, suggesting that a larger number of samples might be needed for convergence to a fixed point. Nevertheless, given the results of the robustness analysis and the convergence observed thus far, we conclude that we may still accurately gauge the influence of the parameters on our model outputs with the current sample size, especially in order to determine the most influential factors. Consequently, we proceed to consider ranking, screening, and factor mapping of the input parameters.
The PAWN method can be used effectively to screen for non-influential input factors. Figure 14 presents an overview of the different values of KS over the 15 randomly chosen sample points x_i for each input parameter. To determine which input parameters are most influential, the two-sample KS test (Pianosi and Wagener, 2015) has been implemented, which allows us to reject the hypothesis that the conditional and unconditional CDFs are the same if the KS statistic exceeds a critical threshold.
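Assuming the usual asymptotic two-sample KS critical value c(α)·√((n+m)/(nm)) with c(0.05) = 1.36 and the sample sizes Nu = 200 and Nc = 150 given earlier, the threshold evaluates to approximately 0.147, consistent with the value quoted later in the text:

```python
import math

def ks_critical(n, m, c_alpha=1.36):
    # Asymptotic critical KS distance for rejecting equality of two
    # empirical CDFs; c_alpha = 1.36 corresponds to alpha = 0.05.
    return c_alpha * math.sqrt((n + m) / (n * m))

threshold = ks_critical(200, 150)  # ~0.147: KS values above this are influential
```

KS values below this threshold are statistically indistinguishable from sampling noise, which is the basis for screening out non-influential parameters.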
We observe that for both outputs the amplitude of the initial conditions is non-influential across all 15 choices of the fixed value. KS statistics for the bathymetry amplitude β fall below the threshold at fixed values approximately between 0.1 and 0.25 for the bathymetry reconstruction error. This corroborates results from Sec. 6.1, where we observed that the bathymetry reconstruction error was higher when the amplitude of the bathymetry was either too big or too small relative to the average depth H. In the present analysis the maximum KS statistic is used to compute Ti for each input parameter, and so β is still considered influential on the bathymetry reconstruction error. It is interesting to contrast this result with the KS statistics for β and the surface wave error, as shown in Fig. 14b. While we see a similar trend, where the KS statistic is higher if the bathymetry amplitude is at the lower or upper end of the sample interval, the resulting variation is much higher, and all KS statistics are above the influence threshold value.
In a similar manner, while the position of the bathymetry ψ has relatively low KS statistics for the surface wave error across all fixed values (approximately half of the values are above the cutoff, with no clear trend), it has much higher KS values, and subsequently higher influence, for the bathymetry reconstruction error, as observed in the third panel of Fig. 14a. Once again, the KS statistics are much higher at values closer to the endpoints of the parameter space for ψ. We suggest this indicates that the placement of the bathymetry relative to the observation points has a significant influence on the error in the bathymetry reconstruction. This may be because, as the placement of the bathymetry Gaussian varies over the interval, so do the time and position at which the bathymetry is first observed by the measurement points.
Thus, based on the threshold level at 95% confidence and using the maximum KS statistic to compute Ti, we observe that for both the bathymetry reconstruction error and the surface wave error the amplitude of the initial conditions can be considered non-influential. We note that the choice of a different statistic in (7.6), such as the mean or median, may change the value of Ti. We provide a comparison of the different indices in Table 5. Given the threshold value of 0.147, we can see that the choice of a mean or median statistic does not affect whether or not a parameter is non-influential for the bathymetry reconstruction error.
As we can see from Table 5, the overall ranking of the parameters is not affected by the choice of statistic used for Ti. The only exception is the bathymetry amplitude β, which is influential only with the max statistic. The position of the bathymetry, ψ, is the most influential parameter for the bathymetry reconstruction error, and the amplitude of the initial conditions is the least influential. Similarly, for the surface wave error, the results in Table 6 indicate that across all choices of statistic, the bathymetry amplitude is the most influential parameter and the initial-condition amplitude is non-influential.
So far we have used the results of the PAWN sensitivity analysis to rank and screen our input factors. A third analysis that can prove insightful is factor mapping: we can compute sensitivity indices based on a sub-interval of our output Y given the current sample. For example, over all model evaluations, the maximum bathymetry reconstruction error is 0.265. To determine the influence of the input parameters when the errors are relatively high, we define a parameter M such that we can compute sensitivity indices separately for all input-output samples with Y > M and with Y ≤ M. We choose M = 0.015 for the bathymetry reconstruction error and M = 0.001 for the surface wave error. The results are given in Figs. 15a and 15b.
While there is not much difference in ranking between the left (Y > M) and right (Y ≤ M) panels for either error, it is noteworthy that for the surface wave error in Fig. 15b the influence of the bathymetry amplitude decreases in the right panel (Y ≤ M). This is insightful, as it suggests that the bathymetry is more influential when the errors in the surface wave are large. However, in both intervals the influence of β is high (above 70%). Additionally, the effect of the bathymetry position ψ when the surface wave error is greater than 0.001 is 29%, and is subsequently greater than the influence threshold value. On the other hand, the influence of the initial-condition amplitude and the bathymetry position ψ drops below the threshold value when the error is smaller than M.
For the bathymetry reconstruction error, we see in Fig. 15a that the influence of the bathymetry position ψ is approximately doubled when the bathymetry reconstruction error is less than 0.015. The lower sensitivity to each of the input parameters when the bathymetry reconstruction error is higher could be suggestive of other significant influences on the model error that cannot be attributed to these parameters alone.
In summary, for the surface wave error the most influential parameter is the amplitude of the bathymetry, whereas the amplitude of the surface wave initial conditions is categorised as non-influential. For the bathymetry reconstruction error, the bathymetry position ψ is the most influential parameter, while the initial-condition amplitude was determined to be non-influential. Additionally, factor mapping showed that the input parameters are more influential on the bathymetry reconstruction error when the error is smaller than 0.015, and that ψ is only influential on the surface wave error when the error is larger than 0.001.
Conclusion and future directions
This study is a first step in understanding better the role of observations and model parameters in variational bathymetry assimilation. We are limited somewhat by the lack of an analytical solution for the shallow water system with non-zero bathymetry, but these computational results provide some key insights to guide further analyses. The 1-D geometry considered in this analysis is intended to be a foundation for extensions to more realistic 2-D cases.
We derived a variational data assimilation algorithm to reconstruct the bathymetry from a set of free surface wave observations by minimising a functional representing the least squares error between observations and forecast solutions. We observed that standard L2 gradients resulted in small-scale noise in the bathymetry reconstruction and impacted convergence. Consequently, we showed that smoother H2 gradients are necessary to avoid small-scale errors in the reconstructed bathymetry. Using the H2 gradients, we accurately reconstructed the bathymetry for test cases with Gaussian initial conditions and (i) a Gaussian bathymetry, and (ii) a sandbar profile bathymetry.
We then analysed the relationship between the normalised bathymetry and initial-condition amplitudes to understand how they influence convergence. Based on a qualitative analysis of a localised surface wave propagating over a compact bathymetry feature of a similar size, we suggest that a necessary condition for convergence of the assimilation to the true bathymetry is that the ratio of the surface wave amplitude to the bathymetry amplitude be sufficiently small when bathymetry amplitudes are large. Additionally, we observe that a relatively large number of observations Nobs is necessary for convergence. We found Nobs = 45 was the optimal number of observations for the cases we considered.
Perhaps the most significant conclusion of this study is that the surface wave has relatively low sensitivity to errors in the reconstructed bathymetry. We showed that the free surface error was orders of magnitude smaller than the bathymetry reconstruction error as a function of Nobs and of the amplitude parameters. Reconstructing the bathymetry with a relative error of about 10% is sufficiently accurate for surface wave modelling in most cases.
If this conclusion can be verified in higher dimensions and in more realistic configurations (e.g. including turbulence and the Coriolis effect), this result will enhance tsunami forecast models by quantifying exact tolerance levels for the error in the bathymetry necessary for accurate surface wave modelling. Additionally, tolerances for smaller scales in the bathymetry reconstruction may be quantified.
To further investigate the sensitivity of the reconstruction error and the resulting surface wave error to model parameters, we used a density-based sensitivity analysis (DBSA) method to empirically compute indices that measure the variation between the unconditional and conditional cumulative distribution functions of the output when particular inputs are fixed. We used the PAWN algorithm (Pianosi and Wagener, 2015) to derive sensitivity indices for the three input parameters (the bathymetry amplitude, the surface wave amplitude, and the position ψ of the bathymetry relative to the observations) and the model outputs (the bathymetry reconstruction error and the surface wave error). Our objective was to rank the influence of the inputs from highest to lowest, screen for non-influential parameters, and find thresholds in the input parameter values that map into specific output regions (factor mapping). We summarised the conclusions in Table 7.
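The PAWN idea can be sketched as follows: compare the unconditional CDF of the model output with CDFs conditioned on fixed values of one input, using the Kolmogorov-Smirnov distance as the measure of influence. The helper names, conditioning strategy, and sample sizes below are illustrative assumptions, not the exact configuration used in our analysis.

```python
import numpy as np
from scipy.stats import ks_2samp

def pawn_index(model, sampler, i, n_uncond=2000, n_cond=15, n_per_cond=500, seed=0):
    """Density-based (PAWN-style) sensitivity index for input i.

    Compares the unconditional output distribution with distributions obtained
    while input i is held fixed, via the Kolmogorov-Smirnov statistic.
    Returns the median KS distance over the conditioning values (in [0, 1]).
    """
    rng = np.random.default_rng(seed)
    y_uncond = model(sampler(n_uncond, rng))   # unconditional output sample
    ks = []
    for _ in range(n_cond):
        Xc = sampler(n_per_cond, rng)
        Xc[:, i] = sampler(1, rng)[0, i]       # fix input i at a random value
        ks.append(ks_2samp(y_uncond, model(Xc)).statistic)
    return float(np.median(ks))

# Toy model: x0 dominates the output, x1 is nearly non-influential
model = lambda X: 5.0 * X[:, 0] + 0.1 * X[:, 1]
sampler = lambda n, rng: rng.uniform(0.0, 1.0, size=(n, 2))
s0 = pawn_index(model, sampler, 0)
s1 = pawn_index(model, sampler, 1)
```

An index near 0 indicates that fixing the input barely changes the output distribution (a candidate for screening out), while an index near 1 indicates a strongly influential input.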
Our results showed that for the surface wave error (the error in the surface wave given the reconstructed bathymetry), the most influential parameter was the amplitude of the bathymetry, whereas the amplitude of the surface wave initial conditions was categorised as non-influential. This confirms the conclusions of Sec. 6.3, where we observed that the surface wave error increased as the relative bathymetry amplitude became larger. We considered specific output regions for both the bathymetry reconstruction error and the surface wave error. For the latter, we considered thresholds on the relative L2 error and found that the influence of all three parameters (the bathymetry and initial conditions amplitudes, and the bathymetry position ψ) decreases as the error becomes larger.
The bathymetry position ψ was the most influential parameter for the bathymetry reconstruction error, while the initial conditions amplitude was determined to be non-influential. The influence of ψ for small errors was approximately 85%; however, it dropped by almost half for large errors. Overall, the bathymetry reconstruction error was less sensitive to each of the input parameters when the error itself was large. This suggests that other parameters not considered in this analysis may have a significant influence for larger values of the bathymetry reconstruction error.
While the locations of the observation points were kept fixed relative to the computational domain, the position ψ of the bathymetry relative to the observation locations was varied. The parameter ψ therefore determines where the bathymetry is observed by the measurement points, as well as how many observation points observe the surface wave before it interacts with the bathymetry. We found that ψ was influential on the surface wave error for all statistics, except when the error in the surface wave was less than 0.001. Since our objective is to determine which parameters are responsible for large errors in the surface wave, we conclude that the placement of the observation points relative to the reconstructed bathymetry influences the surface wave accuracy. Further analysis of the optimal locations for the observations relative to the bathymetry is conducted using second-order adjoint sensitivity analysis in Part II (Khan and Kevlahan, 2020).
A key observation is that, for both the bathymetry reconstruction error and the surface wave error, the amplitude of the initial conditions was non-influential in each sub-region. However, this lack of sensitivity can be explained by the fact that all sample values for which the amplitude ratio condition was violated (and the assimilation was consequently non-convergent) were removed from the sample used to derive the sensitivity indices. Therefore, on the basis of these results alone, we cannot conclude that the initial conditions amplitude is non-influential in the bathymetry reconstruction and the resulting free surface wave.
The inclusion of more parameters in the sensitivity analysis, such as the final control time t = T or the resolution of the numerical approximation (both of which affect the accuracy of the reconstructed bathymetry), should be addressed in future work. Quantifying the sensitivity of the bathymetry reconstruction to interactions between inputs would also be valuable, especially given the correlation between the bathymetry and initial conditions amplitudes observed in Sec. 6. Sensitivity analyses of the accuracy of the observations themselves would provide further insight: in atmosphere and ocean models, observation measurements are usually noisy, so a measure of their sensitivity to the observation operator would be illuminating for future work (Wagener and Pianosi, 2019).
The present analysis proved insightful for an idealised model of data assimilation for the 1-D shallow water equations with Gaussian initial conditions and bathymetry. To apply these results to more realistic forecasts, the natural next step is to extend this approach to a full 2-D model and to include other forms of bathymetry, such as sandbar or ridge formations. The data assimilation algorithm for reconstructing bathymetry from surface wave observations presented here in 1-D demonstrates the potential of this approach for more realistic 2-D applications.
Our focus on how the shapes of the bathymetry and the initial conditions affect the reconstruction of the bathymetry from free surface observations is a novel contribution to the literature reviewed in Sec. 2. We hope that the sensitivity analyses and results reported here pave the way for more refined approaches to incorporating bathymetry estimates in tsunami models. Extending this approach to 2-D and verifying the main conclusions observed here would significantly improve the representation of bathymetry in tsunami models.