Objective analysis, a class of techniques for gridding observations, has been developed and refined over many decades. Historically, its main purpose has been to provide initial conditions for operational prediction models and to aid diagnostic studies in the atmospheric and oceanic sciences. Although its use for the first purpose has all but disappeared today, owing to the advent of more sophisticated schemes such as optimal interpolation (OI) and variational methods, objective analysis schemes are still widely used for diagnostic purposes, and many studies have employed various incarnations of them to investigate a diverse range of problems.
As a branch of objective analysis, the successive correction method (SCM), represented by Cressman (1959) and Barnes (1964, 1973), has received more attention than others and remains popular today. Cressman (1959) used a series of scans with decreasing radii of influence to retrieve a broad spectrum of wavelengths from the observations. Its major contribution is the practice of building details from longer to shorter waves. Barnes (1964) argued that such a scheme suffers from the disadvantage of tending to smooth out all small variations in the field. Moreover, an unstable iteration may occur in the Cressman scheme, so an additional dissipation scheme has to be applied (e.g., Seaman, 1983; Seaman and Hutchinson, 1985; Lu and Browning, 1998). To maximize the details resolved by observations, Barnes (1964) proposed a scheme similar to the Cressman method but using a different (Gaussian-type) weighting function, with the weight factor (radius of influence) fixed for all passes. This algorithm was later replaced by Barnes (1973), which employs only two passes (one initial pass and one correction pass) with a diminished smoothing factor for the second (correction) pass. The most attractive feature of the Barnes scheme is its well-known response characteristics: by choosing the smoothing parameters, one can ascertain which range of wavelengths will be retained in the final analysis. However, the response function of Barnes (1964, 1973) is derived under the assumption that the observations are continuous and unbounded (infinite). In practice, it is best applicable to reasonably uniform data distributions. If data are irregularly distributed, the phase of the response function changes and signals may be distorted in the analyzed field; the more sparsely and irregularly the data are distributed, the less the results accord with the theoretical predictions (Achtemeier, 1986; Pauley and Wu, 1990; Buzzi et al., 1991). This raises the difficulty of choosing appropriate parameters in a particular application to achieve an optimum analysis. In fact, with a sparse and irregular data distribution, no single selection of the parameters can produce the most accurate analysis for all wavelengths.
An effective objective analysis scheme should at least be able to retrieve resolvable long wavelengths in data-sparse areas and preserve details in data-dense areas. If multiple wavelengths are extracted simultaneously without an effective mechanism, the analysis can be seriously contaminated by noise arising from observational errors or from the irregular data distribution. A practical way to relieve this problem is to analyze first for larger scales and then for shorter scales: the more accurate the long wavelengths, the less impact the noise can have on the analyzed field. Therefore, when applied to irregularly spaced data, it is advisable for an analysis scheme to sequentially decrease the smoothing factors, as the Cressman method does, in order to retain the most accurate analysis of the longer wavelengths. Based upon this idea, some variational successive correction approaches, such as the multi-grid approach and the multi-scale diffusion filter approach, have been proposed, as listed in Table 1. Among them, Xie et al. (2005, 2010) proposed a multi-scale 3D-VAR implemented with the multi-grid technique, using a sequence of grids with different resolutions to correct different wavelengths. The analysis is interpolated between two consecutive grid levels and then enters a new analysis cycle. Strictly speaking, the multi-grid 3D-VAR is not a 3D-VAR in the traditional sense, because it solves a series of 3D-VAR problems on different grid levels. If the background field is neglected and only the observational term is considered, the multi-grid 3D-VAR reduces to a variational objective analysis scheme. One problem then arises: is it possible for a variational objective analysis method to handle all spatial scales of the observations in a single iterative procedure, rather than solving a certain number of separate variational problems?
In this paper, a variant of the SCM that satisfies the above requirement, called SMRF, is proposed. Its main idea is consistent with other successive correction schemes in extracting multi-scale information from observations. Unlike those schemes, it uses a variational optimization technique to minimize the difference between the estimated and the observed fields; it is in effect a combination of the SCM and a minimization algorithm. We incorporate scale information into the minimization algorithm by applying a recursive filter at each iteration to retrieve the desired wavelengths successively. As a result, apart from its advantage in multi-scale information extraction, the scheme gains extra benefits from the minimization procedure: first, the inherent convergence property is guaranteed; second, the weighting parameters can be determined automatically by a line search algorithm without manual intervention; and third, the data at all scales can be analyzed in a single procedure.
The paper is organized as follows. The background knowledge related to the topic of this paper, including the SCM and gradient-based minimization algorithms, is outlined in Section 2. In Section 3, in view of the relationship between the SCM and gradient-based minimization algorithms, the SMRF scheme is proposed by incorporating scale information into a minimization procedure. In Section 4, a single-observation experiment and an idealized sea ice concentration (SIC) assimilation experiment are performed to evaluate the new scheme. The conclusions are summarized in Section 5.
2. Necessary background
In this section, the SCM and the gradient-based minimization algorithms are briefly introduced.
2.1 Basics of SCM
The SCM is a class of empirical approaches that correct a first-guess field with a linear combination of the residual differences between the predicted and the observed values. In other words, the initial estimate is gradually modified towards the observations until the correction is no greater than a prescribed error threshold, at which point the revision process is terminated. The formula is as follows:
$$
G_{i,j}^{n+1}=G_{i,j}^{n}+C_{i,j}^{n},
$$
(1)
where $ i,j $ are the indices of the analyzed grid point, the superscript $ n $ denotes the $ n $-th iteration, $ {G}_{i,j}^{n} $ is the analyzed value at the $ n $-th iteration, $ {G}_{i,j}^{0} $ corresponds to the first-guess field, and $ {C}_{i,j}^{n} $ is the correction factor, given by
$$
{C}_{i,j}^{n}=\frac{\displaystyle\sum _{s=1}^{M}{W}_{s}^{n}{Q}_{s}}{\displaystyle\sum _{s=1}^{M}{W}_{s}^{n}},
$$
(2)
where $ M $ is the total number of observations within the circular area centered at grid point $ i,j $ with radius $ R $, $ s $ denotes the $ s $-th observation, $ {Q}_{s} $ is the difference between the predicted and the observed values, and $ {W}_{s}^{n} $ is the weight function at the $ n $-th iteration. The SCM can correct an analysis from longer to shorter wavelengths by changing the weight function during the iterations.
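As an illustration only (not the authors' code), the following Python sketch performs Cressman-type successive correction passes on a one-dimensional grid; the Cressman weight function, the decreasing radii of influence, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def cressman_pass(grid_x, analysis, obs_x, obs_val, radius):
    """One SCM pass: correct each grid point with a distance-weighted
    mean of the observation-minus-analysis residuals within `radius`."""
    # Residuals Q_s at the observation locations
    residual = obs_val - np.interp(obs_x, grid_x, analysis)
    corrected = analysis.copy()
    for i, x in enumerate(grid_x):
        d = np.abs(obs_x - x)
        mask = d < radius
        if not mask.any():
            continue                      # no observation influences this point
        w = (radius**2 - d[mask]**2) / (radius**2 + d[mask]**2)  # Cressman weight
        corrected[i] += np.sum(w * residual[mask]) / np.sum(w)   # correction C^n
    return corrected

# Successive passes with decreasing radii build detail from longer to shorter waves
rng = np.random.default_rng(0)
grid_x = np.linspace(0.0, 10.0, 101)
truth = np.sin(grid_x) + 0.3 * np.sin(5.0 * grid_x)
obs_x = np.sort(rng.uniform(0.0, 10.0, 30))
obs_val = np.interp(obs_x, grid_x, truth)

analysis = np.zeros_like(grid_x)          # first-guess field G^0
for radius in (4.0, 2.0, 1.0, 0.5):       # decreasing radius of influence
    analysis = cressman_pass(grid_x, analysis, obs_x, obs_val, radius)
```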
2.2 Basics of gradient-based minimization methods
The basic problem is to minimize a cost function as follows
$$
\min J\left(x\right),
$$
where $ x $ is the control variable, typically $ x\in {R}^{n} $, possibly subject to constraints. To approximate the solution numerically, a sequence $ {\left\{{x}_{n}\right\}}_{n=1}^{\infty } $ is constructed so that $ {x}_{n} $→$ {x^*} $, where $ J\left({x^*}\right)=\rm{min}J\left(x\right) $. Many kinds of algorithms exist for this problem; one class is known as gradient-based, in which the sequence $ {\left\{{x}_{n}\right\}}_{n=1}^{\infty } $ is constructed iteratively by choosing a search direction $ {p}_{n} $ at each iteration and minimizing $ J\left(x\right) $ along that direction. This essentially reduces the problem to a sequence of one-dimensional problems, and $ {x}_{n} $ is given by the basic recurrence:
$$
{x}_{n+1}\!=\!{x}_{n}\!+\!{l}_{n}{p}_{n},
$$
(3)
where $ {l}_{n} $ is the step length and $ {p}_{n} $ is usually constructed from gradient information; $ {p}_{n} $ is called a descent direction if $\nabla {J}_{n}\cdot {p}_{n} < 0$, where $ \nabla J_n $ is the gradient of $ J $ with respect to $ x $ at $ {x}_{n} $. Based on conjugate gradient optimization theory, $ {p}_{n} $ is formulated as the product of $ \nabla J_n $ and a positive definite matrix $ {E}_{n} $, namely $ {p}_{n}=-{E}_{n}\nabla {J}_{n} $. If $ {E}_{n} $ is simplified to the identity matrix, Eq. (3) becomes the well-known steepest descent algorithm. Once the descent direction $ {p}_{n} $ is selected, the step length $ {l}_{n} $ can be determined by a line search algorithm (Moré and Thuente, 1994) to ensure a sufficient decrease of the cost function along this direction.
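As a minimal sketch of the recurrence in Eq. (3), the Python code below takes $ {E}_{n} $ as the identity (steepest descent) and uses a simple Armijo backtracking line search in place of the Moré–Thuente algorithm; the quadratic test function is an illustrative assumption.

```python
import numpy as np

def minimize_gradient_descent(J, gradJ, x0, max_iter=200, tol=1e-8):
    """x_{n+1} = x_n + l_n p_n with p_n = -grad J(x_n) (steepest descent).
    The step length l_n is shrunk until the Armijo sufficient-decrease
    condition holds along the descent direction."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = gradJ(x)
        if np.linalg.norm(g) < tol:
            break
        p = -g                                  # descent direction: grad J . p < 0
        l, c = 1.0, 1e-4
        while J(x + l * p) > J(x) + c * l * (g @ p) and l > 1e-16:
            l *= 0.5                            # backtrack until sufficient decrease
        x = x + l * p
    return x

# Illustrative quadratic cost: J(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
J = lambda x: 0.5 * x @ A @ x - b @ x
gradJ = lambda x: A @ x - b
x_star = minimize_gradient_descent(J, gradJ, np.zeros(2))   # approx. solution of A x = b
```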
3. The SMRF scheme
To retrieve the multi-scale information resolved by observations, a variant of the SCM using a variational technique, called SMRF, is developed. It is a combination of the SCM and a minimization algorithm.
3.1 Similarities between the SCM and the minimization algorithm
Actually, the recursion formulated by Eq. (3) is also a successive correction procedure. Consider the following problem that minimizes the difference between the estimated and the observed values:
$$
\min_{x} J\left(x\right)=\frac{1}{2}{\left({x}^{{\rm{o}}}-{{H}}x\right)}^{{\rm{T}}}{{{R}}}^{-1}\left({x}^{{\rm{o}}}-{{H}}x\right),
$$
(4)
where $ x $ is the analyzed field, $ {x}^{{\rm{o}}} $ is the observed field, $ {{H}} $ is an interpolation operator from analysis space to observation space, $ {{R}} $ is the observational error covariance matrix, (·)T indicates transpose, and (·)–1 indicates inversion. The gradient of $ J\left(x\right) $ is
$$
\nabla J\left(x\right)=-{{{H}}}^{{\rm{T}}}{{{R}}}^{-1}\left({x}^{{\rm{o}}}-{{H}}x\right).
$$
(5)
Apparently, $ \nabla J\left(x\right) $ represents the residual difference between the observed value $ {x}^{{\rm{o}}} $ and the estimated value $ x $ mapped back onto the analysis grid. In Eq. (3), if we choose $ {p}_{n}=-{{{E}}}_{n}\nabla J\left({x}_{n}\right) $ ($ {{{E}}}_{n} $ is a positive definite matrix) as the descent direction, Eq. (3) becomes
$$
{x}_{n+1}={x}_{n}+{w}_{n}\left({x}^{{\rm{o}}}-{{H}}{x}_{n}\right),
$$
(6)
where $ {w}_{n}={l}_{n}{{{E}}}_{n}{{{H}}}^{{\rm{T}}}{{{R}}}^{-1} $ is the weight for the $ n{\rm {-}}\rm{th} $ iteration. Equation (6) has the same form as the successive correction procedure except that the weights $ {w}_{n} $ are different and are obtained in different ways.
3.2 Problems with minimization algorithms
Once the gradient is obtained according to Eq. (5), the problem in Eq. (4) can be solved with a minimization algorithm such as the steepest descent, the LBFGS, or the conjugate gradient method. However, this problem is usually ill-posed owing to the scarcity and irregular distribution of the observations. Furthermore, without an effective mechanism for transmitting observational signals, the analysis loses its coherent long-wave features in data-void areas. From a minimization viewpoint, we show in this subsection that the underlying cause lies in the “flawed” gradient $ \nabla J\left(x\right) $ arising from the irregular data distribution.
For simplicity, $ {{R}} $ is assumed to be the identity matrix; Eq. (5) then becomes
$$
\nabla J\left(x\right)=-{{{H}}}^{{\rm{T}}}\left({x}^{{\rm{o}}}-{{H}}x\right).
$$
(7)
Given $ n $ analyzed grid points and $ m $ observational locations, $ x $ is a vector of length $ n $ and $ {x}^{{\rm{o}}} $ is a vector of length $ m $. We also assume that the observations are located exactly at analyzed grid points and that $ m< n $. In such a case, the analyzed grid points can always be indexed in an order following the observational locations, so that $ {{H}} $ has the form
$$
{{H}}=\left(\begin{array}{cc}{{{I}}}_{m}& {{{0}}}_{m\times \left(n-m\right)}\end{array}\right),
$$
(8)
where $ {{{I}}}_{m} $ is the $ m\times m $ identity matrix.
Note that the last $ n{\rm{-}}m $ columns of $ {{H}} $ are all zero vectors. Accordingly, the last $ n{\rm{-}}m $ elements of $ {{{H}}}^{{\rm{T}}}\left({x}^{{\rm{o}}}-{{{H}}x}\right) $ are zero. As a result, for a grid point where no measurement is available, the corresponding element of $ {{{H}}}^{{\rm{T}}}\left({x}^{{\rm{o}}}-{{{H}}x}\right) $ is identically zero, whereas for the observed grid points the corresponding elements retain their actual values. That is, the distribution of $ {{{H}}}^{{\rm{T}}}\left({x}^{{\rm{o}}}-{{{H}}x}\right) $, and thus of $ \nabla J\left(x\right) $, is spatially incoherent. Although from a mathematical viewpoint $ \nabla J\left(x\right) $ obtained in this way is unquestionable, the incoherence is unreasonable in a physical sense because it is caused merely by the irregular data distribution.
If this “flawed” gradient is fed into a general gradient-based minimization algorithm, it is not surprising that the analysis deviates far from what we anticipate. Taking the steepest descent algorithm as an example, the estimate is updated at the $ i{\rm{-}}\rm{th} $ iteration by
$$
{x}_{i+1}={x}_{i}-{l}_{i}\nabla J\left({x}_{i}\right).
$$
(9)
As indicated above, $ \nabla J\left({x}_{0}\right) $ is spatially incoherent in data-void areas, and therefore $ {x}_{1} $ will also contain spurious small scales in these regions. The same issue runs through all later iterations, leading to a loss of long waves in data-void areas. The same problem exists, for a similar reason, in other gradient-based minimization algorithms such as the LBFGS and the conjugate gradient method.
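A small numerical sketch of this effect is given below (illustrative grid size and observation locations): with $ {{H}} $ selecting a subset of grid points, the gradient $ {{{H}}}^{{\rm{T}}}\left({x}^{{\rm{o}}}-{{{H}}x}\right) $ is identically zero at unobserved points, so a plain steepest-descent step changes only the observed points and builds no long wave in the data-void areas.

```python
import numpy as np

n, m = 20, 5                              # analysis grid points, observations
obs_idx = np.array([2, 3, 10, 11, 12])    # irregular observation locations

# H selects the observed grid points; all its other columns are zero
H = np.zeros((m, n))
H[np.arange(m), obs_idx] = 1.0

x_obs = np.ones(m)                        # observed values
x = np.zeros(n)                           # first-guess analysis

grad = -H.T @ (x_obs - H @ x)             # Eq. (7) with R = I
print(np.nonzero(grad)[0])                # non-zero only at the observed indices

# One steepest-descent step: only the observed points change, so the updated
# field is spatially incoherent and data-void areas remain at the first guess
x1 = x - 1.0 * grad
```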
3.3 Variational form of SCM
Recognizing this deficiency of conventional minimization algorithms in solving an ill-posed problem, and recalling the resemblance between the SCM and a minimization algorithm, we borrow the desirable multi-scale analysis feature of the SCM and incorporate it into a minimization algorithm. Specifically, we apply a recursive filter to the gradient of the cost function at each iteration of the minimization procedure. With the filter parameter decreasing sequentially with iterations, the various scales, from longer to shorter wavelengths, can be extracted successively (see Appendix for the recursion details).
We now give a brief analysis of the fundamentals of this scheme. The gradient $ \nabla J\left(x\right) $ described by Eq. (5) represents the observational residuals at $ x $. The scheme starts by applying a recursive filter E to $ -\nabla J\left({x}_{0}\right) $ with a sufficiently large $ \alpha $; the resulting $ {{E}}\left(-\nabla J\left({x}_{0}\right)\right) $ then reasonably characterizes the “longest” wavelengths of the observational residuals at $ {x}_{0} $. Moreover, since the recursive filtering operator E is positive definite, $ {{E}}\left(-\nabla J\left({x}_{0}\right)\right) $ is guaranteed to be a descent direction, which ensures a decrease of the residual difference between the estimated and the observed values along this direction. However, as depicted in the first part of the Appendix, the filtering in $ {{E}}\left(-\nabla J\left({x}_{0}\right)\right) $ leads to some amplitude loss at every wavelength. A reasonable analysis over data-sparse areas requires the long waves to be captured as accurately as possible so that they do not interfere with the extraction of shorter wavelengths in later iterations. Therefore, to regain some of the lost amplitude, a line search is performed along this direction to find an appropriate step length $ l $. When the estimate is updated by $ {x}_{1}={x}_{0}+ l{{E}}\left(-\nabla J\left({x}_{0}\right)\right) $, the “largest” scale of the observational residuals at $ x={x}_{0} $ is “fully” extracted and incorporated into the new estimate $ {x}_{1} $. Then $ \alpha $ is diminished appropriately so that the “largest” scale of the observational residuals at $ x={x}_{1} $ can be captured at the second iteration and incorporated into $ {x}_{2} $. As the iterations proceed, all scales, from longer to shorter wavelengths, are pulled out successively.
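The sketch below illustrates, in Python, a cascade of first-order recursive filters (a forward sweep followed by a backward sweep) of the kind referred to above; the boundary handling, the number of passes, and the single-spike demonstration are illustrative assumptions, and show how a larger $ \alpha $ spreads a localized residual over longer distances.

```python
import numpy as np

def recursive_filter_1d(field, alpha, n_pass=8):
    """Cascade of first-order recursive filters (forward + backward sweeps).
    A larger alpha gives broader smoothing, i.e. longer retained wavelengths."""
    f = np.asarray(field, dtype=float).copy()
    for _ in range(n_pass):
        for i in range(1, f.size):                   # forward sweep
            f[i] = alpha * f[i - 1] + (1.0 - alpha) * f[i]
        for i in range(f.size - 2, -1, -1):          # backward sweep
            f[i] = alpha * f[i + 1] + (1.0 - alpha) * f[i]
    return f

# A single spike (e.g. one non-zero element of -grad J) is spread much further
# with alpha = 0.9 than with alpha = 0.3; the peak amplitude is also reduced,
# which is the amplitude loss later compensated by the line search.
spike = np.zeros(101)
spike[50] = 1.0
broad = recursive_filter_1d(spike, alpha=0.9)
narrow = recursive_filter_1d(spike, alpha=0.3)
```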
Actually, this scheme is a natural extension of the Barnes SCM scheme (Barnes, 1964, 1973). But it is in a variational form with the advantage that the weights can be automatically obtained by a line search algorithm. This scheme can also be regarded as a minimization algorithm which gains an advantage over conventional minimization algorithms by accounting for various spatial scales resolved by the observations.
To further suppress observational noise, we make a slight modification to the scheme by replacing the problem described by Eq. (4) with
$$
\min_{w} J\left(w\right)=\frac{1}{2}{\left({x}^{{\rm{o}}}-{{H}}{{B}}w\right)}^{{\rm{T}}}{{{R}}}^{-1}\left({x}^{{\rm{o}}}-{{H}}{{B}}w\right), \qquad x={{B}}w,
$$
(10)
where $ {{B}} $ is another recursive filtering operator with a very small filter parameter $ \,\beta $. Obviously, the problem in Eq. (4) is the special case of Eq. (10) with $ \, \beta = 0 $. For the same reason as explained above, solving Eq. (10) directly with a conventional minimization algorithm (e.g., the steepest descent, the LBFGS, or the conjugate gradient method) may not yield a well-behaved analysis, as verified by our experiments in Section 4. The modified algorithm is shown as a flow chart in Fig. 1.
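Putting the pieces together, the following Python sketch is a simplified one-dimensional stand-in for the procedure of Fig. 1 with $ \, \beta = 0 $ (so the operator B is omitted): at each iteration the negative gradient is smoothed with a recursive filter whose parameter follows Eq. (11), a backtracking line search (standing in for Moré–Thuente) chooses the step length, and the estimate is updated. The observation operator, the synthetic observations, and the parameter values are illustrative assumptions.

```python
import numpy as np

def rfilter(v, alpha, n_pass=8):
    """First-order recursive filter (forward + backward sweeps), as sketched above."""
    v = np.asarray(v, dtype=float).copy()
    for _ in range(n_pass):
        for i in range(1, v.size):
            v[i] = alpha * v[i - 1] + (1.0 - alpha) * v[i]
        for i in range(v.size - 2, -1, -1):
            v[i] = alpha * v[i + 1] + (1.0 - alpha) * v[i]
    return v

def interp_matrix(grid_x, obs_x):
    """Linear-interpolation observation operator H as an explicit matrix."""
    H = np.zeros((obs_x.size, grid_x.size))
    for s, xo in enumerate(obs_x):
        j = int(np.clip(np.searchsorted(grid_x, xo) - 1, 0, grid_x.size - 2))
        w = (xo - grid_x[j]) / (grid_x[j + 1] - grid_x[j])
        H[s, j], H[s, j + 1] = 1.0 - w, w
    return H

def smrf_analysis(grid_x, obs_x, obs_val, alpha_max=0.999, N=250, n_iter=250):
    """SMRF-style iteration (beta = 0): filter -grad J with a recursive filter
    whose alpha shrinks per Eq. (11), then line-search along the filtered direction."""
    H = interp_matrix(grid_x, obs_x)
    J = lambda z: 0.5 * np.sum((obs_val - H @ z) ** 2)
    x = np.zeros_like(grid_x)                          # first-guess field
    sigma = N / 4.0
    for i in range(n_iter):
        alpha = alpha_max * np.exp(-(i ** 2) / (2.0 * sigma ** 2))   # Eq. (11)
        grad = -H.T @ (obs_val - H @ x)                              # Eq. (7), R = I
        p = rfilter(-grad, alpha)       # E(-grad J): filtered descent direction
        l = 1.0                         # crude backtracking instead of More-Thuente
        while J(x + l * p) >= J(x) and l > 1e-12:
            l *= 0.5
        x = x + l * p
    return x

# Illustrative usage with synthetic one-dimensional observations
rng = np.random.default_rng(1)
grid_x = np.linspace(0.0, 10.0, 201)
obs_x = np.sort(rng.uniform(0.5, 9.5, 25))
obs_val = np.sin(obs_x) + 0.3 * np.sin(4.0 * obs_x)
analysis = smrf_analysis(grid_x, obs_x, obs_val)
```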
It should be noted that the cost function defined by Eq. (10) is the counterpart of the observational term used in a 3D-VAR. Therefore, the analysis of the gradient given here is also applicable to a 3D-VAR scheme; see Appendix for details.
4. Experiment designs and results
4.1 Single-observation experiment
An effective mechanism for transmitting observational information should be able to (1) ensure the accuracy of the analysis, and (2) propagate observational signals to wider areas so that the analysis of long waves can be dramatically improved in data-sparse or data-void regions. To test the ability of the SMRF scheme, two experiments are carried out using a single observation. In the first experiment, the recursive filtering operator B with an invariant filter parameter $\, \beta $ is applied to w, and Eq. (10) is solved directly with a conventional LBFGS minimization algorithm (Liu and Nocedal, 1989). In the second experiment, in addition to applying B to w, another recursive filtering operator E with a variable filter parameter is applied to the cost function gradient $ -\nabla J\left(w\right) $, and Eq. (10) is solved with the SMRF scheme, as shown in Fig. 1.
4.1.1 Data and parameters
The analysis domain covers a square region extending 10° both in latitude and in longitude. The grid resolution is 0.25°×0.25°. Only one observation, with a value of 1.0, is placed at the center of the domain. The number of filtering passes $ M $ is set to 8. The filter parameter $ \alpha $ in our scheme follows the Gaussian function:
$$
\alpha \!=\!{\alpha }_{\rm{max}}\!\cdot\! {{\rm{e}}}^{-\frac{{i}^{2}}{2{\sigma }^{2}}},\qquad i\!=\!0,1,\cdots, N,
$$
(11)
where $ i $ represents the iteration number, $ N $ is a constant to be set, $ \sigma =\dfrac{N}{4} $, and $ {\alpha }_{\rm{max}}= 0\rm{.999} $. At the beginning, $ \alpha ={\alpha }_{\rm{max}} $ $\left(i= 0\right)$; $ \alpha $ then decreases with the iterations and almost reaches zero (precisely, $ {\alpha }_{\rm{max}}{{\rm{e}}}^{-8} $) when $ i= N $, so the number of iterations can be chosen to be no greater than $ N $ in practical implementations. In this experiment we set $ N= $250. The observation operator $ {{H}} $ is a simple bilinear interpolation. The initial-guess field $ {w}_{0} $ is set to zero. The line search algorithm follows Moré and Thuente (1994).
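For reference, the schedule of Eq. (11) can be tabulated directly; the short Python snippet below (with the values used in this experiment) confirms that $ \alpha $ decays from $ {\alpha }_{\rm{max}} $ to roughly $ {\alpha }_{\rm{max}}{{\rm{e}}}^{-8} $ at $ i=N $.

```python
import numpy as np

# Filter-parameter schedule of Eq. (11) with the settings of this experiment
N, alpha_max = 250, 0.999
sigma = N / 4.0
i = np.arange(N + 1)
alpha = alpha_max * np.exp(-i**2 / (2.0 * sigma**2))
print(alpha[0], alpha[N])   # 0.999 and approx. 0.999*exp(-8) ~= 3.4e-4
```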
4.1.2 Results
Figure 2 shows the results of solving Eq. (10) with the LBFGS algorithm for filter parameters $\, \beta $=0.1 and 0.4. Since a recursive filter makes grid points connected and interacting with each other, even a single observation can transmit its signal to neighboring grid points. Different choices of $ \,\beta $ yield different analyses. If $\, \beta $ is relatively large, the observational signal can propagate over a wider area, but the analysis loses accuracy. If $\, \beta $ is small (e.g., $\, \beta $=0.1), the analysis approaches the observation closely but remains almost unchanged in data-void regions because the observational signal cannot propagate there. Thus, maximizing the details requires a small $\, \beta $, whereas filling in the data-void areas with long waves needs a large $\, \beta $; there seems to be no way to satisfy both requirements simultaneously. However, this can to some extent be remedied by our scheme.
Figure 2. The spread of observational information using the LBFGS algorithm when β=0.1 (a) and β=0.4 (b).
Figure 3 shows the results of the SMRF scheme at different iterations using the small filter parameter $\, \beta $(=0.1). Figures 3a, b, c and d correspond to iterations 50, 100, 130, and 180, respectively. Apparently, the analysis is corrected from large scales to details. As a result, observational signals can propagate to wider regions while the analysis does not lose accuracy. For a further understanding of how the analysis evolves with iterations, the surface plots of Figs 2a and 3 are presented as Figs 4 and 5, respectively. Figure 4 shows that with the LBFGS algorithm and a small $ \,\beta $(=0.1), the accuracy is guaranteed near the observed location, but the observation affects only a very small area around it. Figure 5 shows that in the SMRF scheme, with the same small $\, \beta $(=0.1), the analysis starts from a coarse field and approaches the observed value gradually, and the observational signal is transmitted over much wider areas than in Fig. 4.
Figure 3. The spread of the observational information in the SMRF scheme when β=0.1. a, b, c and d. The results at iterations 50, 100, 130 and 180, respectively.
4.2 Idealized SIC assimilation experiment
To further verify the effectiveness of the SMRF scheme in extracting spatial multi-scale information, a two-dimensional experiment with SSMI SIC observations is carried out. To reveal how different wavelengths are corrected sequentially, the analyzed field and the descent direction at different iterations of the SMRF scheme are examined and compared with their counterparts obtained with the steepest descent algorithm.
4.2.1 Data and parameters
The SSMI daily SIC data are obtained from the National Snow and Ice Data Center (NSIDC) at a horizontal resolution of 25 km × 25 km. The analysis domain covers the Arctic Ocean. The “true” SIC field, shown in Fig. 6a, is constructed from the SSMI SIC observations on September 1, 2014. Since the spatial resolution of the analysis field usually differs from that of the satellite observations, we select one observation for every four analysis grid points. We also remove some points located in the marginal ice zone to examine the performance of the SMRF scheme. As a consequence, 1 384 observations (Fig. 6b) remain for reconstructing the “true” field. The observation errors are assumed to be uncorrelated, so a diagonal matrix is used with all diagonal elements equal to the square of the observation standard deviation, $ {\sigma }_{{\rm{o}}}^{2} $; $ {\sigma }_{{\rm{o}}} $ is normalized to 1.0 for simplicity. $ \,\beta $ is chosen to be 0.2 and $ N $ is set to 500. The other settings are the same as in the single-observation experiment above.
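The observation layout and error matrix described above can be set up as follows; the Python snippet uses a synthetic stand-in for the NSIDC SSMI field (the real data are not reproduced here), and the grid size and random field are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
ny, nx = 120, 120
truth = np.clip(rng.normal(0.7, 0.2, (ny, nx)), 0.0, 1.0)   # synthetic SIC "truth"

# One observation for every four analysis grid points: keep every second
# point in each direction (one per 2 x 2 block of grid points)
obs_mask = np.zeros((ny, nx), dtype=bool)
obs_mask[::2, ::2] = True
obs_val = truth[obs_mask]

# Uncorrelated observation errors: diagonal R with sigma_o normalised to 1.0
sigma_o = 1.0
R_diag = np.full(obs_val.size, sigma_o**2)
```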
Figure 6. The true SIC field of the Arctic Ocean constructed based on the SSMI SIC on September 1, 2014 (a); and the locations of “observations” (b).
As can be seen from Fig. 7, the analysis results produced by the steepest descent algorithm depend strongly on $ \,\beta $. A small (large) $ \beta $, corresponding to a small (large) radius of influence, reflects only the short-wave (long-wave) information of the observations, indicating that the long-wave and short-wave information cannot be resolved simultaneously. Figure 8 shows the results of the steepest descent algorithm with $ \, \beta = $0.2 at iterations 3, 5 and 7. The descent direction is spatially incoherent in the data-void regions because of the sharp variations of the gradient caused by the irregular distribution of the observations (Figs 8b, d and f). Accordingly, the analyses updated along this direction tend toward incoherent structures (Figs 8a, c and e). The same problem exists for other gradient-based minimization algorithms such as the quasi-Newton methods, the LBFGS and the conjugate gradient method.
Figure 7. Analyzed field solved by using the steepest descent algorithm with β=0.2 (a), β=0.4 (b), β=0.6 (c) and β=0.8 (d), in which the iteration number is 25, 162, 165 and 148, respectively.
Figure 8. Analyzed field (left column) and the descent direction ($ -\nabla J $) (right column) solved using the steepest descent algorithm ($ \beta $=0.2) at iterations 3, 5 and 7, respectively.
The analysis from the SMRF scheme with $ \beta = $0.2 is very similar to the true field (Fig. 9) and, in contrast to the steepest descent algorithm, avoids incoherent spatial structures in the data-void areas. The SMRF scheme can therefore better account for the various spatial scales resolved by the observations, and the long-wave and short-wave information can both be extracted from the observations (Figs 10a, c and e). This is attributed to the fact that the descent direction is built by smoothing out the sharp variations of the gradient so as to extract the long waves of the observational residuals. As the filtering scale $ \alpha $ decreases with iterations, the descent direction is obtained from longer to shorter wavelengths (Figs 10b, d and f). Consequently, the corresponding scales are incorporated into the analyzed field successively.
Figure 9. The true SIC (a) and the analysis result (b) from the SMRF scheme with β=0.2 and N=500.
Ideally, $ \alpha $ should decrease continuously with iterations, but discrete values are needed in practical implementations. As $ \alpha $ takes the form of Eq. (11) in our scheme, the choice of $ N $ is an issue to be considered. Figure 11 gives the analysis results for different values of $ N $. As can be seen, a desirable analysis is achieved for $ N>\rm{15}0 $ in this experiment. Our other experiments also show that the choice of $ N $ is not a problem in practice, because reasonable results can usually be achieved as long as $ N $ is large enough. However, an excessively large value is unnecessary and not recommended because of the computational cost.
Figure 11. Analysis of the SMRF scheme (β=0.2) with different choices of N. a, b, c and d. N=10, 20, 150 and 300, respectively.
5. Conclusions
In this study, a multi-scale variational optimization technique is designed to extract the spatial multi-scale information resolved by observations. In view of the similarity in form between the SCM schemes and gradient-based algorithms, the new approach incorporates scale information into the minimization algorithm. To propagate observational signals, it applies recursive filters to the gradient of the cost function and lets the filtering scale decrease with iterations so that the various scales are extracted in turn. Based on the SMRF scheme, the SIC analysis field can be successfully reconstructed by extracting the information of the real SSMI observations from long to short waves in turn.
The main conclusions can be summarized as follows.
(1) This scheme is a variant of the conventional SCM that can better account for the resolvable multi-scale information in the observations. It is in fact a natural extension of the Barnes scheme, but in a variational form, which brings several extra benefits. First, the specification of the scheme parameters is relatively easy because the weights are determined automatically by a line search algorithm. Second, convergence is implied by the minimization procedure, and the “distance” between the estimate and the observed values diminishes with iterations. Third, all wavelengths are analyzed in a single iterative procedure.
(2) From a physical viewpoint, the spatial distribution of the gradient of the cost function defined in a variational problem may be unreasonable, for example when data are irregularly distributed. Using this gradient in a conventional minimization algorithm (e.g., the steepest descent, the LBFGS, or the conjugate gradient method) results in a poor analysis. Our scheme is a remedy for this issue. Although inter-comparisons between the conventional minimization algorithms and the SMRF scheme are performed in our experiments, such comparisons are not our real purpose, because it is unfair to those algorithms when solving an ill-conditioned problem. Rather, we simply intend to demonstrate the feasibility and effectiveness of combining the SCM with a minimization procedure in data-sparse cases.
(3) Since the cost function defined by Eq. (10) is the counterpart of the observational term used in a 3D-VAR, the problem mentioned in (2) also exists in a 3D-VAR scheme if the background error covariance matrix is not appropriately modeled, which shows the potential of the SMRF scheme to be extended to a 3D-VAR, as detailed in the Appendix.
(4) SMRF aims to capture the longer wavelengths as accurately as possible before analyzing for the shorter wavelengths. While this can to some extent reduce the chance of the long waves being contaminated by noise, the contamination cannot be avoided completely. How much noise contamination is included in an analysis and how much the signals are distorted remain to be studied, especially quantitatively. Additionally, compared with the multi-grid method (Xie et al., 2010), the computational cost is a drawback of SMRF, and further improvements are needed.
(5) A high-order recursive filter can effectively avoid problems such as the large truncation error and the difficult boundary treatment associated with the cascade of first-order recursive filters used in this study; a high-order recursive filter algorithm will be incorporated into the SMRF in the future. Besides, the degree to which the SMRF can improve sea ice forecast accuracy and climate prediction skill will be further investigated.
A1. Response functions of the recursive filter with different values of $ \alpha $.