ABSTRACT
Precipitation nowcasting plays a crucial role in disaster prevention and mitigation. Existing forecasting models often underutilize their output data, leading to suboptimal forecasting performance. To tackle this issue, we introduce the I-ConvGRU model, a novel radar echo time-series prediction model that combines the temporal dynamics modeling of ConvGRU with the spatial feature enhancement capabilities of RainNet. The model takes 10 sequential time-series images as input to forecast future frames, and employs skip connections to further strengthen its spatial feature representation. Evaluation on the radar echo data set from the Hong Kong Observatory spanning 2009 to 2015 demonstrates the I-ConvGRU model's superiority: compared with the next best performing TrajGRU model, it reduces MSE and MAE by 17 (3.8%) and 49 (3.2%), respectively, and B-MSE and B-MAE by 52 (5.8%) and 144 (3.8%). Notably, it significantly improves the prediction of severe precipitation events, with the CSI and HSS metrics increasing by 0.0251 (9.6%) and 0.0277 (6.8%), respectively. These results affirm the model's enhanced effectiveness in radar echo forecasting, particularly for heavy rainfall events.
HIGHLIGHTS
Combining recurrent neural networks with an iterative network, the proposed model improves the warning rate of radar echo extrapolation.
The model has strong early warning ability for heavy rainfall.
INTRODUCTION
Severe convective weather is characterized by its limited spatial extent, brief duration, high unpredictability, and significant destructive potential (Wang et al. 2022; Chowdhury et al. 2023; Xiang et al. 2023). It often triggers catastrophic conditions, including thunderstorms, hail, strong winds, and heavy rainfall (Zhang & Melhauser 2012). It represents a significant natural hazard in China (Zhang & Melhauser 2012) and remains a key challenge in current meteorological forecasting operations (Yan et al. 2022). While conventional numerical meteorological models, enhanced by high-performance computing and data assimilation techniques, have markedly improved weather forecasting accuracy (Bauer et al. 2015), the computational demands of predicting precipitation dynamics at high spatiotemporal resolution are generally not well suited to real-time nowcasting operations, which require frequent updates (every 5–10 min). Furthermore, because severe convective weather is inherently nonlinear, traditional dynamic framework-based prediction methods struggle to forecast it accurately.
Forecasting severe convective weather is typically categorized into short-term forecasts and nowcasts. A short-term forecast covers 0–12 h, focusing primarily on the potential for extreme weather events within this timeframe. Conversely, nowcasting, which spans a shorter lead time of up to 2 h (Shukla et al. 2014), emphasizes immediate responses to imminent weather changes. Weather radar serves as a pivotal tool in nowcasting, offering an unparalleled spatial and temporal resolution for tracking precipitation systems and forecasting medium to small-scale severe convective events (Wang et al. 2023). Currently, radar echo extrapolation is the principal technique in nowcasting. This method extrapolates the future location and intensity of radar echoes based on current weather radar observations to predict severe convective weather (Rinehart & Garvey 1978; Chen et al. 2022). Many studies have confirmed that heuristic extrapolation of precipitation dynamics, as observed by weather radar, consistently surpasses the accuracy of numerical forecast models for short lead times (Lin et al. 2005; Yin et al. 2021). With the ongoing expansion and operation of weather radar networks, advancements in radar echo extrapolation algorithms leveraging radar observation data are crucial for minimizing the impact of meteorological disasters (Shi et al. 2018).
Current research in echo extrapolation predominantly utilizes the single-body centroid, cross-correlation, and optical flow methods. The single-body centroid method, rooted in three-dimensional thunderstorm tracking technology (Sun et al. 2014), identifies and analyzes single-body features within the echo to predict the echo's future position. This approach has proven effective for tracking cells with strong echoes and compact volumes, and various algorithms based on it have seen widespread application (Dixon & Wiener 1993; Johnson et al. 1998; Dai et al. 2022). The cross-correlation method, or tracking radar echoes by correlation (TREC) (Zou et al. 2019), leverages optimal spatial correlations of echoes to trace precipitation system movements. It accounts for echo size, movement direction, and deformations occurring during transit, achieving notable success in nowcasting operations (Li et al. 1995, 2013). However, each method has its constraints: the single-body centroid method is best suited for severe convective thunderstorm cells and less effective for general convective weather forecasting. The cross-correlation method excels in slowly changing stratiform cloud precipitation systems but falters with rapidly changing precipitation echoes, as rapid weather changes introduce significant errors into the motion vector field calculations, diminishing forecast accuracy (Zhu & Dai 2022). The optical flow method, extensively applied in practice, originates from computer vision. A notable variant is the variational optical flow method ROVER (Woo & Wong 2017), which calculates the optical flow field of consecutive radar images under a stationarity assumption, employing the technique of Brox et al. (2004) and implementing a semi-Lagrangian advection scheme for precipitation forecasting (Tishchenko et al. 2019). Although the optical flow method has an inherent physical change mechanism, the separation between flow calculation and extrapolation complicates the determination of model parameters, presenting challenges in its application.
In recent years, the advent of large-scale parallel computing and the widespread adoption of graphics processing unit (GPU) devices have significantly enhanced computing capabilities. Following the establishment of a new generation of weather radar networks in China (Min et al. 2019), there has been an exponential increase in the volume of data available for training models. Consequently, deep learning has increasingly been applied to radar echo extrapolation. With its robust feature recognition capabilities, deep learning can discern complex patterns and extract meaningful insights from radar echo data, leading to more accurate extrapolation predictions. Additionally, deep learning models excel in modeling complex nonlinear relationships inherent in radar echo data, enhancing forecasts' precision and reliability (Zhang et al. 2022). The most notable deep learning models recently deployed in this field fall into three main categories: recurrent neural networks (RNNs), convolutional neural networks (CNNs), and generative adversarial networks (GANs).
In the realm of radar echo extrapolation, RNNs have seen significant application, with long short-term memory networks (LSTMs) and gated recurrent units (GRUs) being particularly prominent. Shi et al. (2015) introduced the ConvLSTM model, which integrates a convolution layer and a sampling layer to extract spatial characteristics of sequences and accurately capture the spatiotemporal dynamics of radar echo patterns. However, due to its complexity, Shi et al. (2017) later proposed more streamlined architectures, namely the Trajectory Gated Recurrent Unit (TrajGRU) and Convolution Gated Recurrent Unit (ConvGRU). The TrajGRU model, which accounts for motion trajectories, achieves superior results compared with ConvLSTM, prompting extensive research and development on the ConvGRU and ConvLSTM networks by meteorologists.
Yuan et al. (2018) proposed the adaptive ALO-LSTM model by combining an ALO module with an LSTM network, improving the model's ability to adapt to the data. However, it does not address the fact that a model's ability to capture long-range temporal dependencies still diminishes as the sequence grows. To address this problem, Jing et al. (2020) proposed a hierarchical prediction recurrent neural network (HPRNN) to counteract the accumulation of prediction error over time. MotionGRU units (Wu et al. 2021) were proposed for GRU networks to capture more complex spatiotemporal motions. However, these structures are complex compared with the ConvGRU model, and fusing them with iterative networks consumes considerably more computational resources.
Additionally, CNNs have been explored for their ability to extract image features efficiently and associate them with label images (Sharif Razavian et al. 2014). A standout model in this category is U-Net (Agrawal et al. 2019), which employs a fully convolutional structure with upsampling and downsampling layers, gaining acclaim for its spatial feature extraction prowess and serving as a foundational network for fully convolutional models. Nie et al. (2021) proposed SmaAt-UNet on the basis of the U-Net model, introducing a channel attention module that enhances the model's feature extraction ability. Owing to the simplicity of the U-Net structure, meteorologists have also used it for multi-source data fusion prediction. The RainNet model, introduced by Ayzel et al. (2020), builds on U-Net by incorporating a recursive scheme and demonstrates exceptional performance in precipitation nowcasting. Pan et al. (2021) proposed the FURENet model with U-Net as the backbone network, which accommodates multivariate inputs: the polarimetric radar parameters specific differential phase (KDP) and differential reflectivity (ZDR) are fed into the model to improve the accuracy of precipitation nowcasting.
GANs have also been merged with RNN and CNN frameworks to enhance early warning accuracy. Tian et al. (2020) developed the GA-ConvGRU model, incorporating a GAN into the ConvGRU model and outperforming both the ConvGRU and optical flow methods. Wang et al. (2021) proposed an advanced model, ExtGAN, based on a conditional GAN, which surpasses the optical flow method and 3D-CNN models in effectiveness. Notably, the deep generative model of radar (DGMR) by Ravuri et al. (2021) stands out as a seminal contribution, significantly improving the resolution of predicted radar images and consistently outperforming both the U-Net and PySTEPS (Pulkkinen et al. 2019) models in most forecasting scenarios. Moreover, because traditional methods rely heavily on weighted MSE and MAE loss functions, which blur radar images, many meteorologists use GANs to improve image clarity. Niu et al. (2024) introduced a conditional generative adversarial network (CGAN) on top of an encoder-predictor network, greatly improving the clarity of the model's predictions.
In summary, while the RainNet neural network exhibits certain limitations, conventional models like Rainymotion, based on optical flow (Ayzel et al. 2019), struggle to deliver accurate predictions under heavy precipitation conditions. However, the recursive method utilized within the RainNet model shows promise. In light of this, our study integrates the RainNet network with the ConvGRU network to develop a novel neural network model, the I-ConvGRU (iterative convolutional gated recurrent unit) model. This integration leverages the recursive method to shorten the prediction horizon and selects the ConvGRU model for its simplicity to enhance temporal feature extraction within the I-ConvGRU model. This work offers several advantages over current mainstream methodologies. First, unlike the traditional encoder-predictor, the I-ConvGRU model has a simpler structure whose complexity does not grow with the number of output frames. Second, the ConvGRU model is chosen as the basic structure: its spatiotemporal feature extraction ability is markedly stronger than that of a purely convolutional network, while its relatively simple structure saves computing resources. Third, feeding the output back into the network introduces additional temporal and spatial features and improves the network's ability to forecast heavy precipitation (a minimal sketch of this recursive loop is given below). Finally, skip connections are introduced during sampling to supply multi-scale information, helping the model extract features.
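To make the recursive strategy concrete, the following is a minimal sketch, assuming a model that maps a window of 10 radar frames to the next frame; the function name and tensor shapes are illustrative placeholders, not the authors' exact implementation.

```python
# Minimal sketch of the recursive (iterative) forecasting loop described
# above. `model` is assumed to map a (B, 10, 1, H, W) window of radar frames
# to the next frame; names and shapes are illustrative assumptions.
import torch

@torch.no_grad()
def iterative_forecast(model, frames, n_steps=10):
    """Predict n_steps future frames from a (B, 10, 1, H, W) input window."""
    window = frames.clone()
    preds = []
    for _ in range(n_steps):
        next_frame = model(window)                       # (B, 1, H, W)
        preds.append(next_frame)
        # Drop the oldest frame and append the prediction, so later steps
        # reuse the model's own output as input.
        window = torch.cat([window[:, 1:], next_frame.unsqueeze(1)], dim=1)
    return torch.stack(preds, dim=1)                     # (B, n_steps, 1, H, W)
```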
The second section of this study details the physical model, experimental data, and evaluation methods employed in deep learning. The third section presents the model's testing results and discussions. The concluding section summarizes the findings of this paper.
MODELS, DATA AND METHODS
Model introduction
The I-ConvGRU model merges the strengths of traditional encoder-predictor networks with a novel iterative approach. The encoder achieves spatial reduction via downsampling layers, with convolutions extracting spatial image features. The decoder then uses deconvolution in its upsampling layers to enlarge the feature maps, progressively restoring resolution. The model comprises 3 downsampling layers, 3 ConvGRU networks, 3 LeakyReLU activation functions, 3 convolution operations, and 3 upsampling layers. A crucial addition is a 1 × 1 convolution layer after tensor concatenation, aimed at channel compression and feature fusion. The final output stage employs multi-layer convolution to extract spatial features from the input in a data-driven manner. The model applies the LeakyReLU activation function before linking each downsampling layer with the ConvGRU network; its small slope on the negative side prevents vanishing gradients, boosting training stability and effectiveness and enhancing the model's nonlinear fitting capacity (Jiang & Cheng 2019).
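For reference, a standard ConvGRU cell, the building block named above, can be sketched as follows; the kernel size and channel counts are illustrative assumptions and may differ from the configuration used in I-ConvGRU.

```python
# A standard ConvGRU cell in PyTorch. The gate formulation follows the
# common ConvGRU definition; kernel size and channel counts are assumptions.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # Update (z) and reset (r) gates, computed jointly by one convolution.
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, kernel_size, padding=pad)
        # Candidate hidden state.
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, kernel_size, padding=pad)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde  # blend old state with candidate
```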
Test data
The radar echo images are converted from reflectivity via

pixel = ⌊255 × (dBZ + 10)/70 + 0.5⌋,

where pixel denotes the pixel value and dBZ denotes the radar reflectivity. This formula shows that the larger the pixel value, the greater the reflectivity, the stronger the radar echo intensity, and the heavier the rainfall. The images in the data set have a resolution of 480 × 480 pixels, covering a 512 × 512 km area centered on Hong Kong. To speed up model computation, we downscale the images to a resolution of 256 × 256 pixels.
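A small sketch of this conversion and the resize step, assuming the linear pixel-reflectivity mapping of the HKO-7 benchmark; the helper names are ours.

```python
# Pixel <-> reflectivity conversion and downscaling, a sketch assuming the
# HKO-7 benchmark's linear mapping (dBZ = pixel * 70/255 - 10).
import numpy as np
import cv2

def pixel_to_dbz(pixel):
    return pixel.astype(np.float32) * 70.0 / 255.0 - 10.0

def dbz_to_pixel(dbz):
    return np.clip(np.floor(255.0 * (dbz + 10.0) / 70.0 + 0.5), 0, 255).astype(np.uint8)

frame = np.random.randint(0, 256, (480, 480), dtype=np.uint8)   # stand-in image
frame_small = cv2.resize(frame, (256, 256), interpolation=cv2.INTER_AREA)
```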
We divided the HKO-7 data set into three parts: training, validation, and test sets. Data from 2009 to 2014 are split into training and validation sets, with 812 days for training and 50 days for validation. The test set comprises 131 days of data from 2015. During training, we set the batch size to 2 and the Adam learning rate to 1 × 10⁻⁴, use the ReduceLROnPlateau learning-rate schedule, and train for 100,000 iterations, saving the model every 5,000 iterations. The saved checkpoints are compared on the validation set to select the best model, which is then tested. All models are implemented in PyTorch and run on an NVIDIA Tesla P40 graphics card with 24 GB of video memory.
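A sketch of this training configuration is given below; the stand-in model, dummy data, and the scheduler's factor/patience values are assumptions, since the paper only names the strategy.

```python
# Illustrative training setup matching the listed hyperparameters. The
# Conv2d stand-in replaces the real I-ConvGRU model; ReduceLROnPlateau's
# factor and patience are assumptions (the paper names only the strategy).
import torch
import torch.nn as nn

model = nn.Conv2d(1, 1, 3, padding=1)          # stand-in for I-ConvGRU
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=5
)

for it in range(1, 100_001):                   # 100,000 iterations
    x = torch.rand(2, 1, 256, 256)             # batch size 2 (dummy data)
    y = torch.rand(2, 1, 256, 256)
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if it % 5_000 == 0:                        # checkpoint every 5,000 iters
        scheduler.step(loss.item())            # would use validation loss here
        torch.save(model.state_dict(), f"ckpt_{it}.pth")
```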
Evaluation methods
The balanced metrics are defined as

B-MSE = (1/N) Σₙ Σᵢ,ⱼ wₙ,ᵢ,ⱼ (xₙ,ᵢ,ⱼ − x̂ₙ,ᵢ,ⱼ)²,  B-MAE = (1/N) Σₙ Σᵢ,ⱼ wₙ,ᵢ,ⱼ |xₙ,ᵢ,ⱼ − x̂ₙ,ᵢ,ⱼ|,

where wₙ,ᵢ,ⱼ is the weight of the nth image at position (i, j) of the radar echo map, and x and x̂ are the observed and predicted values, respectively. Different positions are assigned different weights: the stronger the echo intensity, the greater the weight, which balances the scarcity of heavy precipitation samples. This increases the model's attention to rainfall prediction and improves its forecasts of heavy rainfall.
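A sketch of the balanced metrics follows, assuming the rain-rate weight bins of the HKO-7 benchmark (Shi et al. 2017); those bins are that benchmark's convention, not necessarily identical to this paper's.

```python
# Balanced MSE/MAE, a sketch using the HKO-7 benchmark's weight bins
# (w = 1, 2, 5, 10, 30 for rain rates below 2, 2-5, 5-10, 10-30, >=30 mm/h).
import numpy as np

def balancing_weights(rain_rate):
    w = np.ones_like(rain_rate, dtype=np.float64)
    for thresh, weight in [(2, 2), (5, 5), (10, 10), (30, 30)]:
        w[rain_rate >= thresh] = weight
    return w

def b_mse(pred, truth, rain_rate):
    return np.mean(balancing_weights(rain_rate) * (pred - truth) ** 2)

def b_mae(pred, truth, rain_rate):
    return np.mean(balancing_weights(rain_rate) * np.abs(pred - truth))
```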
We treat ground observations as true values; a pixel scores 1 when the radar echo exceeds the threshold set for the data set and 0 otherwise. The skill scores are defined as

CSI = TP/(TP + FN + FP),  FAR = FP/(TP + FP),  HSS = 2(TP × TN − FN × FP)/[(TP + FN)(FN + TN) + (TP + FP)(FP + TN)],

where true positive (TP) denotes (prediction = 1, truth = 1), false negative (FN) denotes (prediction = 0, truth = 1), false positive (FP) denotes (prediction = 1, truth = 0), and true negative (TN) denotes (prediction = 0, truth = 0). Higher CSI and HSS values and lower FAR values indicate better prediction.
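These standard contingency-table scores can be computed directly; a minimal sketch for one threshold follows.

```python
# CSI, FAR, and HSS for a single threshold, computed from the contingency
# table defined above (a minimal sketch; no zero-division guard is included).
import numpy as np

def skill_scores(pred, obs, threshold):
    p, o = pred >= threshold, obs >= threshold
    tp = np.sum(p & o)
    fn = np.sum(~p & o)
    fp = np.sum(p & ~o)
    tn = np.sum(~p & ~o)
    csi = tp / (tp + fn + fp)
    far = fp / (tp + fp)
    hss = 2 * (tp * tn - fn * fp) / ((tp + fn) * (fn + tn) + (tp + fp) * (fp + tn))
    return csi, far, hss
```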
The image-quality metrics are defined as

SSIM(x, y) = (2μₓμᵧ + C₁)(2σₓᵧ + C₂) / [(μₓ² + μᵧ² + C₁)(σₓ² + σᵧ² + C₂)],  PSNR = 10 log₁₀(MAX²/MSE),

where x and y denote the two images; μₓ and μᵧ are their luminance means; σₓ² and σᵧ² are their luminance variances; σₓᵧ is their luminance covariance; and C₁ and C₂ are stability parameters. In PSNR, MAX denotes the maximum possible pixel value and MSE the mean squared error between the two images.
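PSNR follows directly from its definition, and SSIM is available in scikit-image rather than needing to be re-derived; a short sketch:

```python
# PSNR from its definition; SSIM via scikit-image's implementation.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def psnr(pred, truth, max_val=255.0):
    mse = np.mean((pred.astype(np.float64) - truth.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in images
b = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(psnr(a, b), ssim(a, b, data_range=255))
```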
RESULTS AND DISCUSSION
Data analysis reveals that the I-ConvGRU model outperforms the other models in MSE, MAE, B-MSE, and B-MAE for predictions over the 256 × 256 pixel area, indicating smaller discrepancies between predicted images and actual observations. Specifically, compared with the TrajGRU model, which has the next best performance, the I-ConvGRU model's MSE and MAE are reduced by 3.8 and 3.2%, respectively, and its B-MSE and B-MAE by 5.8 and 3.8%, confirming the superior performance of I-ConvGRU on the test set.
To analyze each model's prediction quality on the test set more intuitively, Table 1 gives the mean PSNR and SSIM of the images predicted by the different models. Rainymotion has the worst structural similarity, followed by the TrajGRU model. The I-ConvGRU model's SSIM value of 0.7956 is the closest to 1. Analyzing the models' PSNR, the I-ConvGRU model's value of 21.77 is the highest of all models, indicating that the I-ConvGRU model has the strongest predictive ability.
Table 1. Mean SSIM and PSNR of the models on the test set

| Models | SSIM | PSNR |
|---|---|---|
| TrajGRU | 0.7874 | 21.607 |
| RainNet | 0.7899 | 21.432 |
| Rainymotion | 0.7481 | 21.076 |
| I-ConvGRU | 0.7956 | 21.77 |
For a more detailed assessment of each model's early warning capabilities, we examined the CSI, FAR, and HSS across different thresholds (Tables 2–4). CSI and HSS values decrease as the threshold increases. However, the I-ConvGRU model consistently outperforms the comparative models on these metrics, with greater CSI and HSS values. Notably, at a precipitation threshold of ≥30 mm/h, the I-ConvGRU model's CSI and HSS metrics increase by 9.6 and 6.8%, respectively, compared with the TrajGRU model. Although its FAR values are less favorable than those of the Rainymotion model at intermediate precipitation thresholds (R ≥ 2, R ≥ 5, and R ≥ 10 mm/h), this is likely because the optical flow method's pixel-level motion analysis captures motion trajectories more accurately and reduces false alarms. Nonetheless, the I-ConvGRU model shows superior performance across all metrics at R ≥ 0.5 mm/h and R ≥ 30 mm/h, which is particularly notable during heavy rainfall events.
Table 2. CSI of each model at different rainfall thresholds (mm/h)

| Algorithm | R ≥ 0.5 | R ≥ 2 | R ≥ 5 | R ≥ 10 | R ≥ 30 |
|---|---|---|---|---|---|
| TrajGRU | 0.6296 | 0.5672 | 0.4732 | 0.3661 | 0.2611 |
| RainNet | 0.6201 | 0.5595 | 0.4635 | 0.3609 | 0.2338 |
| Rainymotion | 0.5748 | 0.5142 | 0.4197 | 0.3068 | 0.1804 |
| I-ConvGRU | 0.6401 | 0.5794 | 0.4871 | 0.3916 | 0.2862 |
Table 3. FAR of each model at different rainfall thresholds (mm/h)

| Algorithm | R ≥ 0.5 | R ≥ 2 | R ≥ 5 | R ≥ 10 | R ≥ 30 |
|---|---|---|---|---|---|
| TrajGRU | 0.2514 | 0.3548 | 0.4609 | 0.5541 | 0.6081 |
| RainNet | 0.2549 | 0.3601 | 0.4741 | 0.5662 | 0.6213 |
| Rainymotion | 0.2554 | 0.3153 | 0.4089 | 0.5280 | 0.6806 |
| I-ConvGRU | 0.2437 | 0.3489 | 0.4557 | 0.5424 | 0.5986 |
Table 4. HSS of each model at different rainfall thresholds (mm/h)

| Algorithm | R ≥ 0.5 | R ≥ 2 | R ≥ 5 | R ≥ 10 | R ≥ 30 |
|---|---|---|---|---|---|
| TrajGRU | 0.7524 | 0.7067 | 0.6274 | 0.5229 | 0.4054 |
| RainNet | 0.7436 | 0.6986 | 0.6159 | 0.5152 | 0.3663 |
| Rainymotion | 0.7049 | 0.6582 | 0.5718 | 0.4498 | 0.2871 |
| I-ConvGRU | 0.7602 | 0.7160 | 0.6390 | 0.5484 | 0.4331 |
Overall, the I-ConvGRU model performs best on MSE, MAE, B-MSE, and B-MAE on the test set, and its SSIM and PSNR values are also the largest, proving that its predicted images are the most similar to the observations. At the same time, the model performs best among the compared models at thresholds above 30 mm/h. This is because the traditional methods do not reuse the output information, so their prediction performance for small regions of heavy rainfall falls short of the I-ConvGRU approach; incorporating the output information also provides the model with additional features. Hence its evaluation metrics are the best on the test set.
In our comparative evaluation, we utilized sequences of 10 consecutive radar echoes to forecast the radar echo sequence for the next 10 time steps, effectively predicting 1 h of future data based on the previous hour's historical data. This approach aims to provide a clear and intuitive understanding of the prediction performance across different models. To accurately assess the models' capability to capture the dynamics of severe convective weather, we focused on areas characterized by strong echoes, a common indicator of such weather. Our analysis included cases that span the entire lifecycle of convective processes – namely, the genesis, development, and dissipation stages. This comprehensive approach allows us to evaluate the models' effectiveness in predicting severe convective weather events, highlighting their strengths and limitations in various stages of convective activity.
Case 1
The numbers 1–5 denote the echo images at successive times. Figure 6 shows that as lead time increases, each model's predicted images degrade and lose more detail, with clear differences from the ground observations in echo size and echo boundaries. These problems inevitably appear in the results predicted by the ConvLSTM, CSAConvLSTM (Xiong et al. 2021), and SmaAt-UNet (Nie et al. 2021) models. At a lead time of 60 min, the I-ConvGRU model performs better. In the strong echo area (black box), although the optical flow method's image is relatively clear, it overestimates the intensity and extent of rainfall, while the convolution-based RainNet model underestimates them. The strong echo position predicted by the recurrent-network-based TrajGRU model deviates substantially from the true position. Only the I-ConvGRU model is essentially consistent with the observed echo intensity and echo area.
Case 2
CONCLUSIONS
This study innovatively merges the temporal dynamics of the ConvGRU model with the loop iteration strengths of the RainNet model to forge the I-ConvGRU model. This hybrid approach marries the temporal feature extraction capabilities of the ConvGRU network with the iterative optimization advantages of RainNet, enabling a more granular optimization of model parameters through the iterative input of 10 sequential time-series images. The model iteratively forecasts future images, which are cycled back into the network after removing the earliest input image, refining the parameters with each iteration. Traditional encoder-predictor networks rely primarily on recurrent convolutions and fall short in capturing spatial characteristics. To address this, the new model introduces skip connections after each sampling step to enrich its spatial feature representation.
Comprehensive analysis yields the following conclusions:
(1) Visualization of two case studies reveals that the I-ConvGRU model excels in predicting strong echo signals, closely matching the intensity and boundaries of ground-observed echoes. While Rainymotion produces clearer images, its echo boundaries and size predictions deviate significantly from observations, because the optical flow method essentially predicts the movement of pixels and does not change pixel values. Similarly, the TrajGRU and RainNet models struggle to predict echo size accurately; neither model reuses the output signal, so their echo size features are insufficient.
(2) Comparative testing indicates that the I-ConvGRU model achieves 3.8 and 3.2% reductions in MSE and MAE, respectively, versus the TrajGRU model, along with 5.8 and 3.8% improvements in B-MSE and B-MAE. Moreover, the critical meteorological metrics CSI and HSS improve by 9.6 and 6.8% for precipitation rates ≥30 mm/h compared with the TrajGRU model, showcasing superior early warning performance, especially in heavy rainfall scenarios. This is because the traditional encoder-predictor network makes less use of the output signal and cannot extract enough strong echo features.
However, although the I-ConvGRU model improves the accuracy of precipitation nowcasting, like other deep learning models it loses too much detail in the later stages of the forecast and cannot maintain resolution as the optical flow method does, leading to a significant decrease in resolution. This is because the widespread use of MSE or MAE loss functions smooths the prediction results, blurring the extrapolated radar images. Its forecasting ability for light rain is also weak, a weakness shared by the RainNet model, and the best warning rate cannot be guaranteed at every threshold. These drawbacks are likewise unavoidable in HPRNN (Jing et al. 2020) and ConvLSTM (Shi et al. 2015). Finally, the uneven distribution of the HKO-7 data set may affect the early warning results to some extent. In future experiments, to increase resolution we will consider introducing GANs and new loss functions to improve light-rain forecasting, and consider using heavy-precipitation data sets to improve the model's prediction performance. We will also consider adding multiple input variables, such as wind speed, temperature, and humidity: wind speed relates more directly to the spatiotemporal evolution of echoes, while storm and rainfall formation are inextricably linked to temperature and humidity. Finally, physical constraint equations will be considered to optimize the warning rate in combination with deep learning: physical conditions help the model better understand the physical meaning of the data, while deep learning learns the data's nonlinear relationships, so combining the two can better capture the patterns and features in the data and improve prediction accuracy.
ACKNOWLEDGEMENTS
We thank the reviewers for their constructive comments and editorial suggestions that significantly improved the quality of this paper.
FUNDING
This work was sponsored by the National Natural Science Foundation of China (U2342216), Sichuan Provincial Central Leading Local Science and Technology Development Special Project (2023ZYD0147), the Project of the Sichuan Department of Science and Technology (2023NSFSC0244, 2023NSFSC0245), the Open Grants of China Meteorological Administration Radar Meteorology Key Laboratory (2023LRM-A01), and the National Key R&D Program of China (2023YFC3007501).
DATA AVAILABILITY STATEMENT
All relevant data are available from an online repository or repositories. The HKO-7 dataset used in this study is from the Hong Kong Observatory at https://github.com/sxjscience/HKO-7/tree/master/hko_data (accessed on 20 November 2021).
CONFLICT OF INTEREST
The authors declare there is no conflict.