
Short-term forecasting of typhoon rainfall with a deep-learning-based disaster monitoring model

Published online by Cambridge University Press:  20 July 2023

Doyi Kim
Affiliation:
Earth Intelligence, SI Analytics, Daejeon, Republic of Korea
Yeji Choi*
Affiliation:
Earth Intelligence, SI Analytics, Daejeon, Republic of Korea
Minseok Seo
Affiliation:
Earth Intelligence, SI Analytics, Daejeon, Republic of Korea
Seungheon Shin
Affiliation:
Earth Intelligence, SI Analytics, Daejeon, Republic of Korea
Hyun-Jin Jeong
Affiliation:
Department of Astronomy and Space Science, College of Applied Science, Kyung Hee University, Gyeonggi-do, Republic of Korea
*
Corresponding author: Yeji Choi; Email: yejichoi@si-analytics.ai

Abstract

Accurate and reliable disaster forecasting is vital for saving lives and property. Effective disaster management is therefore necessary to reduce the impact of natural disasters and to accelerate recovery and reconstruction. Typhoons are among the major heavy-rainfall disasters in Korea. Because a typhoon develops over the far ocean, satellite observations are the only means of monitoring it. Our study uses satellite observations to propose a deep-learning-based disaster monitoring model for short-term typhoon rainfall forecasting. We combine two deep learning models: a video frame prediction model, the Warp and Refine Network (WR-Net), which predicts future satellite observations, and an image-to-image translation model, the geostationary rainfall product (GeorAIn, based on the Pix2PixCC model), which generates rainfall maps from the predicted satellite images. Typhoon Hinnamnor, the most damaging typhoon to affect Korea in 2022, is selected as the target case for model verification. The results show that the predicted satellite images capture the structure and patterns of the typhoon. The rainfall maps generated by the GeorAIn model from the predicted satellite images show correlation coefficients of 0.81 for 3-hr and 0.56 for 7-hr predictions. The proposed disaster monitoring model has practical implications for disaster alerting systems and can be extended to flood-monitoring systems.

Type
Application Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press

Impact Statement

The development of a deep-learning-based disaster monitoring model using satellite observations has the potential to significantly improve disaster response efforts by providing real-time, accurate, and comprehensive information about affected areas on a global scale.

1. Introduction

Monitoring heavy rainfall is crucial for assessing the risk of potential disasters and enables authorities to alert communities for their safety. Accurate and timely detection of heavy rainfall under the current weather system has been effective in reducing damage from related disasters such as floods and storms (Alfieri et al., 2012; Cools et al., 2016). Ground-based radars are highly effective at detecting heavy rainfall by directly measuring the signals returned from surrounding objects. Because the radar beams strike raindrops in the atmosphere, these systems can measure rainfall intensity according to particle size. Therefore, most meteorological administrations operate radar systems to detect precipitation. However, radar systems are limited in detecting precipitation over larger areas and in making broader spatiotemporal predictions. Although they can predict rainfall pattern changes (e.g., amount, timing, and location) over the next 0–2 hr by simple extrapolation, their accuracy decreases over longer periods, and they cannot predict the development and dissipation of precipitation systems.

To alleviate these limitations in nowcasting, recent studies have applied deep-learning (DL) approaches to satellite images (Shi et al., 2017; Ravuri et al., 2021; Espeholt et al., 2022; Seo et al., 2022). Instead of extrapolating continuous snapshots, DL-based models learn the pattern changes of rain events from past datasets and then predict future patterns. Ravuri et al. (2021) showed that a deep generative model improves the quality of 0–2 hr precipitation nowcasting by taking only radar images as input and predicting precipitation probability. Other studies used recurrent neural network (RNN)-based models to predict future precipitation probabilities from satellite images. These state-of-the-art models have substantially improved short-term weather forecasts in terms of cost and accuracy. However, they still rely on ground-based radars as input or target data for training, and their stochastic outputs can identify intensity categories (e.g., high or low) to some extent but cannot provide an accurate rainfall rate. Therefore, further improvements are required for these prediction tasks.

Countries operating satellite systems can complement coverage of their entire territories with spaceborne observations. However, developing countries may face financial challenges in implementing such systems. Even countries with radar systems may suffer from inadequate coverage or a lack of sustained operation, limiting their effectiveness. Likewise, observations over the oceans remain necessary. In such situations, insufficient observational data hinder the use of DL-based models.

We propose a disaster monitoring model that combines our DL-based models to predict heavy rainfall from satellite images, which can help authorities monitor and respond to potential disasters. Our model can predict rain rates via proxy radar reflectivity for 6 hr without requiring a ground radar. In Sections 2 and 3, we describe the data and models used. In Section 4, we present the predictions of our trained model for Typhoon Hinnamnor, which struck South Korea in 2022, and we discuss the findings in Section 5. Additionally, we conduct tests using satellite images generated by the Warp and Refine Network (WR-Net), which allows rain patterns to be predicted over a longer horizon than existing nowcasting approaches. Overall, our study aims to overcome the spatial and temporal limitations of radar-based forecasting and to contribute to improving disaster response efforts.

2. Data

2.1. GEO-KOMPSAT-2A

GEO-KOMPSAT-2A (GK2A) is the second-generation meteorological geostationary (GEO) satellite of the Korean Meteorological Administration (KMA), launched in 2018 to capture meteorological phenomena. GK2A carries the Advanced Meteorological Imager (AMI) sensor, which provides 16 visible (VIS) and infrared (IR) channels with high spatial resolutions of 0.5 to 2 km. In this study, we trained our model using the VIS, water vapor (WV), and IR channels. As an example, Figure 1 shows the slightly different cloud features seen in each of the three channels.

Figure 1. Model input satellite channels: (a) the 0.64 $ \mu $m visible channel, (b) the 6.03 $ \mu $m water vapor channel, and (c) the 10.5 $ \mu $m infrared channel. In all three channels, bright areas indicate clouds or high-moisture regions. Each channel shows different characteristics associated with cloud states.

The VIS products, which measure sunlight reflected from the earth's surface and clouds, are available only during the daytime. We selected 0200–0600 UTC (1100–1500 KST) images of the 0.64 $ \mu $m VIS channel, considering the data quality over the South Korean region. As shown in Figure 1a, darker areas in the image represent land and water surfaces, whereas clouds typically appear as bright pixels. The high spatial resolution of the VIS channel (0.5 km) enables us to distinguish small clouds and cloud shapes with high accuracy.

Next, the WV products show the amount of water vapor in the atmosphere. GK2A provides WV products at three different altitudes, and we used the high-level WV channel with a wavelength of 6.03 $ \mu $m in this study. As with the VIS channel, high-moisture regions appear as brighter pixels in Figure 1b. This channel is used to monitor severe weather potential, such as the presence of turbulence, or to estimate wind direction.

Finally, the 10.5 $ \mu $m IR channel is known as a "clean" window channel because it is less sensitive to atmospheric gas absorption. This channel measures the thermal radiation emitted from the earth or clouds, and the resulting brightness temperature can be used to estimate cloud-top height and particle properties. Because IR channels rely on thermal radiation, they provide images day and night and can therefore be used to identify convective severe weather events at any time.

2.2. KMA weather radar

We use weather radar data from the KMA as the prediction target. Weather radar is the standard equipment for monitoring severe weather events. Previous studies (Seed, 2003; Pulkkinen et al., 2020) widely used the constant altitude plan position indicator (CAPPI) product for precipitation nowcasting and flood prediction. However, CAPPI observations tend to underestimate gauge rainfall, as reported by Yoon et al. (2014). More recently, several studies (Bellon et al., 2010; Yoon et al., 2014) have shown that the composite column maximum (CMAX) product provides more accurate flood forecasting. CMAX represents the maximum reflectivity in a vertical column and is used to detect severe thunderstorms. Therefore, we select CMAX radar data to represent rainfall in this study.

2.3. Data preprocessing

We performed data preprocessing to align the input images to the same spatial resolution. Because the VIS channel has a higher resolution than the IR and WV channels, we used bilinear interpolation to reduce its resolution from 0.5 to 2 km, matching the other channels. We normalized all data to the [−1, 1] range for model training. For training, we used 10-min-interval data from October 2019 to July 2021 and evaluated the model's performance on Typhoon Hinnamnor, a category-5 super typhoon that significantly impacted South Korea and Japan in the first week of September 2022. From the typhoon's passage, we selected the case closest to South Korea, on September 9, 2022, between 0100 and 0700 UTC. The model generated rain-rate predictions for the next 6 hr without tracking the typhoon's movement, and the results are compared with weather radar and other precipitation products (Figure 2).
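For illustration, this preprocessing can be sketched in a few lines of Python. This is our own minimal reconstruction, not the authors' pipeline; in particular, the per-scene min/max normalization is an assumption (fixed physical bounds per channel would serve equally well).

```python
# Minimal preprocessing sketch (illustrative; the normalization strategy
# is an assumption, not the authors' released code).
import numpy as np
from scipy.ndimage import zoom

def downsample_vis(vis_05km: np.ndarray) -> np.ndarray:
    """Bilinearly resample the 0.5 km VIS grid to the 2 km grid (factor 1/4)."""
    return zoom(vis_05km, 0.25, order=1)  # order=1 selects bilinear interpolation

def normalize(channel: np.ndarray) -> np.ndarray:
    """Linearly rescale a channel to the [-1, 1] range used for training."""
    cmin, cmax = channel.min(), channel.max()
    return 2.0 * (channel - cmin) / (cmax - cmin) - 1.0
```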

Figure 2. Track of Super Typhoon Hinnamnor (source: https://www.weather.go.kr/w/typhoon/typ-history.do).

We also performed qualitative comparisons with climate reanalysis data and satellite-based precipitation products. For this, we used the hourly total precipitation parameter from ERA5, the reanalysis of the European Centre for Medium-Range Weather Forecasts (ECMWF), which represents the accumulated water falling to the earth's surface. We also used the IMERG (Integrated Multi-satellitE Retrievals for GPM) Late Run data from the Global Precipitation Measurement (GPM) mission, a calibrated precipitation product based on multi-satellite microwave estimates. Note that while the KMA radar and our results represent hourly precipitation rates (mm/hr), the compared datasets show cumulative rainfall. The ERA5 data are hourly at 25 km resolution, while the IMERG data are half-hourly at 10 km resolution. Although their resolution is low, reanalysis data combined with ground observations are useful for checking the accuracy of satellite-based precipitation. We chose the GPM product because IMERG is a well-known satellite-based global precipitation map.

3. Method

The short-term forecasting model for typhoon rainfall consists of a two-step process (see Figure 3). First, we used a DL-based video frame prediction model (Seo et al., 2022) to predict future sequences of GK2A satellite imagery. Then, we applied an image-to-image translation model (Jeong et al., 2022) to generate a radar reflectivity map from the predicted satellite images.

Figure 3. Architecture of the proposed disaster monitoring model, which consists of two sequential models. WR-Net, a video frame prediction network, predicts future satellite images based on cloud movements. Using a generative adversarial network, the geostationary rainfall product (GeorAIn) generates a proxy radar reflectivity map from the satellite images.

3.1. Video frame prediction network—WR-Net

Satellite imagery has mainly served as a means of near-real-time monitoring of atmospheric conditions or as auxiliary data for improving the initial conditions of numerical forecasting models. However, recent advances in DL-based video frame prediction (Espeholt et al., 2022; Seo et al., 2022) have made it possible to predict future satellite images. The objective of this study is to leverage these techniques to generate future satellite images and thereby better prepare for typhoon-related disasters in advance.

To achieve this objective, we employ the WR-Net model proposed by Seo et al. (2022). WR-Net is a DL-based model that generates future images from two consecutive past images. Initially developed for frame interpolation to increase the temporal resolution of GEO satellite observations, WR-Net comprises a two-step network: a warping component that uses manually extracted optical flow and a refinement component that adjusts the intensity changes of each pixel. In the weather and climate community, optical flow has traditionally been used for short-term precipitation forecasting by extrapolating the movement of precipitation systems from weather radar observations. However, extrapolation cannot predict the development and dissipation of clouds. To address this, WR-Net includes a learning-based refinement network; its results show significantly improved skill scores compared to WR-Net without the refinement network.
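To make the two-step idea concrete, the sketch below shows how a past frame can be backward-warped along an optical-flow field and then adjusted by a small refinement network. This is a conceptual illustration under our own assumptions (a toy refinement head, single-channel frames), not the released WR-Net implementation.

```python
# Conceptual warp-then-refine sketch in PyTorch (not the actual WR-Net code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp frame (B,1,H,W) along a flow field (B,2,H,W) in pixels."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame.device)  # (2,H,W), x then y
    coords = grid.unsqueeze(0) + flow                   # displaced sampling coordinates
    coords[:, 0] = 2.0 * coords[:, 0] / (w - 1) - 1.0   # normalize x to [-1, 1]
    coords[:, 1] = 2.0 * coords[:, 1] / (h - 1) - 1.0   # normalize y to [-1, 1]
    return F.grid_sample(frame, coords.permute(0, 2, 3, 1), align_corners=True)

# Toy refinement head: adjusts per-pixel intensities of the warped frame.
refine = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def predict_next(prev_frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    warped = warp(prev_frame, flow)
    return warped + refine(torch.cat([warped, prev_frame], dim=1))
```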

3.2. Generative adversarial network—GeorAIn

Meteorological GEO satellites provide rain-rate information based on the reflectance or brightness temperature observed in the VIS and IR channels. While these satellites offer relatively high spatial and temporal resolution, the physical relationship between rain rate and brightness temperature ( $ {T}_B $ ) is highly complex and nonlinear because of the different characteristics of each channel. This makes precipitation measurement from GEO satellites challenging. However, DL-based precipitation retrieval algorithms can handle these complex relationships by using multiple hidden layers to model the nonlinear interactions between the input ( $ {T}_B $ ) and the output (radar reflectivity or rain rate).

In this study, we apply the Pix2PixCC model to generate a rainfall map from GK2A satellite images; we name the resulting product GeorAIn. Pix2Pix is a popular image-to-image translation method based on a conditional generative adversarial network (cGAN). GANs are known to produce high-quality image generation models through an adversarial training process between a generator and a discriminator. However, the early version of Pix2Pix struggled to generate high-resolution images. To solve this, Pix2PixHD (Wang et al., 2018) was proposed. Despite its success at high resolution, GANs are known for their lack of interpretability, as they often generate images that are inconsistent with the input data. To address this challenge, Jeong et al. (2022) improved Pix2PixHD by adding an inspector that guides the generator to create physically consistent images by calculating the correlation coefficient between real and generated images; they named the model Pix2PixCC. With this model, we can produce more accurate precipitation maps from GK2A satellite images, which can aid in better understanding and predicting weather patterns.
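The inspector idea can be summarized in a short sketch: a Pearson correlation term is added to the usual adversarial generator loss, so the generator is penalized when its output decorrelates from the target. The exact form and the loss weighting (lambda_cc) are our assumptions for illustration, not the published Pix2PixCC code.

```python
# Hedged sketch of a correlation-guided generator loss in the spirit of
# Pix2PixCC; the published model's exact formulation may differ.
import torch

def correlation_coefficient(fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
    """Pearson correlation over all pixels of the generated and real images."""
    f = fake.flatten() - fake.mean()
    r = real.flatten() - real.mean()
    return (f * r).sum() / (f.norm() * r.norm() + 1e-8)

def generator_loss(adv_loss: torch.Tensor, fake: torch.Tensor,
                   real: torch.Tensor, lambda_cc: float = 1.0) -> torch.Tensor:
    # Maximizing the correlation is equivalent to minimizing (1 - cc).
    return adv_loss + lambda_cc * (1.0 - correlation_coefficient(fake, real))
```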

4. Results and Discussion: Hinnamnor Case Study

4.1. Predicting satellite images from WR-Net

We predict future satellite images with the video frame prediction model (Section 3.1). Figure 4 shows a case for the IR channel. Using two consecutive IR images (0000 and 0100 UTC), WR-Net iteratively predicts IR images for the next 6 hr (until 0700 UTC) at one-hour intervals. Figure 4a shows the original GK2A IR images, Figure 4b the WR-Net predictions, and Figure 4c the difference map between them. The WR-Net-predicted images preserve the location and shape of the clouds to some extent. However, as the lead time increases, the predicted clouds become fragmented and dimmed. Figure 4c shows that errors accumulate gradually and the image differences grow.
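The iterative scheme can be written compactly as below; the model(a, b) call signature and the step count are illustrative assumptions, but the structure explains why errors compound with lead time.

```python
# Iterative rollout sketch: each prediction is fed back as input
# (the model's call signature is assumed for illustration).
def rollout(model, frame_t0, frame_t1, steps: int = 6):
    frames = [frame_t0, frame_t1]
    for _ in range(steps):
        frames.append(model(frames[-2], frames[-1]))  # predict one hour ahead
    return frames[2:]  # the six predicted hourly frames
```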

Figure 4. IR images from (a) the GK2A 10.5 $ \mu $m IR channel and (b) WR-Net, using the optical flow between 0000 and 0100 UTC on September 9, 2022. Bright pixels in both image sets indicate cloud areas. (c) The difference map between (a) and (b). The color bars indicate brightness temperature (K).

For the other channels, Figure 5 compares the histograms of the original GK2A images and the WR-Net-predicted images. The two histograms have similar distributions of albedo and brightness temperature. However, as the lead time increases, the differences at the distribution extremes become more noticeable. Cloud pixels in satellite images generally have high albedo and low temperature, so cloud areas occupy the tails of the distributions. Unfortunately, the training dataset lacks a sufficient number of cloud pixels in these tail regions to capture cloud development features. This result indicates that the images predicted by WR-Net lack cloud areas and information. To address this issue, we conducted additional experiments in which only the IR channel, which best identifies convective clouds among the three channels, was replaced with the WR-Net results.
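A comparison such as Figure 5 can be reproduced with a few lines of matplotlib; the bin edges and styling here are our assumptions.

```python
# Sketch of the channel-histogram comparison in Figure 5 (bins assumed).
import numpy as np
import matplotlib.pyplot as plt

def compare_histograms(original: np.ndarray, predicted: np.ndarray) -> None:
    """Overlay log-scale histograms of an original and a predicted channel."""
    bins = np.linspace(-1.0, 1.0, 64)  # channels are normalized to [-1, 1]
    plt.hist(original.ravel(), bins=bins, alpha=0.5, label="GK2A original", log=True)
    plt.hist(predicted.ravel(), bins=bins, alpha=0.5, label="WR-Net predicted", log=True)
    plt.xlabel("normalized pixel value")
    plt.ylabel("pixel count (log scale)")
    plt.legend()
    plt.show()
```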

Figure 5. Comparison of the histograms of the original GK2A image (blue bars) and the WR-Net-predicted image (brown bars) for each channel: (a) the 0.64 $ \mu $m visible channel, (b) the 6.03 $ \mu $m water vapor channel, and (c) the 10.5 $ \mu $m infrared channel. All values are shown on a log scale.

4.2. Monitoring Typhoon rainfall by GeorAIn

We generate radar reflectivity maps from the three satellite channels using the GeorAIn model and then compare them with the KMA radar products. Figure 6 is a 2D histogram showing the correlation between the GeorAIn results obtained from the three original GK2A input channels and those obtained from the two original channels combined with the WR-Net-predicted IR channel. Because the images were normalized to the [−1, 1] range for model training, the prediction results include negative values after reconstruction. Generally, radar reflectivity above 35 dBZ indicates moderate to heavy precipitation. In this figure, most pixel values lie near 0 dBZ, corresponding to clear-sky pixels, because satellite imagery contains a much higher percentage of non-cloud pixels than cloud pixels. The black line marks perfect agreement between the two results. The predictions begin at 0100 UTC; the 2-hr result shows a high correlation of about 0.89, which decreases as the lead time increases. Despite this decrease, the correlation remains significant, about 0.75, up to 5 hr later (0500 UTC). This means that for up to 5 hr, the WR-Net-predicted satellite images can effectively replace the original satellite images and yield similar proxy radar reflectivity results.
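The comparison in Figure 6 amounts to a pixel-wise correlation coefficient and a log-scaled 2D histogram, sketched below; the bin count and dBZ range are assumptions.

```python
# Sketch of the Figure 6 evaluation (bin count and value range assumed).
import numpy as np

def compare_reflectivity(ori: np.ndarray, gen: np.ndarray):
    """Return the Pearson correlation and a log-scaled 2D histogram."""
    coeff = np.corrcoef(ori.ravel(), gen.ravel())[0, 1]
    hist, xedges, yedges = np.histogram2d(
        ori.ravel(), gen.ravel(), bins=100, range=[[-10, 60], [-10, 60]]
    )
    return coeff, np.log1p(hist)  # log scale for display, as in Figure 6
```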

Figure 6. 2D histograms of the radar reflectivity results from the GeorAIn model for different input-channel combinations. 'ori_radar_reflectivity' is the result from the three original GK2A channels (VIS, WV, and IR), and 'gen_radar_reflectivity' is the result from the two original GK2A channels (VIS, WV) combined with the WR-Net-predicted IR image. 'coeff' in each subtitle is the correlation coefficient between the two results. The color bar shows the data frequency on a log scale.

For comparison, the predicted radar reflectivity ($ Z $) is converted to rain rate ($ R $) using the Z–R relationship (Marshall and Palmer, 1948): $ Z=a{R}^b $. The coefficients $ a $ and $ b $ are determined empirically according to precipitation type and region. We use the most common Marshall–Palmer coefficients, $ a=200 $ and $ b=1.6 $, for stratiform precipitation, which are applicable to the Korean region.
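The conversion is a one-line inversion of the Z–R relationship. We assume the reflectivity is expressed in dBZ, the usual radar convention, so the decibel scaling is undone first; that unit assumption is ours, as the text does not state it explicitly.

```python
# Invert Z = a * R**b (Marshall-Palmer: a=200, b=1.6) for rain rate in mm/hr;
# the dBZ-to-linear step assumes the input follows the usual radar convention.
import numpy as np

def dbz_to_rain_rate(dbz: np.ndarray, a: float = 200.0, b: float = 1.6) -> np.ndarray:
    z = 10.0 ** (dbz / 10.0)      # dBZ -> linear reflectivity factor Z
    return (z / a) ** (1.0 / b)   # R = (Z / a)**(1/b)
```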

Figure 7 compares the GeorAIn predictions with several other datasets over Korea: the KMA radar product, climate reanalysis data (ERA5), and the GPM IMERG product. In the KMA radar image (Figure 7a), the gray-masked area at the edge represents regions where ground-based radar observations are unavailable, showing the radar's spatial limitations over North Korea and the ocean. Figure 7b,c show projection maps of the GeorAIn predictions. Because these results are predicted from satellite images, they are not subject to the spatial limitations of ground-based radar. The GeorAIn model shows promising capability in predicting the location and shape of precipitation, comparable to radar observations. In particular, the result using the original GK2A channels (Figure 7b) successfully captures the heavy precipitation (above 10 mm/hr) near Jeju Island, south of the Korean Peninsula. These findings show that GeorAIn not only estimates location, as the precipitation-probability predictions of typical DL-based models do, but also accurately predicts rainfall intensity. However, the result in Figure 7c, which uses the WR-Net-predicted image as input, has markedly smaller rain areas and lower intensity than the other two results.

Figure 7. Comparison of the KMA radar data and predicted results: (a) rain rate from the KMA radar product, (b) from the GeorAIn model with GK2A channels, and (c) from the GeorAIn model with GK2A channels and the WR-Net-predicted IR image; (d) hourly total precipitation from ERA5; and (e) the IMERG precipitation product. The color bar indicates rain rate (mm/hr).

This discrepancy may be attributed to the gradual decrease in cloud coverage discussed in Section 4.1. Nevertheless, the location of heavy rainfall near the typhoon and the overall cloud distribution are well predicted, demonstrating the model's potential for radar map prediction and enhancing its utility in disaster monitoring applications.

Figure 7d,e show the ERA5 hourly total precipitation and the IMERG-Late precipitation from GPM observations. The ERA5 data, whose resolution is more than ten times coarser than GK2A's (25 km versus 2 km), primarily capture the general cloud patterns rather than detailed rainfall information. These accumulated rainfall data show heavy precipitation in clouds near the central Korean Peninsula and Jeju Island, but the detailed locations and patterns are not accurately represented. In contrast to the other datasets, the IMERG product shows limited or negligible precipitation because it relies on low-earth-orbit satellites, which can observe a precipitation system only when they pass over it.

5. Summary and Conclusion

We applied our DL-based disaster monitoring model, which combines WR-Net and GeorAIn, to predict rainfall for Typhoon Hinnamnor using geostationary satellite images.

WR-Net is a video frame prediction network that generates future satellite images using optical flow and refinement methods. GeorAIn is a DL-based model that generates proxy radar reflectivity maps from the VIS, WV, and IR images of GK2A, enabling the production of high-quality target products such as KMA radar reflectivity. While GeorAIn can generate accurate radar maps from input satellite images, it cannot forecast future scenes by itself, a capability that is crucial for accurate and extended forecasting in disaster monitoring models.

To solve this issue, we used the WR-Net results as input to the GeorAIn model to predict future heavy rainfall. The IR images generated by WR-Net yielded high correlation coefficients with the results from the original GK2A channels: over 0.8 at 3 hr and 0.75 at 5 hr into the future. Compared with the rain rate from the KMA radar and other precipitation datasets, our predictions show very similar cloud patterns and locations. Moreover, the heavy rainfall area (over 10 mm/hr) is preserved in future frames. This means our model can predict the timing, location, and intensity of heavy rainfall events.

We expect that further studies can mitigate the cloud-diminishing issue found in WR-Net and explore cloud cell generation conditions based on the underlying physics. In addition, research such as Kim et al. (2019) can be used to generate VIS-channel images that are unavailable at night and use them as input to our model. For GeorAIn, we plan to apply conditional weighting functions to preserve characteristics specific to region and precipitation type.

Our model has significant potential for disaster monitoring and heavy precipitation forecasting. It enables us to generate radar reflectivity and rain-rate products for regions where radar data are limited or unavailable. We expect our results to help authorities set up impact-based, accurate alerting systems. To extend these results globally, we also plan to produce a global radar map with this model by using other GEO satellites and adjusting for bias and local characteristics.

Author contribution

Conceptualization: D.K., Y.C.; Data visualization: D.K., S.S.; Methodology: Y.C., M.S., H-J.J.; Writing draft: D.K., Y.C. All authors approved the final submitted draft.

Competing interest

The authors declare none.

Data availability statement

The GK2A data used in this study can be found in NMSC: https://nmsc.kma.go.kr/enhome/html/main/main.do. The radar data can be found in KMA: https://data.kma.go.kr/resources/html/en/aowdp.html.

Funding statement

This work received no specific grant from any funding agency, commercial or not-for-profit sectors.

Provenance statement

This article is part of the Climate Informatics 2023 proceedings and was accepted in Environmental Data Science on the basis of the Climate Informatics peer review process.

References

Alfieri, L, Salamon, P, Pappenberger, F, Wetterhall, F and Thielen, J (2012) Operational early warning systems for water-related hazards in Europe. Environmental Science & Policy 21, 35–49.
Bellon, A, Zawadzki, I, Kilambi, A, Lee, HC, Lee, YH and Lee, G (2010) McGill algorithm for precipitation nowcasting by Lagrangian extrapolation (MAPLE) applied to the South Korean radar network. Part I: Sensitivity studies of the variational echo tracking (VET) technique. Asia-Pacific Journal of Atmospheric Sciences 46(3), 369–381.
Cools, J, Innocenti, D and O'Brien, S (2016) Lessons from flood early warning systems. Environmental Science & Policy 58, 117–122.
Espeholt, L, Agrawal, S, Sonderby, C, Kumar, M, Heek, J, Bromberg, C, Gazen, C, Carver, R, Andrychowicz, M, Hickey, J, Bell, A and Kalchbrenner, N (2022) Deep learning for twelve hour precipitation forecasts. Nature Communications 13(1), 5145.
Jeong, H-J, Moon, Y-J, Park, E, Lee, H and Baek, J-H (2022) Improved AI-generated solar farside magnetograms by STEREO and SDO data sets and their release. The Astrophysical Journal Supplement Series 262(2), 50.
Kim, K, Kim, J-H, Moon, Y-J, Park, E, Shin, G, Kim, T, Kim, Y and Hong, S (2019) Nighttime reflectance generation in the visible band of satellites. Remote Sensing 11(18), 2087.
Marshall, JS and Palmer, WMK (1948) The distribution of raindrops with size. Journal of Meteorology 5, 165–166.
Pulkkinen, S, Chandrasekar, V, von Lerber, A and Harri, A-M (2020) Nowcasting of convective rainfall using volumetric radar observations. IEEE Transactions on Geoscience and Remote Sensing 58(11), 7845–7859.
Ravuri, S, Lenc, K, Willson, M, Kangin, D, Lam, R, Mirowski, P, Fitzsimons, M, Athanassiadou, M, Kashem, S, Madge, S, Prudden, R, Mandhane, A, Clark, A, Brock, A, Simonyan, K, Hadsell, R, Robinson, N, Clancy, E, Arribas, A and Mohamed, S (2021) Skilful precipitation nowcasting using deep generative models of radar. Nature 597(7878), 672–677.
Seed, A (2003) A dynamic and spatial scaling approach to advection forecasting. Journal of Applied Meteorology and Climatology 42(3), 381–388.
Seo, M, Choi, Y, Ryu, H, Park, H, Bae, H, Lee, H and Seo, W (2022) Intermediate and future frame prediction of geostationary satellite imagery with warp and refine network. Available at https://arxiv.org/abs/2303.04405 (accessed 8 March 2023).
Seo, M, Kim, D, Shin, S, Kim, E, Ahn, S and Choi, Y (2022) Simple baseline for weather forecasting using spatiotemporal context aggregation network. Available at https://arxiv.org/abs/2212.02952 (accessed 10 December 2022).
Shi, X, Gao, Z, Lausen, L, Wang, H, Yeung, D-Y, Wong, W-k and Woo, W-c (2017) Deep learning for precipitation nowcasting: A benchmark and a new model. Advances in Neural Information Processing Systems 30.
Wang, T-C, Liu, M-Y, Zhu, J-Y, Tao, A, Kautz, J and Catanzaro, B (2018) High-resolution image synthesis and semantic manipulation with conditional GANs. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8798–8807. https://doi.org/10.1109/CVPR.2018.00917
Yoon, S, Jeong, C and Lee, T (2014) Flood flow simulation using CMAX radar rainfall estimates in orographic basins. Meteorological Applications 21(3), 596–604.