Author Contributions: Conceptualization, J.Z. and J.X.; methodology, S.H.; software, J.Z.; validation, J.Z., J.X. and S.H.; formal analysis, J.Z.; investigation, J.Z.; resources, J.Z.; data curation, J.Z.; writing—original draft preparation, J.Z.; writing—review and editing, J.Z.; visualization, J.Z.; supervision, S.H.; project administration, J.Z.; funding acquisition, J.X. All authors have read and agreed to the published version of the manuscript.
Figure 1.
Meteorological satellite cloud picture.
Figure 2.
Some meteorological satellite cloud picture samples.
Figure 3.
Structure of the deep learning convolution layer.
Figure 4.
Structure of the deep learning pooling layer.
Figure 5.
Schematic diagram of the Graph Convolutional Network (GCN) structure.
Figure 6.
Schematic diagram of the Long Short-Term Memory (LSTM) model structure.
Figure 7.
Overall framework of the GCN–LSTM fusion model.
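The fusion framework of Figure 7 pairs a GCN, which encodes the spatial structure of each cloud picture as a graph, with an LSTM, which models the temporal evolution across frames. A minimal NumPy sketch of that pipeline follows; the graph, dimensions, and weight shapes are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gcn_layer(A, X, W):
    # One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 X W)
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

def lstm_step(x, h, c, Wx, Wh, b):
    # One LSTM cell update; gates stacked as [input, forget, output, candidate]
    z = x @ Wx + h @ Wh + b
    H = h.shape[-1]
    i = sigmoid(z[:, :H])
    f = sigmoid(z[:, H:2 * H])
    o = sigmoid(z[:, 2 * H:3 * H])
    g = np.tanh(z[:, 3 * H:])
    c = f * c + i * g
    return o * np.tanh(c), c

rng = np.random.default_rng(0)
T, N, F_in, F_g, H = 5, 4, 3, 6, 8        # time steps, graph nodes, feature sizes (assumed)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)        # toy adjacency for one cloud-picture graph
W_g = rng.standard_normal((F_in, F_g)) * 0.1
Wx = rng.standard_normal((N * F_g, 4 * H)) * 0.1
Wh = rng.standard_normal((H, 4 * H)) * 0.1
b = np.zeros(4 * H)

h = np.zeros((1, H))
c = np.zeros((1, H))
for t in range(T):
    X_t = rng.random((N, F_in))                       # node features at time t
    spatial = gcn_layer(A, X_t, W_g).reshape(1, -1)   # GCN spatial encoding
    h, c = lstm_step(spatial, h, c, Wx, Wh, b)        # LSTM temporal update
print(h.shape)  # (1, 8)
```

The final hidden state `h` would then feed a classification head for the typhoon level; in a trained system the weights would of course be learned rather than random.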
Figure 8.
Performance comparison results of different models (Note: M1 is the ANN model, M2 is the Recurrent Neural Network (RNN) model, M3 is the GCN model, M4 is the LSTM model, M5 is the GCN–LSTM model, and M6 is the RNN–LSTM model, where 0–200 is the test set and 200–1000 is the training set). Figures (A–E) show the performance results of the different algorithms in accuracy, precision, recall, recognition rate, and processing speed under different test sets.
Figure 9.
Feature extraction effect of different sizes of convolution kernels (Note: Q1 (1 × 1 convolution kernel), Q2 (3 × 3 convolution kernel), Q3 (5 × 5 convolution kernel), Q4 (7 × 7 convolution kernel), Q5 (8 × 8 convolution kernel), and Q6 (9 × 9 convolution kernel)). Figure (A) shows the feature extraction effect under different convolution kernels on the training set; Figure (B) shows the feature extraction effect under different convolution kernels on the test set.
Figure 10.
Feature extraction results of selected convolution kernels.
Figure 11.
Model accuracy under different convolution kernels. Figure (A) shows the accuracy of the model under different convolution kernels on the training set; Figure (B) shows the accuracy of the model under different convolution kernels on the test set.
Figure 12.
Model accuracy at different neuron ratios. Figure (A) shows the accuracy of the model under different neuron ratios on the training set; Figure (B) shows the accuracy of the model under different neuron ratios on the test set.
Figure 13.
Model stability test results. Figures (A,B) illustrate the mean absolute errors of the GCN–LSTM model after 6 h and 12 h training on the data of 2010–2019, respectively. Figure (C) shows the number of typhoons identified and predicted by the LSTM, RNN–LSTM, and GCN–LSTM models.
Table 1.
Typhoon level standard labels.
Typhoon Level | Maximum Wind Speed (kt) | Maximum Wind Speed (m/s) |
---|---|---|
Tropical depression | <34 | <17 |
Typhoon | >34–<64 | >17–<33 |
Strong typhoon | >64–<85 | >33–<44 |
Super Typhoon | >85–<105 | >44–<54 |
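The level boundaries in Table 1 can be expressed as a simple threshold classifier. The sketch below is an illustrative reading of the table, interpreting each band as a half-open interval (lower bound inclusive) and labeling any wind at or above 85 kt as Super Typhoon, since the table lists no level beyond it; the function name is ours.

```python
def typhoon_level(max_wind_kt: float) -> str:
    """Map a maximum wind speed in knots to a Table 1 level label.

    Bands are treated as half-open intervals; winds >= 85 kt are all
    labeled Super Typhoon, although Table 1 itself stops at <105 kt.
    """
    if max_wind_kt < 34:
        return "Tropical depression"
    if max_wind_kt < 64:
        return "Typhoon"
    if max_wind_kt < 85:
        return "Strong typhoon"
    return "Super Typhoon"

print(typhoon_level(30))   # Tropical depression
print(typhoon_level(70))   # Strong typhoon
print(typhoon_level(90))   # Super Typhoon
```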
Table 2.
Comprehensive evaluation of typhoon level prediction.
Categorical Data | Tropical Depression (0-) | Typhoon (1-) | Strong Typhoon (2-) | Super Typhoon (3-) |
---|---|---|---|---|
Tropical depression | 83.36 | 12.67 | 9.59 | 3.28 |
Typhoon | 1 | 95.12 | 0 | 0 |
Strong typhoon | 1 | 1 | 93.24 | 7.24 |
Super Typhoon | 0 | 0 | 1 | 95.12 |