NWC REU 2018
May 21 - July 31

 

 

Verification of Hail Forecasts Produced by Machine-Learning Algorithms

Sarah McCorkle

 

What is already known:

  • Hail damages property, crops, and livestock, resulting in billions of dollars in losses each year
  • The ability to better predict a hail event even a day in advance can help mitigate its risks
  • Machine-learning algorithms have already shown skill in predicting hail, as they can identify areas where hail will be a threat
  • The HREFv2 model is an operational ensemble model used by NOAA, and its output was used to create machine-learning forecasts during the HWT Spring Experiment in 2018

What this study adds:

  • By verifying these machine-learning forecasts, we can find weaknesses and make improvements to the algorithm
  • This study shows that the HREFv2 ML forecasts have an over-forecasting bias
  • The raw HREFv2 forecasts are calibrated to the observations and to the SPC practically perfect forecasts using isotonic regression (a sketch of this calibration step follows this list)
  • The corrected HREFv2 forecasts became more reliable when calibrated to the observational data
  • Post-calibration is necessary if the HREFv2 ML forecasts are to be used in operational forecasting applications
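
The calibration step named above can be illustrated with a minimal sketch using isotonic regression from scikit-learn. This is an example under assumed inputs: the arrays raw_probs and observed are synthetic placeholders, not the study's HREFv2 ML forecasts or hail observation data.

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    rng = np.random.default_rng(0)

    # Synthetic raw forecast probabilities with a built-in over-forecasting
    # bias: hail is "observed" less often than the raw probability implies.
    raw_probs = rng.uniform(0.0, 1.0, size=5000)
    observed = (rng.uniform(size=5000) < 0.6 * raw_probs).astype(float)

    # Fit a monotonic mapping from raw probability to observed relative frequency.
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
    iso.fit(raw_probs, observed)

    # Apply the fitted mapping to new raw forecasts to get calibrated probabilities,
    # which are pulled downward to correct the over-forecasting.
    new_raw = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
    print(np.round(iso.predict(new_raw), 3))

In practice the mapping would be fit on one set of forecast days and applied to independent days, so that verification of the calibrated forecasts is not circular.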

Abstract:

Hail can result in billions of dollars' worth of damage every year. The ability to forecast significant hail events even a day in advance can greatly mitigate severe hail risk. Machine-learning (ML) algorithms can identify the areas where hail will be a threat and have already shown skill in producing hail forecasts. Using output from the High Resolution Ensemble Forecast version 2 (HREFv2) model, new forecasts were produced during the Hazardous Weather Testbed (HWT) Spring 2018 experiment for the period April 30 through June 1. Verification is necessary to identify weaknesses in these algorithms so that improvements can be made. The ultimate goal of verifying these forecasts is to show that ML algorithms can skillfully forecast hail, building the trust needed to eventually implement them in operational forecasting. Verification with reliability diagrams revealed an over-forecasting bias. Isotonic regression was used to correct for the HREFv2's tendency to over-forecast. The raw HREFv2 data were calibrated both to the SPC practically perfect forecasts and to the observations. When calibrated to the observations, the corrected HREFv2 produced more reliable forecasts.
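
The reliability diagrams mentioned in the abstract compare binned forecast probabilities against the observed relative frequency of hail in each bin. The sketch below shows one way to compute those binned statistics; the function name reliability_curve and the synthetic data are illustrative assumptions, not the study's verification code or the HWT 2018 data.

    import numpy as np

    def reliability_curve(forecast_probs, outcomes, n_bins=10):
        """For each probability bin, return the mean forecast probability
        and the observed relative frequency of the event."""
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        bin_idx = np.clip(np.digitize(forecast_probs, edges) - 1, 0, n_bins - 1)
        mean_fcst = np.full(n_bins, np.nan)
        obs_freq = np.full(n_bins, np.nan)
        for b in range(n_bins):
            mask = bin_idx == b
            if mask.any():
                mean_fcst[b] = forecast_probs[mask].mean()
                obs_freq[b] = outcomes[mask].mean()
        return mean_fcst, obs_freq

    # Synthetic example in which hail occurs less often than forecast.
    rng = np.random.default_rng(1)
    probs = rng.uniform(size=2000)
    obs = (rng.uniform(size=2000) < 0.5 * probs).astype(float)
    mean_fcst, obs_freq = reliability_curve(probs, obs)

    # Bins where obs_freq falls below mean_fcst plot under the 1:1 line of a
    # reliability diagram, the signature of the over-forecasting bias noted above.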

Full Paper [PDF]