NWC REU 2012

May 21 - July 31

Projects

“Comparing High-Speed Video and Lightning Mapping Array Observations to Investigate the Influence of Ground Strikes on Lightning Flash Characteristics”

Kate Ciampa - University of Oklahoma
Mentors: Dr. William Beasley and Dr. Dan Petersen


This study presents exploratory observations for a set of 28 cloud-to-ground flashes recorded on high-speed video, each with a time resolution of 10,000 frames per second. The high time resolution, along with precise GPS timing, allows a detailed comparison of the videos with corresponding data from the Oklahoma Lightning Mapping Array (LMA). The LMA detects VHF radiation sources throughout the flashes, complementing the videos to provide a more complete visual understanding of flash development. We identify several interesting effects and phenomena related to the ground strike, and highlight the differences in flash development between cloud-to-ground flashes recorded on video and nearby intracloud flashes. Observations start with initial breakdown patterns in the flashes, contrasting the initial structure of the video cloud-to-ground flashes with that of the intracloud flashes. An initial descending discharge pattern is likely to terminate with a ground strike. The opposite initial pattern, a rising succession of source points, was observed in both some video-recorded cloud-to-ground flashes and most nearby intracloud flashes. For cloud-to-ground flashes with an initial rise, abrupt changes in flash development can occur after the ground termination on video. In some cases, breakdown starts suddenly at new altitudes or locations, often associated with the beginning of new large-scale branches. The density of radiation source points can also change following a ground strike on video, often switching from dense, well-defined branching to sparser, more scattered discharge during the continuing current phase. Nearby intracloud flashes typically featured simultaneous defined and scattered breakdown, each developing in a separate layer.

Full Manuscript

The basis for this study:

  • High-speed digital videos image the optical properties of lightning ground strikes with high time resolution and detail
  • Lightning Mapping Arrays detect impulsive VHF radio emissions throughout flash development, using precise time-of-arrival measurements to determine their locations

What this study adds:

  • Combining two frequency windows, optical and VHF, offers a more complete picture of cloud-to-ground lightning development
  • Lightning ground strikes on high-speed video are related to several interesting effects and phenomena observed with the LMA
  • Exploratory investigation reveals differences between video-observed CG flashes and nearby intracloud flashes

“Accuracy of GFS Forecasts: An Examination of High-Impact Cold Season Weather Events”

Daniel Diaz - Florida State University

Mentors: Dr. Steven Cavallo and Dr. Brian Fiedler


Numerical weather prediction models are thought to handle certain situations with a diminished level of skill, such as with baroclinic Rossby wave packets in the midlatitudes and with large-scale regime changes associated with the onset of atmospheric blocking. Using analysis and forecast data from the NCEP GFS model, this study examines the forecast skill of the 2011–2012 cold season in the Northern Hemisphere with the hypothesis that relatively large model error is primarily associated with baroclinic Rossby wave packets and the onset of atmospheric blocking events. Forecast skill is diagnosed by examining forecast model error through the use of Hovmöller diagrams and the 500 hPa geopotential height anomaly correlation skill score. These diagnostics identify cases which exhibit relatively large error in the forecast model, including tropopause-level wave packets and associated forecast error at the surface. One such event, a 955 hPa surface low during November 2011, is examined in detail and found to be associated with relatively large error downstream at later times. Although there are some instances of increased model error in wave packets, coherent patterns of error are not present with every wave packet identified in this study. Additionally, the onset of a long-lived blocking ridge of high pressure around 20° E longitude, which persisted intermittently from the middle of October 2011 to March 2012, is examined. This ridge is found to become stationary after a series of breaking baroclinic Rossby waves impact it, and low skill is seen during its transition from a non-stationary to a stationary ridge. However, no definitive conclusions can be made regarding the source or characteristics of model error for this particular event, and further examination is necessary to gain a better understanding of this event.

Full Manuscript

The basis for this study:

  • Wave packets can be problematic for global NWP models because model error can propagate in a pattern similar to that of a wave packet when these features are present.
  • Waves on the jet stream impact weather at the surface; impacts can include the onset of atmospheric blocking and cyclogenesis.

What this study adds:

  • Forecast model error was diagnosed for the 2011-2012 Northern Hemisphere cold season using the GFS model while identifying particular high-impact weather events.
  • Model error was sometimes associated with wave packets in the upper atmosphere, with corresponding model error reflected at the surface.
  • Our expectation was that model error is generally enhanced with wave packets. Although skill degradation followed this expectation in some circumstances, instances were identified where model error did not follow expectations, so the hypothesis was not confirmed in all cases.

“Intercomparison of Vertical Structure of Storms Revealed by Ground-based (NMQ) and Spaceborne Radars (CloudSat-CPR and TRMM-PR)”

Veronica Fall - Valparaiso University
Mentors: Dr. Yang Hong, Dr. Qing Cao and Nicole Grams

Spaceborne radars provide great opportunities to investigate the vertical structure of clouds and precipitation. Two typical spaceborne radars for such a study are the W-band Cloud Profiling Radar (CPR) and the Ku-band Precipitation Radar (PR), which are onboard NASA’s CloudSat and TRMM satellites, respectively. Compared to S-band ground-based radars, they have distinct scattering characteristics for different hydrometeors in clouds and precipitation. The combination of spaceborne and ground-based radar observations can aid the identification of hydrometeors and improve radar-based quantitative precipitation estimation (QPE). This study analyzes the vertical structure of the 18 January 2009 storm using data from the CloudSat CPR, TRMM PR, and the NEXRAD-based National Mosaic and Multisensor QPE (NMQ) system. Microphysics above, within, and below the melting layer are studied through an intercomparison of multi-frequency measurements. Hydrometeor types and their radar scattering characteristics are analyzed. Additionally, the study of the vertical profile of reflectivity (VPR) reveals the bright-band properties of cold-season precipitation and their effect on radar-based QPE. In all, the joint analysis of spaceborne and ground-based radar data increases understanding of the vertical structure of storm systems and provides good insight into microphysical modeling for weather forecasts.

Full Manuscript

The basis of this study:

  • Microphysical processes in the sub-freezing region are closely related to precipitation at the surface.
  • Understanding these processes may improve microphysical modeling and quantitative precipitation estimation.
  • Multi-frequency observations from ground-based and spaceborne radars provide informative details of the microphysical structure of clouds and precipitation.

What this study adds:

  • The intercomparison of S-band, W-band, and Ku-band radar observations demonstrates multi-frequency scattering characteristics of various hydrometeors.
  • The vertical profile of radar reflectivity reveals the vertical evolution of a cold-season storm.
  • Identification of the melting layer is important to understanding the vertical structure of winter storms.

“Evaluation of Precipitation Diurnal Variability by TRMM: Case of Pakistan's 2010 Intense Monsoon”

Hannah Huelsing - University of Northern Colorado
Mentors: Dr. Yang Hong, Dr. Sadiq Khan and Nicole Grams

This study examines the spatial and temporal distribution of precipitation during the Asian pre-monsoon (AMJ) and monsoon (JAS) seasons in Pakistan over the intense flood year of 2010. This is done using Tropical Rainfall Measuring Mission (TRMM) 3-hourly data with a spatial resolution of 0.25° x 0.25° and comparing the rain rates from 2010 with those from 2005-2009. For better examination, the rainfall data were divided into four categories: the pre-monsoon seasons of 2005-2009, the pre-monsoon season of 2010, the monsoon seasons of 2005-2009, and the monsoon season of 2010. Both the pre-monsoon and monsoon seasons of 2010 showed an increase in rain rates associated with anomalous atmospheric conditions. The only spatial difference exhibited in 2010 was an increase in the intensity of rainfall at slightly higher elevations than normal, which can be attributed to the increase of moisture content in the atmosphere. Otherwise, a majority of the precipitation occurred along the Himalayan foothills in the northeastern region of Pakistan. The temporal shift between the pre-monsoon and monsoon seasons was enhanced in 2010, showing the shift from the deep convection associated with severe storms to the strong, wide convection associated with mesoscale convective systems.

Full Manuscript

The basis for this study:

  • In 2010, Pakistan experienced intense flooding as a result of anomalous atmospheric conditions associated with the Asian Monsoon.
  • Conditions leading to this event have not been established.
  • Understanding the atmospheric patterns associated with the floods will assist with forecasting similar events.

What this study adds:

  • The arrival of the 2005–2010 monsoon seasons exhibited a temporal pattern, correlating the occurrence of precipitation with convection type.
  • The comparison of 2010 TRMM precipitation data with that of other years showed an increase in frequency of rainfall occurrence and intensity for 2010.
  • Increase in precipitation in 2010 led to super-saturation of the soil and extreme run-off.

“Sensitivity of Planetary Boundary Layer Parameterization Schemes on Forecasting Blizzard Conditions for the 11–12 December 2010 Snowstorm”

Nathan Korfe - St. Cloud State University
Mentors: Dr. Heather Reeves and Dr. Adam Clark

Producing blowing snow and visibility forecasts for severe winter storms poses a significant challenge to numerical models. Changing the planetary boundary layer (PBL) parameterization scheme in numerical weather models may improve forecasts of blizzard conditions, but it is uncertain how dependent the forecast is on the choice of PBL scheme. This study examines five experiments, each with a different PBL scheme, using the Weather Research and Forecasting (WRF) model for a winter storm that occurred 11-12 Dec 2010 in the upper Midwest. Of the five experiments, the MYJ does not produce any blizzard conditions, while the MYNN and ACM2 provide the most accurate forecasts of blizzard conditions, with a significant area of surface winds of 15-17 m s-1 in western Iowa. Liquid precipitation and model visibility are also considered. Although very similar over areas with widespread blizzard conditions, MYJ and QNSE produce accurate maximum precipitation forecasts of 55 mm. The model visibility does not show any significant changes from scheme to scheme.

Full Manuscript

The basis for this study:

  • Winter snowstorms associated with large amounts of blowing snow are hazardous for public safety.
  • Surface winds must be accurately forecasted to predict the visibility.
  • Changing the PBL parameterization scheme in numerical models may improve the surface wind forecast for blizzard conditions.

What this study adds:

  • Demonstrates how different PBL schemes can have significant effects on a surface wind speed forecast for a winter storm that occurred on 11 December 2010.
  • Model visibility was not a good predictor of short-term visibility for that case.
  • Provides a basis for further investigation into PBL sensitivity with severe winter storms.

“Investigating the Relationship of Multi-Radar Multi-Sensor Parameters to Tornado Intensity”

Jonathan Labriola - University of Miami
Mentors: Kiel Ortega, Darrel Kingfield and Madison Miller

Derived radar parameters were investigated to determine the correlation between radar products and tornado intensity. More than four hundred tornadoes from eleven tornado outbreaks between 2008 and 2011 were analyzed using WSR–88D radar sites. Radar reflectivity data were quality controlled, and Doppler velocity data were dealiased and then merged to fill in any potential data gaps related to volume coverage geometry or blockages. Derived parameters included reflectivity values at certain heights and maximum azimuthal shear values within certain layers of the atmosphere. The lifetime maximum values of these fields surrounding tornado tracks were extracted and compared to the reported tornado intensities. It was found that lifetime maximums of radar-derived parameters showed little discrimination of tornado intensity. However, calculations of the azimuthal shear area along the tornado paths did show some discrimination.

Full Manuscript

The basis for this study: 

  • There is increasing interest in using WSR-88D velocity fields to diagnose the strength of a tornado while it is occurring.
  • Past work was unable to determine tornado strength using single radar velocity fields.
  • Multi-radar, multi-sensor data and experimental azimuthal shear products have not been tested before.

What this study adds:    

  • Maximum lifetime values for a storm's shear and related velocity parameters do not discern weak (EF0-1) and strong (EF2-5) tornadoes well.
  • Tornado ratings were not dependent on maximum and average population density within the tornado path.
  • Shear area calculations, with limitations and caveats, showed some promise in differentiating between weak and strong tornadoes.

“Evaluating Likely Hail Impacts from SPC Day One Outlooks”

Brittany Recker - Pennsylvania State University
Mentors: Mark Leidner and Dr. Dan Gombos

The purpose of this study is to interpret Storm Prediction Center (SPC) convective outlooks for potential impacts. Radar data were used to provide the link between what was forecast by SPC outlooks and what actually occurred. Several metrics, such as the total area (km2) of SPC outlooks and radar-determined hail, and center-of-mass positions, were calculated in order to evaluate SPC outlooks. It was found that, over the sample size in this study (n = 108 case days), the percentage of radar hail probability area inside the SPC Slight Risk threshold is approximately 23%, while the percentage of SPC Slight Risk or greater covered by radar data is approximately 12%. A correlation coefficient of 0.595 was also found by looking at the relationship between SPC area-weighted probability and the area of radar data that overlaps the SPC Slight Risk areas. These values, along with the other metrics, will allow convective outlook forecast users to be better prepared for the impacts of severe hail.

Full Manuscript

The basis for this study:

  • The Storm Prediction Center (SPC) issues probabilistic forecasts of areal hail coverage.
  • This study seeks to discover the relationship between these forecasts and what actually transpired.

What this study adds:

  • Radar data from 8 March – 8 July 2012 was used to evaluate corresponding SPC convective outlooks.
  • The area covered by radar hail probabilities was an order of magnitude smaller than the area covered by SPC outlooks.
  • 23% of radar hail probabilities fell inside the SPC 15% outlook area.
  • Using the link between what was forecast and what actually occurred, established by radar data, users can be better prepared for the impacts of severe hail.

“Conceptualizing How Forecasters Think About Uncertainty in the Context of Severe Weather”

Astryd Rodriguez - Florida International University
Mentors: Dr. James Correia Jr, Rachel Riley and Gabe Garfield

Uncertainty is inherent in every weather forecast. In order to create better methods to communicate uncertainty to the public and other end users, it is necessary to understand how forecasters think about and understand it. Around twenty hours of observational data from the National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) 2012 Spring Experiment were collected in order to analyze the participants’ assessment of uncertainty in a real forecasting environment. Also, ten in-depth interviews were carried out in which research and operational forecasters were asked several questions regarding uncertainty. The interviews were recorded and transcribed for analytical purposes. Results show that even though the forecasters in this study were aware of the inherent uncertainty in severe weather, they were unable to quantify it, nor did they have a consensus on its definition. Moreover, the forecasters lacked a conceptual model of uncertainty. Instead, they used their internal climatology as both a tool and a framework to describe and assess uncertainty. Finally, population size was the most important non-meteorological factor that they used to assess spatial uncertainty.

Full Manuscript

The basis for this study:

  • Uncertainty in weather is derived from the chaotic nature of the atmosphere, sparseness of weather observations, errors inherent in the observing systems, and the increased availability of numerical models and their amplified use.
  • Forecasters have a hard time understanding uncertainty and, as a result, conveying it effectively among themselves and to end users.

What this study adds:

  • Even though forecasters are aware of the uncertainty inherent in weather, they are not able to quantify it, nor do they have a consensus on its definition.
  • Forecasters lack a conceptual model of uncertainty.
  • Internal climatology is a mental tool forecasters use to assess uncertainty.
  • Population size and verification are the most important non-meteorological factors that influence major forecast decisions.

“A Comparison of Mesoscale Analysis Systems Used for Severe Weather Forecasting”

Rebecca Steeves - North Carolina State University
Mentors: Dr. Dustan Wheatley and Dr. Michael Coniglio

The relative performances of several mesoscale analysis systems are evaluated with regard to severe convective weather forecasting by exploring their ability to reproduce soundings collected in pre-convective and near-storm environments observed during the Verification of the Origins of Rotation in Tornadoes Experiment 2 (VORTEX2) field phase. This was done to investigate a greater use of mesoscale ensemble forecasts in the operational setting. Soundings that matched the geographical locations and release times of the VORTEX2 soundings were extracted from datasets of the Rapid Update Cycle (RUC) model, the Surface Objective Analysis (SFCOA) developed by the Storm Prediction Center, and a Weather Research and Forecasting (WRF) mesoscale ensemble system developed at the National Severe Storms Laboratory (NSSL). Parameters and characteristics important to severe weather forecasting are extracted from the systems’ datasets at the observed sounding locations and compared to the observations. Results show that the mesoscale ensemble forecasts, in many cases, produce smaller errors than the other mesoscale analyses considered when calculating the planetary boundary layer height, surface-based convective available potential energy, the surface-based lifted condensation level, and near-surface temperatures and dew points. Findings thus far display the potential of mesoscale ensemble models to produce an accurate depiction of the mesoscale environment.

Full Manuscript

The basis for this study:

  • An accurate depiction of the mesoscale environment is important to identifying the severe convective weather threat.
  • Numerical weather prediction is an important part of operational severe convective storm forecasting.
  • Evaluating the error and bias characteristics of mesoscale analysis systems (single model and ensemble based) used for severe weather forecasting can assist the forecaster in interpreting the model systems' output.

What this study adds:

  • A preliminary investigation into the use of a WRF-based mesoscale ensemble system for operational severe convective storm forecasting.
  • From the 19 cases used and the specific parameters/characteristics analyzed, the WRF-based ensemble produced errors comparable to or slightly lower than those of the other analysis systems.
  • These results support continued exploration of a WRF-based mesoscale ensemble system for broader operational use.

“Evaluation of a Lightning Jump Algorithm with High Resolution Storm Reports”

Phillip Ware - University of North Carolina Charlotte
Mentors: Dr. Kristin Calhoun, Kiel Ortega and Greg Stumpf

Numerous studies have shown a correlation between rapid increases in lightning activity and the occurrence of severe weather at the surface. The skill of an automated algorithm that detects these rapid increases in lightning, or lightning jumps, was evaluated for 8 different cases in this study using high-resolution storm reports. A completely automated algorithm was used to identify and track storm cells in three domains: central Oklahoma, northern Alabama, and Washington, D.C. Multiple storm attributes, including total lightning, were assigned to each tracked storm at 1-minute intervals. Lightning jumps in each of the 8 cases were then verified using high-resolution storm reports collected during the Severe Hazards Analysis and Verification Experiment (SHAVE). These reports offered much better spatial resolution than NCDC Storm Data and produced a more accurate view of hail and wind evolution, or “severe storm periods,” at the surface. For the 8 cases examined, the algorithm produced an average lead time of 0 minutes when using SHAVE data for verification. Verification statistics were slightly better when using NWS storm reports, though not nearly as good as those noted in previous studies.

Full Manuscript

The basis for this study:

  • A lightning jump algorithm has been shown to be useful in determining whether a storm is severe.
  • Those studies used Storm Data, which is fairly sparse in spatial coverage.
  • High resolution reports (SHAVE) are now available to evaluate the lightning jump in a more complete manner.

What this study adds:

  • Major differences in lightning jump skill are found when using Storm Data versus SHAVE reports.
  • Automated tracking and calculations of the lightning jump are confounded by storm mergers and splits.
  • The lightning jump algorithm did not perform well on storms with very little lightning.

“Toward a Better Understanding of Tornado Fatalities”

Hope Weldon - Jackson State University
Mentors: Greg Carbin and Dr. Harold Brooks

Between 1991 and 2010, more than 400 tornadoes directly caused the deaths of over 1130 people. Although the details surrounding the deaths are available, they are by no means easy to locate, and over 10% of the information was completely missing. By collecting this historical data from numerous sources, including Storm Data, the Storm Prediction Center (SPC) web page, the National Climatic Data Center (NCDC) database, and various newspaper articles, a single, searchable database was created. With this information all in one place, the circumstances surrounding United States tornado fatalities could be analyzed with a more accurate data set for the years involved. In this study the United States was divided into three predetermined regions. Region one, the southeast, contains Alabama, Florida, Georgia, Mississippi, North Carolina, South Carolina, and Tennessee. Region two, the south plains, consists of Arkansas, Louisiana, Oklahoma, and Texas. The remaining states make up the third region. The tornado fatality demographics from these regions were compared to each other as well as to United States 2010 Census data. It can be concluded from the data that in all regions men were killed at a higher rate than women, elderly people died at a greater rate than younger people, people in mobile homes died more than in any other circumstance, and on average just as many fatalities occurred between 8 am and 8 pm as between 8 pm and 8 am. The purpose of this paper is to raise awareness among both the public and the meteorological community by examining the differences and similarities in age, gender, circumstance, and time of day of those fatalities across the different regions.

Full Manuscript

The basis for this study:

  • Over 10% of the information for the 1131 tornado fatalities known to have occurred between 1991–2010 is missing in NCDC's Storm Data database.
  • Identifying vulnerable populations can assist the spectrum of public safety programs in focusing resources to reduce future tornado fatalities.

What this study adds:

  • A more complete data set of fatalities due to killer tornadoes from 1991-2010.
  • Males died more frequently than females.
  • The elderly died at a higher rate than younger people.
  • People in mobile homes were more likely to die from a killer tornado than in any other circumstance.
  • The southeast region alone had nearly as many tornado fatalities as the rest of the country combined.

Copyright © 2012 - Board of Regents of the University of Oklahoma