Projects
Using mPING Observations
to Verify Surface Precipitation Type Forecasts From Numerical
Models
Deanna Apps — SUNY Oswego
Mentors: Dr. Kim Elmore and Dr. Heather Grams
The mPING app allows members of the public to submit reports of the
weather occurring at their location from anywhere on the globe.
This study uses precipitation type reports made through mPING in
the continental United States to verify precipitation type forecasts
of operational numerical models. The models evaluated are the North
American Mesoscale (NAM) model, the Global Forecast System (GFS),
and the Rapid Refresh (RAP) model. Strengths and weaknesses
of each model’s forecast are investigated for freezing rain,
ice pellets, rain, and snow. The Heidke and Peirce skill
scores are used predominantly, along with other performance measures. Overall,
the models show less skill for the rare events of freezing rain
and ice pellets, often forecasting rain or snow in place of those
precipitation types.
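Both skill scores named above are computed from a contingency table of forecast versus observed precipitation type. A minimal sketch, using an illustrative two-category table rather than counts from the study:

```python
def skill_scores(table):
    """Heidke and Peirce skill scores from a square contingency
    table (rows = forecast category, columns = observed category)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    hits = sum(table[i][i] for i in range(k)) / n            # proportion correct
    row_tot = [sum(table[i]) for i in range(k)]              # forecast marginals
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]  # observed marginals
    expected = sum(row_tot[i] * col_tot[i] for i in range(k)) / n ** 2
    hss = (hits - expected) / (1 - expected)
    # Peirce's denominator uses the observed marginals only
    pss = (hits - expected) / (1 - sum((c / n) ** 2 for c in col_tot))
    return hss, pss

# Illustrative 2-category table: [[hits, false alarms], [misses, correct nulls]]
hss, pss = skill_scores([[50, 20], [10, 20]])
```

The same functions extend unchanged to the four-category (freezing rain, ice pellets, rain, snow) case by passing a 4x4 table.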
Full Manuscript
Basis For This Study
- The surface weather observing network reports of precipitation
type are too geographically sparse to adequately verify the quality
of forecasts from numerical weather prediction models.
- A new smartphone app called mPING allows members of the public
to submit reports of the weather occurring at their location.
- This study uses quality-controlled mPING data to examine how
well the RAP, NAM, and GFS weather prediction models can forecast
rare precipitation types versus more common precipitation types.
What This Study Adds
- Two events in February 2013 were chosen for analysis because
snow, freezing rain, ice pellets, and rain all occurred in both
events.
- Numerical weather prediction forecasts were simplified into
four main precipitation types that are reported by mPING users:
freezing rain, ice pellets, rain, and snow.
- All three numerical weather prediction models forecasted rain
and snow significantly better than freezing rain or ice pellets.
Determining Which Polarimetric Variables
are Important for Weather/Non-Weather Discrimination Using Statistical
Methods
Samantha Berkseth — Valparaiso University
Mentors: Dr. Valliappa Lakshmanan, Dr. Chris Karstens, and Kiel Ortega
Weather radar is a useful tool for the meteorologist in examining the atmosphere
and determining what types of weather are occurring, how large an area a weather
event might cover, and how severe that event might be. It is also widely used
for automated applications. However, weather radar can pick up on objects other
than just weather, causing the data to become cluttered and harder for forecasters
to decipher. Quality control algorithms can help to identify which echoes returning
to the radar are meteorological and which are not, and they can then remove
such contaminants to create a clearer image for the meteorologist. With the
recent widespread upgrade to dual polarization technology for the WSR-88D (Weather
Surveillance Radar 1988 Doppler) radars, polarimetric variables can be used
in these quality control algorithms, allowing for more aspects of the data
to be analyzed and more of the contamination to be removed. This study analyzes
those polarimetric variables in order to determine which are the most important
for weather/non-weather discrimination. Such research serves to help rank variable
importance and prevent the quality control algorithm from being overfit, thus
aiding in developing the most efficient algorithm for operational use.
Full Manuscript
Basis For This Study
- Non-weather targets detected by weather radar can have
a negative effect on the quality of the data being analyzed.
- Quality control algorithms can help to remove these contaminants
and make the data easier to interpret.
- The dual polarization upgrade to the WSR-88D network allows
for additional radar variables to be analyzed and implemented
in these quality control algorithms.
What This Study Adds
- The importance of different dual polarization variables is
assessed using statistical methods.
- Some statistical methods are more telling than others in revealing
how unique a variable is in aiding with quality control.
The Relationship of Precursory Precipitation
and Synoptic-Scale Mid-Tropospheric Flow Patterns to Spring Tornado
Activity in Oklahoma
Levi Cowan — University of Alaska Fairbanks
Mentors: Marcus Austin, Jonathan Kurtz, Matthew Day, and Michael Scotten
Winter precipitation and 500 hPa geopotential height are analyzed as potential
precursory predictors of spring tornado activity in Oklahoma (OK). The Storm
Prediction Center (SPC) tornado database is used to calculate tornado days
for each of the nine climate divisions in OK. Using daily precipitation totals
from the Climate Prediction Center U.S. Unified Precipitation dataset, Dec-Feb
accumulated precipitation is correlated with Mar-Jun tornado days for each
climate division. Insignificant correlations are found for all climate divisions,
and statistical tests affirm that there is no significant difference in OK
tornadic activity following wet versus dry winters. The synoptic-scale variability
in the Rossby wave pattern over the United States (US) associated with OK tornado
activity may explain the ineffectiveness of precursory precipitation as a predictor,
but also suggests qualitatively that precursory precipitation could be a statistically
significant predictor of tornado activity in other regions of the US (Shepherd
et al. 2009). Geopotential height at 500 hPa (Z500) from NCEP/NCAR reanalysis
is also examined. A statistically significant and temporally consistent relationship
is found between Z500 in the Pacific Northwest region and Mar-Jun statewide
tornado days during 1981-2010 when Z500 is averaged over the preceding 4-month
period (Nov-Feb). Persistent troughing (ridging) over the northwestern US and
southwestern Canada during the winter is found to shift southeastward into
the Rocky Mountains and enhance (suppress) OK tornado activity during the subsequent
spring. This relationship strengthens as lead time is decreased, and may provide
a method for predicting overall tornado activity in OK on a seasonal time scale.
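The correlation analysis described above reduces to computing a Pearson correlation between winter precipitation and spring tornado days and testing it for significance with a t statistic. A minimal sketch, using made-up values rather than data from the study:

```python
import math

def pearson_r_and_t(x, y):
    """Pearson correlation coefficient and its t statistic,
    t = r * sqrt((n - 2) / (1 - r^2)), with n - 2 degrees of freedom."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / math.sqrt(sxx * syy)
    t = r * math.sqrt((n - 2) / (1 - r ** 2))
    return r, t

# Hypothetical winter precipitation totals vs. spring tornado days
r, t = pearson_r_and_t([1, 2, 3, 4, 5], [2, 1, 4, 3, 6])
```

Comparing t against the critical value for n - 2 degrees of freedom is what distinguishes the insignificant precipitation correlations from the significant Z500 relationship.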
Full Manuscript
Basis For This Study
- Seasonal tornado forecasts are not yet offered in either an
operational or an experimental capacity.
- Skillful seasonal outlooks could be very beneficial to emergency
managers, operational forecasters, and the public.
- Little research
has been focused on developing seasonal-scale predictive methods.
What This Study Adds
- Local precipitation and jet stream configuration in the winter
are investigated as potential predictors of spring tornado activity
in Oklahoma.
- Local wintertime precipitation is found to be a poor predictor
of springtime tornado activity in Oklahoma.
- The Nov-Feb 500 hPa geopotential height anomaly in the Pacific
Northwest region is found to be useful for predicting subsequent
Mar-Jun tornado days in Oklahoma. This relationship could contribute
to the development of seasonal tornado forecasts.
An Evaluation of the Climate Forecast
System Version 2 as an Extended Range Forecasting Tool in the Storm
Prediction Center
Joshua Crittenden — East Central University
Mentors: Dr. Harold Brooks, Greg Carbin, Dr. Sean Crowell, and
Dr. Patrick Marsh
As of today, extended range forecasts cannot be made on a consistent
day-to-day basis. The ability of forecasters to predict severe weather
beyond a three-day lead time is limited. If forecasters could make
reliable and consistent extended range forecasts, the safety of
the public would be enhanced by severe weather warnings issued
several days in advance.
In order to potentially give forecasters
a new tool in assisting with extended range forecasting of severe
weather, the Climate Forecast System Version 2 (CFSv2) and its forecasts
are being examined and compared with the Storm Prediction Center
(SPC) Day 4–8 forecasts and also compared with actual reported
events.
Even accounting for days without severe weather, few SPC Day
4–8 Severe Weather Outlooks contain actual forecasts.
The CFSv2 has shown an ability to reliably forecast severe (or lack
of severe) weather with a day four lead time and moderate reliability
at day five. Although the CFSv2’s capability to forecast reliably
beyond day five is, to some degree, limited, in this paper it is
shown that the CFSv2 does have potential as an extended range forecasting
tool.
Full Manuscript
Basis For This Study
- Over half of the Storm Prediction Center’s (SPC) Day
4–8 Severe Weather Outlooks for the CONUS are a forecast
of “Predictability Too Low.”
- The Climate Forecast System Version 2 (CFSv2) is a fully coupled
ocean-land-atmosphere seasonal prediction model from which severe
weather proxies can be derived.
- This study investigates whether consistency of severe weather
proxies in CFSv2 forecasts can aid SPC forecaster decision making
for Day 4–8 outlooks.
What This Study Adds
- Statistics of SPC Day 4–8 Severe Weather Outlooks were
calculated for January – June 2013, with a focus on May
and June, which contained contrasting cases of severe weather
forecasting in the SPC.
- Severe weather forecasts were approximated by calculating the
Supercell Composite Parameter from CFSv2 output.
- SPC Outlooks and Filtered Storm Reports were used to assess
forecast quality from the CFSv2 for May and June 2013.
- Cases studied indicate consistency of a severe weather proxy
in CFSv2 output may assist SPC forecasters in providing more
specific severe weather information in Day 4–5 forecasts.
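The Supercell Composite Parameter used as the severe weather proxy combines instability, storm-relative helicity, and bulk shear into a single index. One published (effective-layer) formulation can be sketched as follows; the specific ingredient layers used in the study are not restated here, so treat this formulation as an assumption:

```python
def supercell_composite(mucape, esrh, ebwd):
    """Supercell Composite Parameter, effective-layer formulation:
    SCP = (MUCAPE / 1000 J kg^-1) * (ESRH / 50 m^2 s^-2) * shear_term,
    where the bulk-shear term is 0 below 10 m/s, EBWD/20 between
    10 and 20 m/s, and capped at 1 above 20 m/s."""
    if ebwd < 10.0:
        shear_term = 0.0
    elif ebwd > 20.0:
        shear_term = 1.0
    else:
        shear_term = ebwd / 20.0
    return (mucape / 1000.0) * (esrh / 50.0) * shear_term

# e.g. MUCAPE = 2000 J/kg, ESRH = 200 m^2/s^2, EBWD = 30 m/s
scp = supercell_composite(2000.0, 200.0, 30.0)
```

Values above roughly 1 indicate an environment supportive of supercells, which is why day-to-day consistency of this field in CFSv2 output can serve as a proxy signal.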
Tornado Damage Mitigation: What National
Weather Center Visitors Know and Why They Aren't Mitigating
Kody Gast — University of Northern Colorado
Mentors: Dr. Jerry Brotzge, Dr. Daphne LaDue
A survey was conducted of adults touring the National Weather
Center in Norman, Oklahoma during the summer of 2013 to understand
what the visitors know in regards to mitigation and what factors
impact mitigation behavior. Survey questions were summarized into
four categories: background knowledge of tornadoes and tornado
damage, knowledge of mitigation, estimation of risk, and factors
impacting mitigation activities. Many visitors did not know that
mitigation against tornado damage is possible and that homes can
be designed or retrofitted to withstand a majority of the damage
that tornadoes can cause. Among nine key terms of mitigation, only
four terms were marked by more than 20% of respondents, signifying
that many of the visitors did not know about mitigation. Reasons
why people are not mitigating include not knowing what to do,
not perceiving a great enough risk, and the costliness of mitigation.
Full Manuscript
Basis For This Study
- Recent engineering advances now make it possible to apply home
construction techniques to prevent much of the damage caused
by winds up to EF–2 scale (135 mph) in strength.
- The vast majority of homes have few construction methods applied
that mitigate against damage from tornado strength winds, despite
the relatively low cost in doing so.
- The factors either inhibiting or encouraging the adoption of
new mitigation techniques by the public are largely unknown.
What This Study Adds
- This study confirms that in general, the public has little
specific understanding of many of the terms used in describing
mitigation or of the actual engineering steps needed in improving
mitigation.
- Significant barriers preventing greater adoption of mitigation
include the high cost, the hassle of getting it done, and not
knowing how to get started.
- Nevertheless, a significant portion of the public may be willing
to spend $1,000 or more on mitigation activities, particularly
if they have a relatively high household income or consider themselves
at risk for tornado activity.
Using Bragg Scatter to Estimate Systematic
Differential Reflectivity Biases on Operational WSR-88Ds
Nicole Hoban — University of Missouri, Columbia
Mentors: Dr. Jeffrey Cunningham, Dr. David Zittel
This study examines the feasibility of using Bragg scatter to
estimate systematic differential reflectivity (ZDR) biases on
operational WSR-88Ds. ZDR greatly impacts rain rate estimates.
At constant reflectivity, a 0.25 dB bias in ZDR will yield a 22%
error in rain rate estimates for the rain rate equation currently
implemented in the WSR-88D radar product generator. Prior to this
study, the Radar Operations Center (ROC) used plan position indicator
scans of light rain (i.e., the “scanning weather method”)
to monitor systematic ZDR biases on a fleet of 159 operational
WSR-88Ds. While the scanning weather method is reliable for identifying
radar calibration trends, it is too imprecise for absolute ZDR
calibration because systematic ZDR bias estimates from the scanning
weather method are subject to big-drop contamination. Data filters
based on single and dual polarization variables and two statistical
filters were used to isolate Bragg scatter from clutter, biota,
and precipitation. Six radars were examined in detail for May and
June 2013 from 1400-2200 UTC each day. Systematic ZDR bias estimates
from Bragg scatter were compared to reliable estimates from the
scanning weather method. Bragg scatter derived systematic ZDR biases
were comparable to those estimated by the weather method; most
cases were within 0.20 dB. With these filters, Bragg scattering
was found most frequently between 1400-2200 UTC. More cases of
Bragg scattering were found in May than in June. This study demonstrates
that Bragg scattering offers an alternative method for monitoring
systematic ZDR biases on the WSR-88D fleet.
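The quoted sensitivity (a 0.25 dB ZDR bias producing a 22% rain rate error) follows from ZDR entering rain rate relations as a power-law factor: in a relation of the form R proportional to Z^a * Zdr^(-b), with Zdr in linear units, a dB-scale bias multiplies R by 10^(b * bias / 10). A minimal check, where the exponent b = 3.43 is an assumed value consistent with published R(Z, ZDR) relations rather than a figure taken from this study:

```python
# Sensitivity of a power-law rain rate relation R ~ Z^a * Zdr^(-b)
# to a systematic ZDR bias. With Zdr in linear units and the bias
# expressed in dB, the bias multiplies R by 10 ** (b * bias_db / 10).
# b = 3.43 is an assumed exponent consistent with published
# R(Z, ZDR) relations; it is not quoted from the study itself.
b = 3.43            # ZDR exponent in the rain rate relation (assumed)
bias_db = 0.25      # systematic ZDR bias in dB
error_pct = (10 ** (b * bias_db / 10) - 1) * 100
# error_pct comes out near 22%, matching the figure quoted above
```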
Full Manuscript
Basis For This Study
- Differential reflectivity (ZDR) plays an important role in
rain rate estimations but is known to be vulnerable to radar
calibration errors.
- At constant reflectivity, a 0.25 dB bias in ZDR will yield
a 22% error in rain rate estimates for the rain rate equation
currently implemented in the WSR-88D radar product generator.
- Bragg scattering typically has an inherent ZDR near zero dB
and can be used to estimate systematic ZDR biases.
What This Study Adds
- Comparisons of systematic ZDR biases between the weather method
and Bragg scattering yield similar values.
- An automated method of estimating systematic ZDR biases from
Bragg scatter has been developed for use on operational WSR-88Ds.
Exploring the effectiveness of Integrated
Warning Team Activities
Caleb Johnson — Jackson State University
Mentor: Lans Rothfusz
An Integrated Warning Team (IWT) is an ad hoc team of people involved
in the preparedness and response to high-impact weather events.
The most common members of this team are the NWS, broadcast media
and emergency managers. This study focuses on the effectiveness
of IWT activities. NWS offices are leading many IWT activities
with little communication between offices about what is working
and what isn’t. The goal of this study is to see if IWT workshops
enable more effective IWTs before, during, and after real events.
Semi-structured interviews were conducted with IWT workshop participants
to evaluate the effectiveness of the workshops. IWT participants
from the NWS, broadcast meteorology, emergency management, and
social science were interviewed. The interview was designed to
identify characteristics of effective/ineffective IWT workshops
and also help to develop a set of ideas on how to improve IWT activities.
This study succeeded in identifying ideas for improvements and
also identified a weakness in the relationship between some of
the core groups of an IWT. Future work to further improve operational
IWTs and IWT workshops will be discussed.
Full Manuscript
Basis For This Study
- An Integrated Warning Team (IWT) is an ad hoc team of people
involved in the preparedness and response to high-impact weather
events.
- The most common members of this team are members of the National
Weather Service, broadcast media and emergency managers.
- Little communication exists between NWS offices about the best
practices for IWT activities.
What This Study Adds
- Semi-structured interviews were conducted with IWT workshop
participants to evaluate the effectiveness of the workshops.
- The interview was designed to identify characteristics of effective/ineffective
IWT workshops and also to help develop a set of ideas on how
to improve IWT activities.
- This study identified aspects of IWTs needing improvement and
a weakness in the relationship between core groups of the IWT.
Evaluation of the National Severe
Storms Laboratory Mesoscale Ensemble
Brianna Lund — St. Cloud State University
Mentors: Dr. Dustan Wheatley, Dr. Kent Knopfmeier
Accurate short-term forecasts are critical for forecasters when
anticipating severe weather events, and improving such forecasts
has long been a focus for meteorologists. The recent emergence
of ensemble-based data-assimilation systems has proven to be a
promising step toward the improvement of these vital forecasts.
The National Severe Storms Laboratory Mesoscale Ensemble (NME)
is a 36-member ensemble that provides hourly forecasts and analyses
of a variety of products used for severe weather forecasting, such
as soundings and 2-m temperature fields. This project seeks to
quantitatively evaluate said products through comparison to observations
from a number of sources (surface stations, rawinsondes, etc.),
including the Oklahoma and Texas mesonets.
Full Manuscript
Basis For This Study
- Ensemble-based weather forecasts are increasingly used as guidance
in the prediction of severe storms.
- During the 2013 Spring Forecast Experiment, the NSSL Mesoscale
Ensemble (NME) was run daily in a simulated forecasting environment.
What This Study Adds
- With regard to reproducing realistic mesoscale environments
in which storms were observed to form, the NME performed comparably
to a commonly used operational mesoscale model, the Rapid Refresh
(RAP) model.
- Both modeling systems were characterized by relatively small
errors in their placement of the dryline and the positioning
and strength of storm-induced cold pools, although the NME was
run at much lower computational expense.
Determining the Optimal Sampling Rate
of a Sonic Anemometer Based on the Shannon-Nyquist Sampling Theorem
Andrew Mahre — University of Texas at Austin
Mentor: Gerry Creager
While sonic anemometers have been in use for nearly 50 years, there
is no literature which investigates the optimal sampling rate for
sonic anemometers based on the Shannon-Nyquist Sampling Theorem.
In this experiment, wind is treated as a wavelet, so that sonic anemometer
data with multiple sampling rates can be analyzed using spectral
analysis techniques. From the power spectrum, it is then possible
to determine the minimum frequency at which a sonic anemometer must
sample in order to maximize the amount of information gathered from
the wavelet, while minimizing the amount of data stored. Using data
from the Oklahoma Mesonet and data collected on-site, no obvious
peak that can definitively be considered viable is present in
any of the resulting power spectra. This result suggests a nearly
random power distribution among frequencies, which is better suited
to averaging and integrating data collection processes.
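The spectral analysis described above, computing a power spectrum and searching for a dominant peak, can be sketched with a naive discrete Fourier transform. The 2 Hz sinusoid below is a stand-in signal, not wind data; it simply shows how a peak below the Nyquist frequency (half the sampling rate) is recovered:

```python
import cmath
import math

def power_spectrum(samples, fs):
    """Naive DFT power spectrum: returns (frequency, power) pairs
    for positive frequencies up to the Nyquist frequency fs / 2."""
    n = len(samples)
    spectrum = []
    for k in range(1, n // 2 + 1):
        xk = sum(samples[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                 for m in range(n))
        spectrum.append((k * fs / n, abs(xk) ** 2))
    return spectrum

# Stand-in signal: a 2 Hz sine sampled at 10 Hz for 10 s (100 samples).
fs = 10.0
samples = [math.sin(2 * math.pi * 2.0 * m / fs) for m in range(100)]
spec = power_spectrum(samples, fs)
peak_freq = max(spec, key=lambda p: p[1])[0]  # dominant frequency: 2.0 Hz
```

A flat spectrum from real wind data, as found in the study, means no such dominant peak exists to anchor the choice of sampling rate.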
Full Manuscript
Basis For This Study
- Sonic anemometers use no moving parts, so the temporal
averaging method historically used for non-sonic anemometers
has no scientific basis.
- A fixed, optimal sampling rate would be useful for obtaining
the maximum amount of information possible from the wind’s “signal”.
- A better understanding of the properties of wind would be useful
for modeling of the boundary layer.
What This Study Adds
- Wind data from 4 separate instruments at 10 Hz, 1 Hz, and 1/3
Hz are analyzed using spectral analysis techniques.
- A power spectrum was created, using a Fourier transform, for
each dataset and for decimated versions of each dataset.
- Spikes in power were present in the power spectra created
from the 10 Hz dataset and from decimated versions of the 10
Hz dataset, but these are possibly due to the instrument, and
not the wind itself.
- No spikes in power are present at any frequency in any other
dataset.
Verification of Proxy Storm Reports
Derived From Ensemble Updraft Helicity
Mallory Row — Valparaiso University
Mentors: Dr. James Correia, Jr., Dr. Patrick Marsh
Convection-allowing models (CAMs) are one of the newest improvements
the area of numerical weather prediction (NWP) has seen in the
last 10 years. One of the new diagnostic fields these models output
is updraft helicity (UH), a measure of rotation in modeled storms.
Data collected from the Storm Scale Ensemble of Opportunity (SSEO)
and its individual members in 2012 are used to create proxy storm
reports derived from UH track-like objects. Daily probabilistic
forecasts are created from the reports, allowing for a direct comparison
to the observed reports for that day. 2x2 contingency tables are
constructed daily to gain insight into whether UH provides a skillful
and reliable probabilistic severe weather forecast and to understand
the characteristics of the SSEO and its members. Various verification
metrics are calculated, along with correlation data and probabilistic
outlooks, to provide a fuller understanding. The SSEO is found to
have good skill and reliability throughout the year, with especially
good skill in the springtime (March to June).
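The daily 2x2 contingency tables mentioned above reduce to a few ratios of hits, misses, false alarms, and correct nulls. A minimal sketch with illustrative counts:

```python
def contingency_metrics(hits, false_alarms, misses, correct_nulls):
    """Standard 2x2 contingency-table verification metrics:
    POD = probability of detection, FOH = frequency of hits
    (1 - false alarm ratio), CSI = critical success index."""
    pod = hits / (hits + misses)
    foh = hits / (hits + false_alarms)
    csi = hits / (hits + false_alarms + misses)
    return pod, foh, csi

# Illustrative counts for one day of proxy reports vs. observed reports
pod, foh, csi = contingency_metrics(30, 10, 20, 40)
```

The compensation effect described in the summary bullets shows up here as one group of members raising POD while another raises FOH, with the ensemble mean balancing the two.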
Full Manuscript
Basis For This Study
- Updraft helicity, a measure of rotation in modeled storms,
is a new variable in severe weather forecasting that is produced
from convection-allowing models.
- As an extension of Sobash et al. 2011, ensemble data from the
Storm Scale Ensemble of Opportunity are used to create proxy storm
reports from updraft helicity track maxima.
- Clark et al. 2013 found a strong correlation between modeled
updraft helicity tracks and observed tornado tracks.
What This Study Adds
- The ensemble outperforms any individual members and shows decent
skill throughout the year, especially in the springtime
- Amongst the members, there are two separate groups: one
with higher Probability of Detection (POD) but lower Frequency
of Hits (FOH), and the other with lower POD but higher FOH. Taking
these two groups together in an ensemble mean, there is a compensation
effect that allows for a more skillful ensemble mean.
- Case studies of an outbreak day and a lower-end severe weather
day show this compensation effect happening on a smaller time
scale.
Copyright © 2013 - Board of Regents
of the University of Oklahoma