

Research and Related Accomplishments of Recent NWC REU Participants

Listed first are particularly notable research results, then the accomplishments of all participants. This information is available for the 2007 and later grants. The current grant is listed first, or you may skip down to the same sets of information for the 2011-2015 or 2007-2010 participants.

Last Updated: March 19, 2017


Current Grant, 2016-2020

Listed first are particularly notable research results from REU participants' work. Skip down to accomplishments of all participants in this grant.

Special Research Nuggets

The items here are especially significant outcomes of REU projects.

Corrections to ASOS Location Errors Improve Validation of Radar-Based QPE
Sebastian Harkema's work highlighted the need for accurate metadata locations for Automated Surface Observing System (ASOS) precipitation gauges. The differences in location placed over 78% of the gauges in a different 1-km grid box. Placing the gauge observations at the updated latitude and longitude values produced better correlation with radar-based quantitative precipitation estimates (QPEs) and removed instances where non-zero gauge observations were matched with radar QPEs that detected no precipitation. Location errors found in his study were shown to have impacted the validation of radar-based QPE data sets. It is likely that such errors would also impact local bias corrections of radar-based QPE and the quality control of these gauges. His findings will assist NOAA in improving the quality of ground-based observational data.

Highlights of Student Research Accomplishments

Robert Baines worked with the phased array antenna calibration team toward the development of a fully automated RF scanner to calibrate the polarization characteristics of a dual-polarization phased array weather radar. In this project, Robert was involved in mechanical modeling in Solidworks, control automation using Labview, and antenna and electromagnetic theory. The RF scanner is an instrument that will enable calibration of dual polarization in phased array radars, the expected platform for future radar systems. Robert focused on the design and fabrication of several mechanical parts using Solidworks and 3D printing techniques. He also helped test and integrate several features of the RF scanner and was a valuable contributor to this development.

Christian Boyer worked toward the development of a transmitter and signal processing algorithm that would enable a UAV to calibrate the polarization characteristics of a dual-polarization phased array weather radar. Calibrating dual polarization in phased array radars is an important aspect of risk mitigation in moving toward a nationwide multifunction phased array radar (MPAR) system for weather surveillance and aircraft tracking. The calibration of scan-dependent polarization in phased arrays is a primary goal in achieving the same products provided by traditional dish-based systems. Christian's project focused on the calibration of the radar's receive patterns, the first step in the overall calibration process. The so-called "Twitching Eye of Horus" circuit on the UAV, which Christian helped test and develop, provides a means for transmitting calibrated horizontal (H) and vertical (V) electric fields toward the radar in a controlled manner. Christian worked on a signal processing algorithm for the radar's receiver to extract the polarization characteristics of the phased array from these H and V transmissions.

Austin Coleman analyzed forecasts of a dual-threat event (i.e., both tornadoes and flash floods) using a rapidly updating, convective-scale Warn-on-Forecast ensemble system. She found that a prototype Warn-on-Forecast system can forecast both threats with good accuracy. She also identified that forecasts at 1-km horizontal grid spacing, downscaled from the 3-km prototype system, introduce many spurious cells with embedded spurious mesocyclones.

Dana Gillson examined whether CMIP5 Global Climate Models (GCMs) were able to represent the historical trends, magnitudes, and variability of selected extreme metrics over the South Central United States. She used the CLIMDEX suite of metrics to define extremes, including hot and cold temperatures and heavy precipitation. She separated the region into three climatically consistent sub-domains and calculated GCM biases against four different reanalyses for the model historical period. Part of this research identified some large differences in bias depending on which reanalysis dataset was used as 'ground truth'. Dana also identified the five 'best' and five 'worst' models, based on their mean bias across all analyzed years, seasons, and reanalyses. Projections with these models indicated possible differences in future magnitudes depending on whether a model captured the historical values well, although these results require further investigation before firm conclusions can be drawn.
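
The sketch below illustrates, in a minimal way, the kind of bias scoring described above: a model's historical metric is differenced against each reanalysis and models are ranked by mean absolute bias. The metric values, model names, and reanalysis names are hypothetical placeholders, not the study's data or code.

```python
import numpy as np

def mean_bias(model_metric, reanalysis_metric):
    """Mean difference (model minus reanalysis) over the historical period;
    positive values mean the model overestimates the metric."""
    return float(np.mean(np.asarray(model_metric) - np.asarray(reanalysis_metric)))

# Hypothetical annual values of one CLIMDEX-style metric for one sub-domain.
years = np.arange(1979, 2006)
reanalyses = {
    "reanalysis_A": 20.0 + np.random.randn(years.size),
    "reanalysis_B": 21.5 + np.random.randn(years.size),
}
models = {
    "GCM_1": 22.0 + np.random.randn(years.size),
    "GCM_2": 19.0 + np.random.randn(years.size),
}

# Bias depends on which reanalysis is treated as "ground truth".
scores = {
    m: {r: mean_bias(mv, rv) for r, rv in reanalyses.items()}
    for m, mv in models.items()
}

# Rank models by mean absolute bias across all reanalyses.
ranking = sorted(scores, key=lambda m: np.mean([abs(b) for b in scores[m].values()]))
print(scores)
print("best to worst:", ranking)
```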

Uriel Gutierrez's main research goal was to further develop a hypothesis on the drivers of rapid sea ice loss. He found that oscillations in the change of sea ice extent (1979-2014) at synoptic time scales were statistically significant with respect to red noise. Synoptic-time-scale reductions in sea ice extent occur most frequently in July and in December. A composite of the top 1% of abrupt sea ice extent loss events revealed strong winds over the loss area. These conditions always occurred with a nearby surface cyclone; enhancement from anticyclones could sometimes also occur.

Sebastian Harkema found that the latitude and longitude metadata of the Automated Surface Observing System (ASOS) gauges were not accurate, and the differences in location placed over 78% of the gauges in a different 1-km grid box. Placing the gauge observations at the updated latitude and longitude values produced better correlation with radar-based quantitative precipitation estimates (QPEs) and removed instances where non-zero gauge observations were matched with radar QPEs that detected no precipitation.
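
As a minimal sketch of why small metadata errors matter here, the snippet below maps a gauge's latitude/longitude to a ~1-km analysis grid box; a position error of only a few hundredths of a degree can move the gauge into a neighboring box, so it would be compared against the wrong radar QPE pixel. The grid origin, spacing, and coordinates are hypothetical, not taken from the study.

```python
def grid_box(lat, lon, lat0=20.0, lon0=-130.0, dlat=0.009, dlon=0.011):
    """Return (row, col) of the ~1-km grid box containing the point.
    dlat/dlon approximate 1 km in degrees at mid-latitudes."""
    row = int((lat - lat0) / dlat)
    col = int((lon - lon0) / dlon)
    return row, col

old_box = grid_box(35.2312, -97.4621)   # metadata location (hypothetical)
new_box = grid_box(35.2401, -97.4498)   # corrected location (hypothetical)
print(old_box, new_box, "moved boxes:", old_box != new_box)
```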

Briana Lynch performed exploratory data analysis of trace gas measurements in Paris. She learned that some gases are correlated with CO2 and others are anti-correlated, and that this relationship shows up in the measurements that were analyzed. She also verified the anti-correlation between the planetary boundary layer depth and the daily time series of trace gas measurements.

Russell Manser successfully completed multiple simulations with WRF-Chem and analyzed both the meteorology and chemistry of each run. He found that WRF-Chem can reasonably reproduce the physical characteristics of observed convection, but shows some questionable results with respect to the transport of trace gases. His research produced more meaningful questions than answers, but the process was fruitful for both student and mentor.

Kristina Mazur met with Tribal Emergency Managers (EMs) from three Nations in Oklahoma (Citizen Potawatomi, Chickasaw, and Choctaw) to identify weather and climate-related hazards of particular concern. These meetings established the research focus on extreme precipitation. Kristina analyzed data from 15 downscaled global climate models, and two emissions scenarios, to project changes in heavy precipitation at various thresholds. Her results indicated that the frequency of very heavy precipitation is anticipated to increase in all three Nations. This information was presented to the Tribal EMs, and the research may be expanded in the future to assist Tribal adaptation plans.

Karen Montes Berríos's biggest accomplishment may have been how much she learned about meteorology, but she also gave us some sense of what the impact of a violent tornado might be in Norman, OK. Hers was a cursory look at the potential impact of such a storm, and her research sets the stage for further development by another student during the subsequent school year.

Joseph Nardi performed verification of machine-learning-based hail forecasts produced during the 2016 Hazardous Weather Testbed Spring Forecast Experiment. During his work, he found that the machine-learning-based forecasts performed as skillfully or more skillfully than other forecast methods in use, including the Thompson hail method and the HAILCAST algorithm. These results will be valuable in the continued development of machine learning tools for hail prediction.

Jamin Rader analyzed two years of field comparison data from Oklahoma for three anemometers (two with anti-icing properties) to determine the usefulness of two forms of anti-icing technology. One anemometer was designed with anti-icing construction, one was coated with an over-the-counter super-hydrophobic spray, and the third was a standard anemometer. Mr. Rader analyzed both the time delays in when each anemometer became coated in ice (i.e., wind speeds dropping to zero) and when the anemometers returned to normal operation once the ice melted. He found no real discernible difference in how they handled icing events: two of the tested anemometers claimed anti-icing properties that, in reality, did not hold up (advertising vs. reality). The results of this work helped the Oklahoma Mesonet determine whether either of these two anti-icing approaches would be useful for monitoring wind speeds and directions in a region prone to ice accumulations in the winter.


Previous Grant, 2011-2015

Special Research Nuggets

The items here are especially significant outcomes of REU projects.

2015 Participants:

Discovery of Polarity Misclassification in ENTLN Data
The results of James Coy, Jr.'s study were shared with both Vaisala and Earth Networks. Partly in response to James's discovery of the polarity misclassification within the ENTLN data, Earth Networks modified the algorithm that identifies the polarity of CG flashes. James completed a reanalysis of the reprocessed data on his own time during the fall semester and confirmed that the algorithm updates now produce accurate polarity and location estimates of CG lightning.

Insensitivity of Supercell Forecasts to Initial Conditions
Elisa Murillo found a relative insensitivity of supercell forecasts to initial conditions, which indicates that smaller-scale processes within organized convection are primarily governed by larger scales, and that real-time supercell forecasts over the next decade will not be strongly degraded by our limited ability to analyze very fine intra-storm scales. This motivates development of data assimilation and prediction systems that prioritize forecast over analysis grid resolution, and suggests that improvements to model physics/numerics and observations of the storm environment will increase forecast skill more than will increasing the density of intra-storm observations.

2011 Participants:

Rapid Identification of Severe Surface-Level Winds
REU student Adam Taylor quantified the impact of having low-level, dual-Doppler radar data available for detection of severe surface winds.  Mr. Taylor found that operational forecasters would be able to identify areas of severe winds much faster, and with much greater accuracy, if overlapping low-level radar coverage (e.g., CASA data) was widespread.  Furthermore, Adam found that even a simple tool that corrected radar-derived wind speeds for height above ground (applying a wind profile correction) could aid forecasters in estimating surface wind speeds.
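
One common form of such a height correction is the neutral logarithmic wind profile, which scales a wind measured at radar-beam height down to the standard 10-m level. Whether Mr. Taylor's tool used exactly this form is not stated above; the snippet below is only a sketch, and the roughness length and sample values are assumptions.

```python
import math

def adjust_to_10m(speed_at_beam, beam_height_m, z0=0.05, z_ref=10.0):
    """Neutral log-profile correction from beam height to z_ref (meters).
    z0 is an assumed aerodynamic roughness length for open terrain."""
    return speed_at_beam * math.log(z_ref / z0) / math.log(beam_height_m / z0)

# A 35 m/s wind measured 300 m above ground maps to a weaker 10-m wind estimate.
print(f"{adjust_to_10m(35.0, 300.0):.1f} m/s")
```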

Anticipating Tornado Casualties for Emergency Planning
Amber Cannon used a GIS analysis to compare rates of incidence of fatality, by population density, for the Alabama portion of the 27 April 2011 and 3–4 April 1974 tornado outbreaks. If replicated and done over many geographic areas, this research could be combined with the distribution (number, intensity, size) of tornadoes expected with convective outlooks to help FEMA anticipate rates of incidence of fatality/injury hours to days ahead of time, enabling them to leverage resources for immediate response.

Assimilation of AQUA Data Improves Track Forecast of Hurricane Danielle
Although results are preliminary, Travis Elless's research highlighted the importance of studying the impact of the data and data assimilation methods on tropical cyclone forecasts. His work will be continued in Dr. Xuguang Wang's research group.

Simplifying Microphysics Parameterization to Achieve Better Forecasts of Convection
Diversity in the physical parameterizations used in forecast ensembles is already known to provide robust variance amongst the ensemble members in mesoscale forecasts (resolution of 10 to 30 km).  Sam Lillo's research took a first systematic look at some possible means to achieve physics diversity within a single advanced microphysics parameterization for convection-resolving forecasts (resolution ~1 km).  The work has implications for the Warn-on-Forecast initiative, which aims to assimilate radar data to provide short-term forecasts of severe weather.  The density of radar data can drastically reduce ensemble spread, and this research considered sensitivities that affected warm-rain physics, precipitation efficiency, and large ice hydrometeor characteristics that may help maintain storm-scale ensemble spread.

Rapid-Scan Dual-Polarimetric Radar
Alex Lyakhov used RaXpol, a state-of-the-art dual-polarimetric mobile radar, to scan a supercell and weak tornado. His research documented rapid changes in tornado and mesocyclone evolution during tornadogenesis and tornado dissipation and their relationship to polarimetric supercell signatures.

 

Highlights of Student Research Accomplishments

2015 Participants:

Proper designation of the rain/snow line in complex terrain is of pivotal importance for water resource management. Current operational methods, however, use only very simple temperature thresholds to delineate this zone. REU student Massey Bartolini explored the option of designating a rain/snow transition zone, where both forms of precipitation exist, using a spectral bin microphysical model that he specifically tuned to output the liquid water fraction of falling hydrometeors. His work underscores the complexity of the problem and shows that simple "rules of thumb" are not likely to work in most situations. He was also able to demonstrate that one can effectively deduce the rain/snow transition zone using his model.

Tomer Burg assessed the skill of updating precipitation-type diagnostics for the Rapid Refresh (RAP) model with crowd-sourced mPING reports of precipitation type. He found that his statistical analysis improved the bias slightly for ice pellets, snow, and freezing rain, while the bias degraded for rain. The upgrade to the RAP generated more realistic spatial distributions of precipitation-type transition zones that were statistically significant, though they may not be practically significant.

Matthew Campbell studied damaging-wind-producing quasi-linear convection. He studied Mesoscale Convective System (MCS) evolution to categorize the systems and code them for quantitative analysis. He found that MCS organization and structure can be related to MCS motion, with the best-organized MCSs consisting of a well-defined mesoscale convective vortex and transition zone, where the fastest motion and propagation led to a convective line oriented relatively perpendicular to the mean wind. The relationship between MCS structure and motion could be used, in addition to damaging wind reports, to classify MCSs. Matthew's REU work is being submitted for publication in Weather and Forecasting. Many of his findings, relating various degrees of MCS structure to the degree of propagation and to the decomposition of the propagation vector, shed light on the role these structural features play in influencing system longevity and motion across MCS types.

James Coy, Jr. compared detection of cloud-to-ground (CG) lightning flashes as measured by both the National Lightning Detection Network (NLDN) and the Earth Networks Total Lightning Network (ENTLN) with three-dimensional lightning mapping observations from the Oklahoma Lightning Mapping Array and storm chaser video of the 31 May 2013 El Reno tornadic supercell. Initial results from the NLDN and ENTLN indicated a negative CG dominance, but, after a 15 kA peak current filter was applied, the NLDN indicated primarily positive CG polarity flashes while the ENTLN still indicated primarily negative CG polarity. The average distance between the two networks' locations for the same flash was more than 2 km and improved to approximately 1 km after the 15 kA filter was applied.
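
The snippet below is a minimal sketch of the kind of comparison described above: apply a peak-current threshold to matched CG flashes and compute the mean great-circle separation between the two networks' reported locations. The flash coordinates and peak currents are hypothetical, not the study's data.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Each matched flash: (network-1 lat, lon, network-2 lat, lon, peak current in kA).
matched = [
    (35.53, -98.02, 35.55, -98.00, -32.0),
    (35.50, -98.05, 35.52, -98.07, +9.0),
    (35.47, -97.98, 35.46, -97.99, +41.0),
]

def mean_separation(pairs):
    dists = [haversine_km(a, b, c, d) for a, b, c, d, _ in pairs]
    return sum(dists) / len(dists)

strong = [p for p in matched if abs(p[4]) >= 15.0]   # 15 kA peak-current filter
print("all flashes:", mean_separation(matched), "km")
print(">=15 kA only:", mean_separation(strong), "km")
```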

Rashida Francis examined two of six cases that forecasters had worked during the experiment. Forecasters were randomly assigned to receive radar data only; radar plus total lightning; or radar, total lightning, and Earth Networks' Lightning Alerts. She found in one of the two cases that having lightning data increased forecasters' confidence for good warning decisions, but in the second case the electrically active storms were not severe. The presence of lightning data appeared to make forecasters more likely to warn, leading to false alarms. Lightning data had a mixed influence on their confidence during the second case, because the lightning data remained active but storm reports were not received.

Amber Liggett analyzed a number of gust front cases to confirm their polarimetric radar signatures and the cause of those signatures (i.e., insects). She successfully executed the neuro-fuzzy gust front detection algorithm on these cases and evaluated its performance.

Elisa Murillo studied the sensitivity of supercell simulations to the resolution of initial conditions in a convection-allowing ensemble modeling system. She found that vorticity was the most sensitive variable, while other variables (updraft strength, surface winds, and rainfall) showed little sensitivity to the initial-condition resolution after the first 10-20 minutes. Scales missing from supercell initial conditions are rapidly generated as the forecast proceeds and, in most respects, do not unduly degrade the forecast. See also the Significant Results section of this report.

Natalie Ruiz-Castillo's study projected future changes in growing degree days for winter wheat; southwest Oklahoma is one of the most productive regions growing this highly consumed crop. She used new statistically downscaled outputs and focused on a subset of the Red River Basin. Her results show that by the end of the 2098 growing season, the change in growing degree days (GDD) is expected to be between -2.0 and 6. Depending on the global climate model (GCM) used, southwest Oklahoma is expected to see an increase in future GDD under the CCSM4 GCM, and a mix of increases, no change, and decreases under the MIROC5 GCM.

Ryann Wakefield explored in more detail the possible link between soil moisture and convection, extending that work to explore whether there was a link to tornado activity on either regional or local scales. She found varying relationships between 6-month antecedent soil moisture averages and tornado activity in the five regions east of the Rocky Mountains that she studied, and that the correlations differed at different times of year. This provides motivation to further study the physical mechanisms behind such relationships. Further, she compared the Climate Prediction Center's (CPC) modeled soil moisture to values from the Oklahoma Mesonet to demonstrate that the CPC dataset reflected reality well. Ryann presented her work at the 22nd Conference on Applied Climatology at the 2016 Annual Meeting of the American Meteorological Society, where she won a student presentation award.

2014 Participants:

Nadajalah Bennett conducted a door-to-door survey of homeowners throughout the cities of Moore and Oklahoma City during the month of June 2013 to learn whether and how homeowners had incorporated mitigation techniques into their rebuilding and emergency preparedness decisions. She found that most homeowners had either considered or were considering installing a storm shelter inside their home to help them feel safer. Cost was the main reason for not implementing mitigation strategies. Many homeowners were unaware of other techniques they could use to prevent tornado and wind damage.

Robert Conrick studied the effects of changing the boundary- and surface-layer parameterization schemes on forecasts of a lake-effect snow event. The forecasts were quite sensitive to the choice of scheme, with differences in the six-hour liquid-equivalent accumulated precipitation on the order of 20 mm. The root cause of these differences is the manner in which the heat and moisture fluxes off of the lake are computed in the different surface-layer schemes. When the schemes are modified to use the same set of equations, the resulting forecasts are in very close agreement.

Rebecca DiLuzio calculated and compared verification statistics for Earth Networks' Dangerous Thunderstorm Alerts (DTAs), including probability of detection, false alarm ratio, and lead times of the DTAs relative to the National Weather Service's severe thunderstorm and tornado warnings.
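
For readers unfamiliar with these scores, the sketch below shows the standard 2x2-contingency definitions of probability of detection and false alarm ratio, plus a mean lead time. The counts and lead times are hypothetical placeholders, not results from this project.

```python
def pod(hits, misses):
    """Probability of detection: fraction of observed events that were alerted."""
    return hits / (hits + misses)

def far(hits, false_alarms):
    """False alarm ratio: fraction of alerts not matched by an observed event."""
    return false_alarms / (hits + false_alarms)

def mean_lead_time(lead_minutes):
    """Average time between alert issuance and the matched event, in minutes."""
    return sum(lead_minutes) / len(lead_minutes)

hits, misses, false_alarms = 42, 11, 18          # hypothetical counts
leads = [12.0, 25.5, 8.0, 31.0]                  # hypothetical lead times (minutes)
print(f"POD={pod(hits, misses):.2f}  FAR={far(hits, false_alarms):.2f}  "
      f"mean lead={mean_lead_time(leads):.1f} min")
```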

The goal of Kwanshae Flenory's project was to examine the ocean and climate drivers related to extreme summer heat in SE Australia. She related January temperature data for selected sites in SE Australia to drivers such as atmospheric blocking, teleconnections, and concurrent and lagged sea-surface temperatures (SSTs). Optimization of the relationships by artificial intelligence techniques revealed a nearly equal contribution from nearby ocean SSTs and the atmosphere, with close to 50 percent predictability on independent data, substantially more accurate than the methods that had been used previously.

Montgomery Flora analyzed differences between idealized supercell simulations using different horizontal grid spacings. The analyses focused on model output most relevant to convective hazard forecasting, including low-level vorticity, surface winds, and rainfall. The results of the work will help guide the design of convective-scale ensemble forecasting systems, including the real-time systems envisioned by the Warn-on-Forecast paradigm.

Shawn Handler created a map that allows forecasters to determine how likely it is that a tornado at their location would be accompanied by a debris signature. While the tornado debris signature has received a lot of attention, the height of the radar beam, the strength of the tornado, and the type of ground cover preclude tornadoes from being accompanied by a debris signature in most locations in the United States. This research will help set weather forecasters' expectations for where they can expect to see such signatures.

Joshua Gebauer studied the feasibility of using atmospheric soundings as an indication of when Bragg scatter occurs by comparing ~11,500 radar/sounding pairs obtained from 66 WSR-88Ds spread over the climatic regions defined by the National Climatic Data Center (contiguous United States) for the six-month period January to June 2014. Of 464 radar cases identified by the algorithm as having Bragg scatter, ~85 percent were corroborated by sounding data. Conversely, sounding data indicated the potential for Bragg scatter far more often than it was actually observed, but a majority of the time the radar was operating in a mode for which the Bragg scatter algorithm could not be applied. Regionally, Bragg scatter identified by radar was confirmed more often by refractivity gradient for the eastern through south central US and western climatic zones, whereas the gradient Richardson number more often confirmed Bragg scatter in the mountainous west and north central U.S.

Nathan Kelly studied data from the Oklahoma Atmospheric Surface-layer Instrumentation System (OASIS) project, which involved placing instrumentation focused on observing the surface energy budget at 89 Oklahoma Mesonet stations beginning in 1999. At any given time, 10 stations (designated "super sites") were outfitted with additional instrumentation, including a four-component net radiometer capable of observing incoming and outgoing shortwave (solar) and longwave radiation. Data are available from the beginning of 2000 until October 2008. These data were filtered to remove observations non-representative of the day's albedo (e.g., sunrise and sunset periods, cloudy days, and erroneous instrument readings), and monthly averages were computed for each of the super sites in order to develop a better understanding of the spatial and temporal variability of albedo in Oklahoma.
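
The calculation implied above is the ratio of outgoing to incoming shortwave radiation, averaged after screening out low-sun and erroneous samples. The sketch below is only illustrative; the filtering thresholds and sample fluxes are assumptions, not Nathan's actual criteria or data.

```python
import numpy as np

def daily_albedo(sw_in, sw_out, min_sw_in=100.0):
    """Mean albedo from shortwave fluxes (W m^-2), ignoring low-sun and
    erroneous samples (sw_in below threshold, or sw_out outside [0, sw_in])."""
    sw_in, sw_out = np.asarray(sw_in, float), np.asarray(sw_out, float)
    ok = (sw_in >= min_sw_in) & (sw_out >= 0) & (sw_out <= sw_in)
    if not ok.any():
        return np.nan
    return float(np.mean(sw_out[ok] / sw_in[ok]))

# Hypothetical one-day example; monthly means would average such daily values.
sw_in = [5, 80, 450, 720, 810, 640, 300, 60]
sw_out = [1, 15, 95, 150, 170, 130, 62, 12]
print(f"daily mean albedo: {daily_albedo(sw_in, sw_out):.2f}")
```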

Thong Phan studied the use of polarimetric radar data to identify radar echoes caused by electronic interference. His findings are a starting point for the project, which will help make the current quality control algorithm more efficient for operational use.

Julia Ross analyzed portions of a survey of shelter-seeking actions taken by Oklahomans during the three tornadic events that took place in May of 2013 in central Oklahoma. She focused her efforts on summary and conditional statistics, which revealed that more people took action during the third event than during the other two. Those who indicated they drove somewhere during one or more events did so because they felt the buildings they were in were unsafe and because the storms during the third event seemed more dangerous.

Lori Wachowicz analyzed data her mentors had generated from a month-long Antarctic reanalysis using an ensemble Kalman filter (EnKF) data assimilation method with the Antarctic Mesoscale Prediction System (AMPS) model. AMPS is the only operational model in the Antarctic and is maintained by the National Center for Atmospheric Research (NCAR). The uncertainty in atmospheric state estimates is high over the Antarctic because there are relatively few observations to constrain numerical models. Lori's analysis revealed that the EnKF method creates an atmospheric state estimate comparable to that of AMPS alone, but with far fewer observations, by utilizing information about the background atmospheric flow as a function of time. Furthermore, she found that the model overestimates stratospheric ozone concentrations, leading to a large bias after polar sunrise when the ozone hole develops.

Grant Williams used a modified genetic algorithm to determine how to place wind turbines such that the choppy, turbulent wake from one turbine has a minimal effect on nearby turbines. The algorithm he built was able to use the computational power of parallel processing and multiple processors to produce results much faster than running the algorithm sequentially on a single processor.
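
The sketch below shows the general pattern described here, not Grant's code: a genetic-algorithm loop in which the expensive fitness evaluations are distributed across multiple processes. The toy fitness function simply rewards spread-out layouts as a stand-in for a real wake-loss model; grid size, population size, and mutation scheme are arbitrary assumptions.

```python
import random
from multiprocessing import Pool

GRID, N_TURBINES = 20, 8

def fitness(layout):
    """Higher is better: reward spread-out layouts (proxy for low wake losses)."""
    return sum(abs(a[0] - b[0]) + abs(a[1] - b[1])
               for i, a in enumerate(layout) for b in layout[i + 1:])

def random_layout():
    return random.sample([(x, y) for x in range(GRID) for y in range(GRID)], N_TURBINES)

def mutate(layout):
    child = list(layout)
    child[random.randrange(N_TURBINES)] = (random.randrange(GRID), random.randrange(GRID))
    return child

if __name__ == "__main__":
    population = [random_layout() for _ in range(40)]
    with Pool() as pool:                       # evaluate fitness in parallel
        for _ in range(50):
            scores = pool.map(fitness, population)
            ranked = [p for _, p in sorted(zip(scores, population), reverse=True)]
            elite = ranked[:10]                # keep the best layouts
            population = elite + [mutate(random.choice(elite)) for _ in range(30)]
    print("best layout:", max(population, key=fitness))
```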

2013 Participants:

Deanna Apps used quality-controlled data from the citizen science smartphone app called mPING to examine how well the RAP, NAM, and GFS weather prediction models forecasted rare versus more common precipitation types. For two events in February 2013, in which snow, freezing rain, ice pellets, and rain all occurred, she found that the three numerical prediction models forecasted rain and snow significantly better than freezing rain or ice pellets.

Samantha Berkseth looked at how dual polarization variables can be used to improve quality control algorithms that filter non-meteorological targets. Her work contributed to a journal paper (currently under review) and has helped improve the quality control of radar reflectivity data, something that underpins most automated uses of radar data such as precipitation estimation and hail diagnosis.

Levi Cowen examined winter precipitation and 500 hPa geopotential height as predictors of spring tornado activity in Oklahoma. Although his findings showed no significant difference in Oklahoma tornado activity following wet versus dry winters, Levi found that persistent midlevel troughing over the northwestern U.S. and southwestern Canada in the wintertime enhanced tornado activity in Oklahoma the following spring, while ridging suppressed it. Levi's findings have been and will be shared with the WFO Norman staff to better prepare for the springtime tornado season in Oklahoma. Levi has been encouraged to formally publish his results.

Joshua Crittenden studied the use of proxies for severe weather in Climate Forecast System Version 2 forecasts to assess their utility in aiding Storm Prediction Center (SPC) forecasters with creating Day 4–8 outlooks. Statistics were calculated for January through June of 2013. SPC Outlooks and Filtered Storm Reports were used to assess forecast quality for May and June 2013. The cases studied indicated some consistencies in a severe weather proxy that may help SPC forecasters provide more specific severe weather information in Day 4–5 forecasts.

Kody Gast conducted a survey of visitors to the National Weather Center to evaluate what people knew about tornado damage mitigation and whether they had taken any actions to mitigate tornado damage to their homes. Kody's study may have been the first such study conducted to better understand how tornado mitigation is understood and perceived by the public. Overall, survey respondents were unfamiliar with terminology typically used in mitigation, and few had applied any of the recommended mitigation measures. The REU mentors expect to publish this work in a refereed AMS journal. More importantly, this work will help guide the mitigation community in shaping its public engagement to promote the adoption of these measures more broadly.

Nicole Hoban studied the feasibility of using Bragg scatter to estimate systematic differential reflectivity (ZDR) biases on operational WSR-88D radars. Current methods are too imprecise because they are subject to contamination from large drops. Nicole examined six radars in detail for May and June 2013 from 1400–2200 UTC each day, comparing systematic ZDR bias estimates from Bragg scatter to the currently used scanning weather method. The Bragg scatter estimates were found to be comparable and may offer an alternative method for monitoring systematic ZDR biases.

Caleb Johnson interviewed nine National Weather Service meteorologists, three emergency managers, and two broadcast meteorologists about their experiences with Integrated Warning Team (IWT) activities. His main, preliminary result validated one of the contentions held by Integrated Warning Team proponents: that relationships between emergency managers and broadcast meteorologists tend to be weak. He identified four areas in which future IWTs might consider focusing their efforts to improve their chance of success. While the results appeared somewhat obvious to those who have participated in IWTs, his work now documents these insights for others.

Brianna Lund looked at the use of the National Severe Storms Laboratory's Mesoscale Ensemble (NME) to aid in short-term forecasting of severe weather events. She found that the NME performed comparably to the Rapid Refresh (RAP) model in producing realistic mesoscale environments. Both modeling systems were characterized by relatively small errors in their placement of the dryline and in the positioning and strength of storm-induced cold pools. The NME is also computationally less expensive.

Andrew Mahre looked at whether a set, optimal sampling rate would be useful for obtaining the maximum amount of information possible about the properties of the wind as measured by a sonic anemometer. He analyzed wind data from four separate instruments at 10 Hz, 1 Hz, and 1/3 Hz using spectral analysis techniques and created a power spectrum for each dataset using a Fourier transform. Spikes in power were present in the power spectrum created from the 10 Hz dataset and decimated versions of the 10 Hz dataset, but they may have come from the instrument rather than the wind. No spikes in power were present at any frequency in any other dataset.
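
Below is a minimal sketch of the spectral-analysis step described here: compute an FFT power spectrum of an anemometer wind record, then repeat on a decimated copy to mimic a lower sampling rate. The synthetic wind series and record length are assumptions for illustration, not Andrew's data or code.

```python
import numpy as np

def power_spectrum(series, sample_hz):
    """One-sided FFT power spectrum; returns (frequencies in Hz, power)."""
    series = np.asarray(series, float) - np.mean(series)   # remove the mean wind
    power = np.abs(np.fft.rfft(series)) ** 2
    freqs = np.fft.rfftfreq(series.size, d=1.0 / sample_hz)
    return freqs, power

fs = 10.0                                   # 10 Hz sonic anemometer
t = np.arange(0, 600, 1.0 / fs)             # ten minutes of samples
wind = 5.0 + np.random.randn(t.size)        # synthetic turbulent wind (m/s)

f10, p10 = power_spectrum(wind, fs)         # full-rate spectrum
f1, p1 = power_spectrum(wind[::10], 1.0)    # decimated to 1 Hz
print(f10[np.argmax(p10[1:]) + 1], f1[np.argmax(p1[1:]) + 1])  # dominant frequencies
```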

Mallory Row examined forecasts from a convection allowing ensemble in an effort to understand what role the individual members play in producing good forecasts. The skill of each member was examined through the early spring to early summer period. She found that some members perform well but overforecast, while others perform somewhat poorly but underforecast, thus the ensemble mean performs well. Understanding why/how the ensemble works — better on the most severe days, somewhat worse on the less severe days — can give forecasters a way to maximize use of the ensemble when it performs well.

2012 Participants:

Daniel Diaz examined the Global Forecast System (GFS) model forecast skill during the 2011–2012 cold season in the Northern Hemisphere, with the hypothesis that relatively large model error is primarily associated with baroclinic Rossby wave packets and the onset of atmospheric blocking events. He found that a quasi-stationary blocking ridge developed over parts of Europe subsequent to a series of globally propagating Rossby wave events. While the forecast skill for this blocking ridge was high once it was established, there was relatively little skill leading up to its onset.

Veronica Fall investigated the microphysical processes in a winter storm using a combination of ground and satellite weather radars. She quantified the multi-frequency scattering characteristics for hydrometeors existing in the cloud and precipitation. She found that the vertical profile of radar reflectivity varied due to different physical processes; identification of the melting layer was important to retrieve microphysical properties in cloud and precipitation. Her findings are critical to the algorithm development of quantitative precipitation estimation and may help the identification and retrieval of snowfall in cold-season storms.

Hannah Huelsing examined the spatial and temporal distribution of the Asian pre-monsoon and monsoon seasons. She used satellite remote sensing estimates from the Tropical Rainfall Measuring Mission to compare the rain rates from 2010, when flooding was intense, with those from 2005–2009. The temporal shift between the pre-monsoon and monsoon seasons was enhanced in 2010, showing the transition from the deep convection associated with severe storms to the strong, wide convection associated with Mesoscale Convective Systems.

Nathan Korfe studied how altering the boundary layer parameterization scheme affected low-level wind speeds in a blizzard event. His preliminary results showed a strong dependence on the choice of scheme, with some schemes strongly underestimating the actual wind speed. This, in turn, affected prediction of white-out conditions that sometimes accompany comma-head storms.

Jon Labriola investigated the relationship between multi-radar multi-sensor parameters and tornado intensity. For 11 tornado outbreaks that occurred between 2008 and 2011, he found that neither the maximum azimuthal shear value along a tornado path nor the total number of people impacted the final tornado rating.

Brittany Recker compared Storm Prediction Center (SPC) convective outlooks (forecasts) to one estimate of real convective activity, the radar-derived probability of severe weather. She assembled a data set of 108 case days during the spring and summer of 2012 (March – July) to compare the SPC forecasts and radar-estimated severe weather. She found that about 12% of the area within SPC's Slight Risk or greater outlooks was covered by non-zero, radar-estimated probability of severe hail.

Astryd Rodriguez's research project explored forecasters' perceptions regarding uncertainty in weather forecasting. She found that forecasters lacked a conceptual model of uncertainty and defined uncertainty in two ways: an unknown next outcome versus multiple possible outcomes. They expressed uncertainty primarily using hedging terms (e.g., possible, may occur). In general, they preferred to express uncertainty in words rather than graphics, which were deemed overly confident.

Rebecca Steeves compared the relative performance of several mesoscale analysis systems with applications to severe weather forecasting, by exploring the ability of each to reproduce soundings collected during the Verification of the Origins of Rotation in Tornadoes Experiment 2 (VORTEX2). She found that model soundings derived from the ensemble-based products (i.e., those utilizing groups of forecasts) generally produced smaller errors than those systems that utilize single, deterministic forecasts.

Phillip Ware evaluated a total lightning algorithm currently under development for implementation in operations using high resolution storm reports. For the eight cases examined, he found that using the high resolution storm reports modified verification statistics from those that had only used reports available from NOAA's Storm Data database. His results showed a decrease in the probability of detection and reduced lead time; however, he also showed a reduction in the false alarm rate.

Hope Weldon researched a variety of sources to fill in the approximately 10% of missing information in the database of tornado fatalities. She then did some preliminary assessments of risk for different demographic groups. The four groups at the greatest risk were the elderly, males, people living in the southeast US, and people inside mobile homes.

2011 Participants:

Rebekah Banas considered operational forecasts of heavy precipitation events along the Sierra Nevada mountains of California.  She found that the precipitation amounts are consistently underpredicted and that forecast quality decreases as elevation increases.

Eric Beamesderfer compared storm motion estimates to observed storm motions for different storm modes and environments.  Eric found that storm motion estimates were fairly inaccurate, with deviations up to 20 m s-1 very common.  He also found that storm-relative helicity influenced storm motion the most.  However, since SRH is heavily tied to storm mode, it was hypothesized that storm mode might be the most important predictor of motion.

Amber Cannon successfully completed a GIS analysis that helped her to compare rates of incidence of fatality, by population density, for two tornado outbreaks affecting Alabama.  She successfully manipulated datasets from several different sources, and included raster, vector, and photographic information in her analysis.

Tracey Dorian examined daily MODIS imagery to estimate whether our cloud detection algorithm was producing accurate estimates of mean cloudiness over different regions of the globe.  She determined that our original procedure significantly underestimated stratus estimates, and recommended new threshold values.

Travis Elless successfully ran the operational global data assimilation and forecast system to study the impact of AQUA data on the track forecast of Hurricane Danielle (2010). He found that the assimilation of AQUA data improved the track forecast of Hurricane Danielle.

Sam Lillo took the first systematic look at supercell storm forecast variability resulting from sensitivity to parameters within a multimoment bulk microphysics scheme.  The goal was to identify parameters or parameterizations that could be diversified to provide smooth (as opposed to multi-modal) spread in forecast ensembles.  Focus was placed on large-ice (graupel and hail) characteristics, for example fall speed and rime density, and on the warm-rain physics response to cloud condensation nuclei.

Alex Lyakhov used RaXpol, a state-of-the-art dual-polarimetric mobile radar, to investigate a supercell and weak tornado.  His research documented rapid changes in tornado and mesocyclone evolution during tornadogenesis and tornado dissipation and their relationship to polarimetric supercell signatures.

Using high-resolution SHAVE hail reports, Sarah Mustered investigated radar and environmental parameters to determine hail size at the surface.  Sarah found that more novel matching techniques will be necessary, given that the high-resolution reports might match different parameters to a wide distribution of hail sizes.  She also found that while combinations of radar and environmental parameters did not stratify hail size well, certain parameter spaces were more favorable to large hail production.

Highlights of Adam Taylor's work include a comparison of Oklahoma Mesonet surface wind speeds with radar-derived estimates from WSR-88D and CASA.  Adam tested the impact of dual-Doppler and wind profile corrections to radar-derived estimates when comparing against surface wind observations. 

 


Previous Grant, 2007-2010

Listed first are particularly notable research results from REU participants' work. Skip down to accomplishments of all participants in the 2007-2010 grant.

Special Research Nuggets

The items here are especially significant outcomes of REU projects.

2010 Participants:

Deficiencies in boundary layer parameterizations may hurt model forecasts of shallow cold air.
William Leatham investigated eleven model forecasts for events that included arctic air arriving in the southern Plains in advance of winter storms.  The average error for the position of the freezing line was quite large and increased with time (up to 135 km at 24 hours).  A preliminary investigation revealed that radiative schemes may be a strong contributing source of the error.  Precipitation was often well forecast in space and time, and model soundings saturated and cooled toward the wet-bulb temperature.  But the models allowed too much heating of the boundary layer, so that the sounding was too warm prior to the onset of precipitation.  In at least one instance, the GFS and NAM models forecast the freezing line to retreat more than 30 km northward during the peak of daytime heating, while the observed freezing line progressed an equal amount in the opposite direction (southward).

Defining Spatial Vulnerability From Tornadoes Based on Fujita Scale
Eric Hout's research defined the idea of spatial vulnerability of counties based on the standardized tornado fatalities for individual counties over time. Previous studies on tornado vulnerability have provided insight on how individual factors influence overall social and spatial vulnerability. However, few studies have been conducted to evaluate the aggregated effect on vulnerability when these factors coincide. Additionally, a definition of vulnerability has been absent from the meteorological literature. Thus, to provide a more comprehensive view of vulnerability, this study proposes a mathematical definition for spatial vulnerability, and then uses tornado casualty data from 1950 through 2009 to calculate vulnerability on a county level for seven time periods. Hout analyzed trends of spatial vulnerability for each county and interpreted the spatial patterns among counties with increasing or decreasing trends of spatial vulnerability. Some patterns could be attributed to regional and others to local effects, which suggest regional and local influences on social responses to tornadoes of different damage (Fujita) scales.

Incorporating Societal Impacts into Development of Warn-on-Forecast
The National Severe Storms Laboratory has begun research to move towards a Warn-on-Forecast (WoF) system. WoF will include probabilistic information from ensemble model forecasts and forecaster input with much greater lead time than today's warnings. Sarah Stalker's research was the first to address some of the societal impacts of moving to a WoF system. She interviewed six individuals in the Norman, OK region who were affected by the 10 May 2010 tornado outbreak. Subjects noted that seeing a projected path of the storm, similar to that provided by graphical hurricane outlooks, was more important to them than the additional lead time. Further research will build upon these results and help tune WoF products throughout their development.

2009 Participants:

Advanced modeling techniques help forecasters stay in tune with snow band prediction.
Banded snow is one of the greatest winter weather forecasting challenges faced operationally, with large economic and human safety consequences.  Numerical models frequently fail to provide forecasters with adequate guidance to anticipate banded snow.  Using techniques normally applied to springtime convective events, Astrid Suarez Gonzalez demonstrated that a combination of high-resolution modeling, data assimilation, and ensemble forecasts can greatly aid the forecast decision-making process.

Low-level Scanning Could Be Key to Reducing NWS Tornado False Alarms
Hannah Barnes investigated NWS tornado warning statistics for marginal storm events.  Hannah quantified the false alarm rate (FAR) and probability of detection (POD) for three scenarios: i) days with no reported tornadoes; ii) days with only one reported tornado; and iii) outbreak days with ten or more reported tornadoes within a Weather Forecast Office county warning area.  Hannah identified three key results.  First, the large-scale environment differed little between zero and one tornado days, but both differed significantly from large outbreak events.  Thus, there were no large-scale signatures differentiating between zero and one tornado days.  Second, the circulation intensity of false alarms at the lowest height scanned by WSR-88D radars was notably weaker than that associated with confirmed tornado warnings.  Third, the presence of obscured velocity data (marked by 'purple haze') led to a 15% increase in the false alarm rate.  Hannah's findings are critical first steps in understanding how to reduce the number of tornado false alarms. These findings may determine how future radar systems are deployed and how optimal scan strategies could be utilized.

One to two hour tornado warning lead-time may not be necessary for general public     
Stephanie Hoekstra's research provided insight into whether a 1-2 hour tornado warning lead-time (also known as warn on forecast) is currently demanded by the general public. On average, participants stated needing a minimum lead-time of 13.8 minutes but would like 33.5 minutes in an ideal situation. Her work is significant because while longer lead-times are often the focus of meteorological research, little to no research has been published regarding how the public would respond to such a warning. Stephanie and her mentors are aiming to publish her results in a peer reviewed journal.

NWS Lead Time for Severe and Damaging Hail Adequate for Preventive Measures
Lauren Potter quantified NWS warning lead times for reported severe hail and damaging hail events.  Lauren compared two years (1999 and 2000) of severe hail reports and ten years (1999-2008) of damaging hail reports from Oklahoma, Colorado, Massachusetts, and South Carolina.  Interestingly, she found no significant differences among those states in the lead time for reported severe hail or damaging hail.  The mean lead time for severe hail was 18-22 minutes, with lead times ranging from 19 to 29 minutes for damaging hail across the four states.  Overall, Lauren found that about 72% of reported hail occurs during a Severe Thunderstorm Warning and another 7% occurs during a Tornado Warning.  Such relatively long lead times and warning rates provide the general public and emergency and government services with the opportunity to take preventive measures and thereby mitigate at least some property damage from hail.

Exploring Viability of Social Networking to Communicate Weather Information
In order to begin developing an understanding of social networking as a means for communicating weather information to the general public, particularly with regard to time-critical information about severe weather, Justin Wittrock developed and distributed a web-based survey designed to address fundamental questions about this issue.  In contrast to many REU projects, which represent student participation in an ongoing research program established previously by the mentor, Justin’s project was self-initiated.  He learned how to develop an effective survey and in particular, how to pose questions in a neutral manner.  He also gained experience in the Institutional Review Board process, learned how to identify communities and sample them in appropriate ways, and how to apply statistical analysis techniques.  Most importantly, he was shown how to explain findings, rather than simply present them, and pose questions for further study based upon them.

2008 Participants:

Tornado Warning Performance Dramatically Improves Inside a Tornado Watch
Kelly Keene's research showed that having a tornado watch in place prior to the issuance of tornado warnings vastly improves the performance measures associated with tornado warnings, particularly the critical measure of probability of detection (POD). Specifically, when a tornado watch is in place, the average POD for tornadoes over the last 10 years is around 0.85, while when no watch is in place the POD drops significantly, to 0.50. This drop suggests that warning performance without a watch in place is comparable to where performance stood some 20 years ago, just prior to the implementation of the Doppler radar network.

Model of New York Harbor Improving through Ensemble Data Assimilation
The New York Harbor Observing and Prediction System (NYHOPS) is being upgraded (funded by the Navy's Small Business Innovation Research (SBIR) program) to make better use of observations routinely collected in New York Harbor. Jon Poterjoy's project on ensemble Kalman filter localization accomplished, by hand, what will become an automated procedure that maximizes the impact of a wide variety of ocean observation systems deployed in New York Harbor.

2007 Participants:

Student Discovers Error in Data
As Doug Crauder was scoring his velocity products looking for velocity dealiasing errors, he came across a situation where there were noisy velocities in a meteorologically benign area near the radar.  Doug realized these noisy regions had the classic teardrop shape associated with range-folded echoes.  A closer look identified strong storms in the fourth Doppler trip as the cause of the problem.  Normally the range-folded data should be shown as "purple haze."  In this case, the new range-aliasing mitigation technology developed by Sachidananda and Zrnic (SZ-2) was not correctly sorting the data.  National Severe Storms Laboratory scientists agree they will need to tweak threshold parameters to clean up the data.

Type of Weather Watch Affects Warning Performance
Jessica Ram's research showed that the type of watch issued by the Storm Prediction Center affects warning performance at local National Weather Service Forecast Offices.  Specifically, tornado warning performance was highest for Particularly Dangerous Situation (PDS) Tornado Watches and lowest when no watch of any kind was in effect.  The study also showed that 93% of all tornado events occurred inside some type of watch, with three-quarters occurring in either a PDS tornado watch or a tornado watch.  An interesting result of the forecaster survey is that watch type seems to influence an individual forecaster's warning threshold, such that it is lowest for a PDS tornado watch.

Long-Term Changes in Atmospheric Instability Could be Related to Increasing Temperatures
Victor Gensini found that there are long-term changes in the frequency of high instability in the atmosphere over the US, with high values occurring at the beginning and end of the analysis period, associated primarily with increased low-level moisture.  To first order, the trend resembles the US annual temperature record, implying that high instability could occur more frequently in a greenhouse-enhanced climate.  In South America, on the other hand, instability decreased throughout the period, dominated by drying conditions.

Student's Dataset Forms Basis of Competition
The storm classification data set created as part of Eric Guillot's REU project is being used in a competition sponsored by the Artificial Intelligence Committee of the American Meteorological Society at their 2008 annual meeting.

 

Highlights of Student Research Accomplishments

2010 Participants:

With current and anticipated climate change, questions arise about how the hydrologic cycle may be affected in a region. Using a GCM ensemble, Christopher Bednarczyk studied potential changes to the Blue River Basin of Oklahoma. Depending on the emissions scenario, streamflow is projected to decrease 10 to 30%. This is important because several area communities get water from this river, and there has also been talk of outside communities pumping water from it to supplement their own future water supplies.

Jeffery Deppa studied WRF model forecasts of the low level jet (LLJ) over a wind farm in southwest Oklahoma. As part of his investigation of mountain wave dynamics he studied parameters such as the Froude number and used IDV to visualize the forecasts. The WRF forecasts indicated that the strongest winds at turbine height might actually occur a few kilometers downstream of the ridgetop wind farm.

Todd Ferebee investigated the use of multi-radar, multi-sensor severe weather products in determining where different hail categories did or did not fall. He found several products did fairly well in depicting where no hail, non-severe hail, severe hail and significant-severe hail fell. Several other products showed delineation between just two categories, such as non-severe hail vs. severe hail or significant-severe hail vs. all other categories. Todd gained experience with the R statistical program and the Warning Decision Support System--Integrated Information command line utilities.

Stacey Hitchcock learned the importance of programming knowledge, writing skills, and data visualization in meteorological research. She also learned several different forms of forecast verification, including the use of Performance Diagrams (Roebber Diagrams) to convey large amounts of information succinctly in a single figure.

Eric Hout's research defined the idea of spatial vulnerability of counties based on the standardized tornado fatalities for individual counties over time. Previous studies on tornado vulnerability have provided insight on how individual factors influence overall social and spatial vulnerability. However, few studies have been conducted to evaluate the aggregated effect on vulnerability when these factors coincide. Additionally, a definition of vulnerability has been absent from the meteorological literature. Thus, to provide a more comprehensive view of vulnerability, this study proposes a mathematical definition for spatial vulnerability, and then uses tornado casualty data from 1950 through 2009 to calculate vulnerability on a county level for seven time periods. Hout analyzed trends of spatial vulnerability for each county and interpreted the spatial patterns among counties with increasing or decreasing trends of spatial vulnerability. Some patterns could be attributed to regional and others to local effects, which suggest regional and local influences on social responses to tornadoes of different damage (Fujita) scales. Eric learned the process of developing a research project with an emphasis in GIS and spatial vulnerability of tornadoes. He experienced the entire research procedure: defining a research question, literature review, data acquisition, analysis, and interpretation. He has learned GIS skills in data integration and spatial analysis.

Christopher Kerr learned to process and analyze CASA X-band polarimetric radar data and made comparisons between the radar measurements and calculated radar variables from disdrometer data. He calculated mean biases and errors of the measurements for the radar data with and without attenuation correction. The biases and errors are significantly reduced with attenuation correction, but substantial residual errors exist even after the correction. The residual errors vary depending on the location of the storm and the propagation path through the storm. This indicates that the attenuation effects have not been fully accounted for and further study is required.

Major ice storm events have become a seemingly routine component of southern U.S. winters during the past decade. In order to determine where and how frequently ice storms have occurred during the past decade, Carly Kovacik conducted a climatological analysis of ice storm events across the southern United States. Her research accomplishments included the development of a 10-year dataset of ice storm events across the southern U.S. (specifically KS, MO, OK, AR, TN, TX, LA, MS), an analysis of events to determine spatial and temporal characteristics, and a preliminary investigation into atmospheric mechanisms potentially responsible for changes in ice storm frequency observed during the past decade. Carly identified an ice storm maximum within a region stretching from far southwest Oklahoma northeastward through Missouri. One particularly important result of this research revealed significant geographical inconsistencies in ice storm frequencies across National Weather Service Forecast Office boundaries. Although limitations in the National Climatic Data Center's Storm Data (and Storm Events) archives are known, this result emphasized the lack of a universal definition for ice storms nation-wide. Through this project, Carly gained skills in building datasets, analyzing phenomena spatially, and effectively communicating results orally and in writing. The work she accomplished is already contributing to continuing efforts to study southern U.S. ice storms at the Oklahoma Climatological Survey.

Forecasters have long observed that operational models are too slow with the southward progression of shallow, arctic air across the sloping terrain immediately east of the Rocky Mountains.  This deficiency affects forecasters’ confidence in forecasting precipitation type and issuing winter storm warnings at lead times of only 12 to 24 hours.  William Leatham set out to quantify the problem.  Inspecting eleven model runs related to four winter storms, Bill found an average error of about 60 km on the location of the cold front and, more importantly, 107 km on the location of the surface freezing line.  Freezing line error increased from east to west, and, as expected, the model error was toward the north.  Twenty-one hours into one forecast, much of the freezing line from Oklahoma City to the New Mexico border was observed farther south than any of the twenty-two members of the Short Range Ensemble Forecast (SREF) had forecast.  With a robust ensemble, this result should be rare, and finding this result with a case sample size of one is troubling.  Preliminary inspection found that diabatic influences may play a large role in creating model error with respect to the freezing line.  Building a larger data set will help confirm whether a strong model bias exists, and testing sensitivity to diabatic processes may provide insight into potential causes.

Sarah Stalker investigated public actions and reactions to the 10 May 2010 Norman, Oklahoma tornado. Sarah showed great poise and enthusiasm throughout the summer and acted as a self-starter in order to get her research accomplished in the limited time period of the REU program. She learned how to work through the Institutional Review Board (IRB) process, which included detailed training and developing a description of her research process and goals. Interviews were completed with individuals living in the path of the tornado in order to gain perspective on what they chose to do during the tornado and why. A qualitative research technique (thematic analysis) was used to analyze her data and relate it to past work via a conceptual model. Participants did not feel any direct threat during early storm development and advisories and waited until the final moments to take shelter, though all subjects later believed they should have taken action sooner. Participants also stated that information on the projected storm track, similar to that provided by graphical hurricane outlooks, was more important to them than longer lead times. Sarah continued this project after REU, adding participants in Minnesota following the tornado outbreak that occurred in that region on 17 June 2010. She will present her work at the Sixth Symposium on Policy and Socio-Economic Research at the upcoming 2011 AMS Annual Meeting.

Joshua Turner investigated whether the urban heat island affects storm trajectories.  He analyzed output from a cloud tracking algorithm that provided locations and size of storms throughout their lifetime.  The goal was to see if changes in storm velocity (both direction and speed) were correlated to urbanized centers.  A threshold filter was applied to eliminate spurious results from different storms joined together.  There was no clear relationship of changes in velocity to urbanization, but signals may have been swamped by noise due to the time limitations in developing the thresholding filter.  Josh’s MatLab programming techniques were greatly expanded during this project.

Available datasets of global wetlands, water bodies, and seasonally inundated areas do not meet the needs of greenhouse gas flux models, which are used to estimate the flow of trace gases such as methane from the land surface into the atmosphere. Kevin Van Leer contributed to efforts to develop advanced water mapping techniques by investigating the effect that pixel scale has when flooded area is determined from satellite remote sensing imagery. Kevin's research showed that classified coarse-resolution imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS, 250- to 2000-m) can significantly underestimate total inundation area compared with fine-resolution Landsat imagery (30 m). Furthermore, he showed that 250-m MODIS imagery did not improve the inundation area estimate compared with coarser-resolution imagery. Through this project, Kevin developed skills in spectral analysis, image classification, and spatial analysis of satellite imagery. He also gained experience in the Matlab programming environment and in the handling of large remote sensing datasets.

The Storm Prediction Center (SPC) has begun to develop a measured severe thunderstorm gust dataset that is partially independent of the National Weather Service's severe weather report database. Andrew Winters played an integral role in the development of the measured wind dataset by parsing surface observation data and then subsequently analyzing spatial relationships in a Geographic Information System framework and analyzing near-storm environmental variables linked to each individual gust.  Through his work, the SPC is in the early stages of obtaining an objective, non-damage biased severe thunderstorm wind gust climatology.  Preliminary results show measured severe wind gusts to be most frequent in the Plains and portions of the Midwest.  This is in contrast to the severe thunderstorm wind database showing the maximum in severe thunderstorm wind gust/damage frequency over the Appalachians.  It was found that a much lower frequency of measured wind gusts exists there compared to the Plains. Through this work Andrew increased his skills in manuscript writing, oral presentations, statistical methods, and Geographic Information System analysis techniques.

 

2009 Participants:

Hannah Barnes investigated NWS tornado warning statistics for marginal storm events.  Hannah quantified the false alarm rate (FAR) and probability of detection (POD) for three scenarios: i) days with no reported tornadoes; ii) days with only one reported tornado; and iii) outbreak days with ten or more reported tornadoes within a Weather Forecast Office county warning area.  Hannah identified three key results.  First, the large-scale environment differed little between zero and one tornado days, but both differed significantly from large outbreak events.  Thus, there were no large-scale signatures differentiating between zero and one tornado days.  Second, the circulation intensity of false alarms at the lowest height scanned by WSR-88D radars was notably weaker than those associated with confirmed tornado warnings.  Third, the presence of obscured velocity data (marked by ‘purple haze’) led to a 15% increase in the false alarm rate.  Hannah’s findings are critical first steps in understanding how to reduce the number of tornado false alarms.
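For readers unfamiliar with the verification statistics used above, the sketch below shows how POD and FAR are typically computed from a 2x2 warning-verification table; the counts are illustrative placeholders, not values from Hannah's dataset.

```python
# Minimal sketch of two standard warning-verification statistics; the counts
# below are hypothetical and not from this study.

def pod(hits, misses):
    """Probability of Detection: fraction of observed events that were warned."""
    return hits / (hits + misses)

def far(hits, false_alarms):
    """False alarm ratio/rate as commonly used in warning verification:
    fraction of warnings with no verified event."""
    return false_alarms / (hits + false_alarms)

hits, misses, false_alarms = 40, 10, 25   # made-up counts
print(f"POD = {pod(hits, misses):.2f}")           # 0.80
print(f"FAR = {far(hits, false_alarms):.2f}")     # 0.38
```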

Wind ramp events – abrupt changes in wind power output due to variations in wind speed – are a growing concern in the wind power industry. Kristen Bradford examined the climatology of wind ramp events at 34 METAR sites in the Southern Plains during June-July 2009. The observations were used to validate Weather Research and Forecasting (WRF) model forecasts on a 3-km grid. Owing to the paucity of instrumented tower data, 10-m winds were used for the study. The WRF model performed well during frontal passages but did not capture the temporal variability of the observations. Similarly, although there was little overall bias in the forecast wind speeds, many more ramps were noted in the observations owing to their temporal variability.

David Gorree collected 20 years of 1-km resolution, biweekly maximum-value composite normalized-difference vegetation index (NDVI) data from polar orbiting satellites over the contiguous United States and converted the data to vegetation fraction for periods centered near 1 April and 1 May.  He analyzed these data to produce mean, maximum, minimum, and standard deviation fields and to explore interannual vegetation variability.  Along the way he developed improved skills in programming languages and visualization tools.
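As an illustration of the NDVI-to-vegetation-fraction step mentioned above, the sketch below applies a common linear scaling between assumed bare-soil and dense-vegetation NDVI endpoints; the endpoint values are assumptions for illustration, not necessarily those used in David's work.

```python
import numpy as np

# Hedged sketch: convert NDVI to green vegetation fraction with a linear
# scaling between assumed bare-soil and dense-vegetation NDVI endpoints.
# The endpoint values below are illustrative assumptions.
NDVI_SOIL = 0.05   # assumed NDVI of bare soil
NDVI_VEG = 0.80    # assumed NDVI of fully vegetated pixels

def vegetation_fraction(ndvi):
    """Linearly scale NDVI to a 0-1 vegetation fraction and clip the result."""
    fv = (ndvi - NDVI_SOIL) / (NDVI_VEG - NDVI_SOIL)
    return np.clip(fv, 0.0, 1.0)

ndvi_composite = np.array([0.10, 0.35, 0.62, 0.85])   # sample composite values
print(vegetation_fraction(ndvi_composite))
```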

Stephanie Hoekstra got a taste of what it is like to integrate social science into meteorology. She looked at how the public perceives severe weather as well as whether tornado warning lead-times longer than the current average lead-time (about 13 minutes) are in demand. She surveyed National Weather Center (NWC) visitors ranging from 18 to 65+ years of age. Many social science studies only sample undergraduate students, so the broad range in ages is noteworthy. Stephanie found that her sample population perceived weather risks and fatalities fairly accurately, but interesting patterns were emerging for the way different age groups or people from different areas perceived and ranked these risks. She also found that the participants would prefer a tornado warning lead-time of at least 13.8 minutes, with an ideal lead time of 33.5 minutes. Stephanie learned about creating surveys and the difficulties that can accompany that process. Additionally, she learned some introductory methods for analyzing survey data, as well as ways to correlate the perceptions of those surveyed to the climatology of severe events.

Alex Kowaleski evaluated lightning and severe thunderstorm forecasts from the European Storm Forecast Experiment (ESTOFEX). He found that between-forecaster variability is about the same as or less than between-season variability, suggesting that the ESTOFEX forecasters put out products that look like they come from a single unit rather than from individual forecasters. By utilizing new approaches to visualization of forecast performance, Alex was able to show the progression of forecast performance through the year and between different years. Such techniques will be applied in the future to forecasts from the NWS Storm Prediction Center.

Lauren Potter quantified NWS warning lead times for reported severe hail and damaging hail events.  Lauren compared two years (1999 and 2000) of severe hail reports and ten years (1999-2008) of damaging hail reports from Oklahoma, Colorado, Massachusetts and South Carolina.  Interestingly, she found no significant differences among those states in the lead time of reported severe hail or damaging hail.  The mean lead time for severe hail was 18-22 minutes, with a lead time ranging from 19 to 29 minutes for damaging hail across the four states.  Overall, Lauren found that about 72% of reported hail occurs during a Severe Thunderstorm Warning and another 7% occurs during Tornado Warnings.  Such a relatively long lead time and warning rate provides the general public and emergency and government services with the opportunity to take precautions and thereby mitigate at least some property damage from hail.

Astrid Suarez Gonzalez considered numerical forecasts of banded snow. Banded snow is one of the greatest winter weather forecasting challenges faced operationally, with large economic and human-safety consequences.  Her work focused specifically on techniques that have been used successfully to improve forecasts of springtime convective systems: namely, convection-permitting forecasts, ensemble forecasts, and data assimilation.  The work she did is quite novel: some argue that winter phenomena may not be as sensitive to these techniques because the flow is usually forced by larger-scale processes. However, Astrid demonstrated that these techniques can greatly improve a forecast.  Astrid came to REU with a great interest in anything related to numerical modeling.  She had never heard of data assimilation before but was enthusiastic to learn and try it, and she is now very interested in this research area.

Cristal Sampson (funded through CUNY's REU) conducted evaluation research on existing user feedback surveys from the ongoing research and development of the National Weather Radar Testbed's Phased Array Radar (PAR). The PAR is a research radar under consideration to replace the Weather Surveillance Radar-1988 Doppler (WSR-88D). Because PAR is a new technology, it is important to incorporate user input during the development stage to ensure that intended users not only understand the operational utility of PAR but also receive the most usable tool upon deployment. Results from experiments held in 2008 and 2009 have already assisted researchers developing PAR. The participants of these experiments evaluated real-time and archived cases, completing questionnaires after each evaluation. Cristal analyzed the responses to two archived cases using a data-driven method. The results show how the high-temporal-resolution PAR data affected the participating forecasters in a simulated warning environment, and she offered suggestions to improve future research and development.

Jeff Viel performed a robust statistical analysis of temperature time series data for US cities with weather contracts that trade on the Chicago Mercantile Exchange.  Jeff utilized Fourier decompositions of the data to remove the seasonal signal in the first and second statistical moments, leaving a distribution of historical residuals.  Jeff demonstrated that the residual distributions are, in most cases, not drawn from a normal population. This finding is very important, for it has implications for the way in which options on weather futures contracts could be priced.  Then, using these statistics, he developed a stochastic model that attempts to simulate realistic temperature paths for the eventual purpose of incorporation into a pricing model.  The preliminary results of Jeff's model seem very promising.
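The sketch below illustrates the general approach described above, assuming a single annual Fourier harmonic fit by least squares and a standard normality test on the residuals; the synthetic data and one-harmonic model are illustrative assumptions, not Jeff's actual analysis.

```python
import numpy as np
from scipy import stats

# Hedged sketch: fit a single annual Fourier harmonic to a daily temperature
# series by least squares, remove it, and test the residuals for normality.
# The synthetic series and model order are illustrative assumptions.
days = np.arange(3650)
temps = (60.0 + 25.0 * np.sin(2 * np.pi * days / 365.25)
         + 5.0 * np.random.standard_t(df=5, size=days.size))   # heavy-tailed noise

# Design matrix: mean plus first annual harmonic
X = np.column_stack([
    np.ones(days.size),
    np.sin(2 * np.pi * days / 365.25),
    np.cos(2 * np.pi * days / 365.25),
])
coeffs, *_ = np.linalg.lstsq(X, temps, rcond=None)
residuals = temps - X @ coeffs

# D'Agostino-Pearson test: a small p-value rejects normality of the residuals
stat, p_value = stats.normaltest(residuals)
print(f"normality test p-value: {p_value:.4f}")
```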

Travis Visco used a new linear least squares derivatives (LLSD) technique (developed at NSSL) to derive azimuthal shear and radial convergence fields. He compared the shear and convergence fields to tornado tracks to form a distribution. His study represents the first such effort to obtain these distributions of the derived LLSD fields. Travis also separated out the first tornadoes from storms in our database. By doing this, he could isolate trends in the fields prior to tornado touchdown (i.e., avoid interference from ongoing tornadoes). These trends showed a general increase in azimuthal shear prior to tornado touchdown, especially in the final 5 to 10 minutes. Travis' most significant finding was the large spread in the distribution of azimuthal shear values.

Jonathan Vogel conducted a survey of NWS meteorologists to assess the impacts of super-resolution radar data on signature recognition and warning-decision making. The majority of meteorologists surveyed indicated that they did see an improvement in signature recognition for various signatures noted in the literature (i.e. gust fronts/boundaries). When it came to warning-decision making, the meteorologists were a little more reserved in their comments because they wanted to give super-resolution data a little more time before making a judgment. Jonathan gained valuable experience in developing and conducting human surveys. He also gained experience in analyzing radar data.

In order to begin developing an understanding of social networking as a means for communicating weather information to the general public, particularly with regard to time-critical information about severe weather, Justin Wittrock developed and distributed a web-based survey designed to address fundamental questions about this issue. In contrast to many REU projects, which represent student participation in an ongoing research program established previously by the mentor, Justin’s project was self-initiated. He learned how to develop an effective survey and in particular, how to pose questions in a neutral manner. He also gained experience in the Institutional Review Board process, learned how to identify communities and sample them in appropriate ways, and how to apply statistical analysis techniques. Most importantly, he was shown how to explain findings, rather than simply present them, and pose questions for further study based upon them.

 

2008 Participants:

Blake Allen investigated the effects of enabling prediction of a second microphysical moment, number concentration, for each hydrometeor category in a mixed phase, bulk microphysics scheme in a cloud-resolving numerical prediction model. The electrification of the storm was also found to be quite sensitive to changes in the microphysics complexity, due at least in part to variations in cloud ice and graupel production in the different model runs. Along the way, he learned about scientific computing in a UNIX environment, using 3D visualization tools, and helped uncover errors in the model code as it was being developed.

Severe weather watches are part of a series of products issued by the Storm Prediction Center (SPC) that are used to alert forecasters, emergency managers, the media, and the public to the likelihood of severe weather. What makes severe weather watches important is their ability to help improve public safety and save lives by making people aware of the potential danger of severe weather within their area in the hours immediately following the issuance of the watch. Becky Belobraydich surveyed college students at Northern Illinois University and The University of Oklahoma, and the students' responses were analyzed to see what they knew about watches and how they responded to them. The responses were in line with expectations and point to the fact that the majority of students knew their own county but did not know the counties adjacent to it. More research and public education are warranted for SPC watches to achieve their full effect.

Tim Bonin looked into the notion that we have experienced more ice storms in the southern plains in recent decades, compared to prior history. His research combined two large datasets: the climatological record of winter precipitation and upper-air data that informs it. He did a solid job of working around, and reasoning through, some limitations of the data. His results showed that the precursors for icing events remained largely unchanged, but the scenarios that potentially support high-end events may have increased in the last decade.

Madison Burnett focused on evaluating the amount of tornado activity that occurred during the autumn and winter months of 2007-08 and comparing that activity with the historical record of tornadoes and tornado-related fatalities. She discovered a substantial upswing in all tornado reports over the past 50 years but very little change, or even a slight downward trend, in the strongest and most violent tornadoes reported during the cool season months over this same period of time. Further analysis revealed that cool seasons with a large number of tornado-related fatalities have appeared in the record about once a decade over the past 50 years. In order to conduct this work, Madison had to become proficient not only in the format of the NWS/SPC tornado database but also in the use of the structured query language used to evaluate tornado data contained within the database. In a side project comparing the SPC and NCDC tornado databases, Madison uncovered a peculiar inconsistency in which extra counties existed in the SPC database for tornadoes tracking up to, and perhaps crossing, a county border. This inconsistency was most noticeable for a period of the 1970s and may have been due to different tornado coding standards used at the time. Further investigation into this issue is needed.

Brad Hegyi applied lake-effect snow forecasting parameters to lake-effect snow cases on the west side of Lake Michigan to see if those parameters were helpful in forecasting those relatively rare lake effect events. He found that northeast and north-northeast winds at 850 and 925 mb were common to western Lake Michigan lake-effect snow events, in addition to a minimum of a 13°C temperature difference between 850 mb and the lake surface.
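As a simple illustration of how such forecasting parameters can be screened, the sketch below checks an assumed northeast-flow sector and the 13°C lake-minus-850-mb temperature difference for a single time; the wind-direction bounds and sample values are hypothetical.

```python
# Hedged sketch of screening one sounding time against the parameters above;
# the wind-direction sector and the sample values are hypothetical.
def favorable_lake_effect(wind_dir_850_deg, temp_850_c, lake_sfc_temp_c):
    """True if the flow is roughly NE-NNE and the lake-minus-850-mb temperature
    difference is at least 13 C."""
    northeast_flow = 10.0 <= wind_dir_850_deg <= 60.0        # assumed NNE-NE sector
    unstable_enough = (lake_sfc_temp_c - temp_850_c) >= 13.0
    return northeast_flow and unstable_enough

print(favorable_lake_effect(wind_dir_850_deg=30.0, temp_850_c=-12.0, lake_sfc_temp_c=3.0))  # True
```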

Christina Holt investigated the physical characteristics of a tornado-producing mini-supercell that occurred over Oklahoma during Tropical Storm Erin. The mini-supercell had a shallow circulation only 4.5 km in diameter that extended through a depth of only 3 km. These physical attributes are consistent with previous studies of similar storms. Christina's study was unique in that she was able to use data from the Multifunction Phased Array Radar (MPAR), operating in Norman, OK, to sample the rapid intensification to a tornadic phase. This transition took only 3 minutes. The case serves as an example of how higher temporal sampling might improve the detection of hurricane-spawned tornadoes and improve forecasts and warnings of them.

Kelly Keene examined tornado warning performance in relation to watches for 1997-2007. Her database consisted of over 30,000 tornado warnings and over 15,000 tornadic events. She found that the issuance of any watch improves overall tornado warning performance. The Probability of Detection (POD) of tornado warnings increases by 0.327 when a tornado watch is in effect, as opposed to when no watch is in effect. Lead time from tornado warning issuance to tornado occurrence improves by an average of five to six minutes when a tornado watch is in effect, as opposed to no watch. Finally, when a tornado watch is in effect, there is a slight decrease (by 0.81) in False Alarm Ratio (FAR) compared to when no watch is in effect.

Jennifer Newman analyzed data from nine interviews with meteorologists from two key stakeholder groups in the Southern Plains, NWS forecasters and TV broadcasters, to obtain specific information about current radar capabilities and how those capabilities helped or hindered participants' ability to fulfill their roles. Her analysis revealed that the problems participants spoke of fell into four basic needs. First, meteorologists clearly conveyed the need for reliable, clean, and accurate radar data. Second, several stories involved weather situations that evolved more rapidly and on smaller spatial scales than the WSR-88D can sample. Third, both groups told stories illustrating the advantages of high-resolution, low-altitude station or TDWR radar data, and how the lack of that information in other areas hampered their awareness of the weather that was occurring. Finally, the size, distribution, and type of hydrometeors in both warm and cold season events were critical information that participants could only partially infer from current radar data.

Jonathan Poterjoy's project addressed a fundamental question about using observations to improve ocean models: what area/volume within an ocean model should a single observation influence? His work shows that the answer varies widely depending on the local bathymetry, depth, and variable, and, importantly, that there are high correlations at great distances from the observation that are spurious and must be trimmed (i.e., localization). His work also shows that some cross-variable correlations (e.g., salinity/temperature) are significant. With the knowledge generated by Jonathan's project, there is now a benchmark for devising automated methods to calculate localization distances.
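To make the idea of localization concrete, the sketch below computes distance-dependent weights with the widely used Gaspari-Cohn taper; this is offered only as a generic example of trimming spurious long-distance correlations, not as Jonathan's specific method for choosing localization distances.

```python
import numpy as np

# Hedged sketch of distance-based covariance localization using the Gaspari-Cohn
# taper; the half-width and distances below are illustrative values.
def gaspari_cohn(dist, c):
    """Localization weight for separation `dist` and half-width `c`
    (the weight reaches zero at 2*c)."""
    r = np.abs(dist) / c
    w = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r <= 2.0)
    w[inner] = (-0.25 * r[inner]**5 + 0.5 * r[inner]**4 + 0.625 * r[inner]**3
                - 5.0 / 3.0 * r[inner]**2 + 1.0)
    w[outer] = (r[outer]**5 / 12.0 - 0.5 * r[outer]**4 + 0.625 * r[outer]**3
                + 5.0 / 3.0 * r[outer]**2 - 5.0 * r[outer] + 4.0
                - 2.0 / (3.0 * r[outer]))
    return w

distances_km = np.array([0.0, 50.0, 150.0, 400.0])
print(gaspari_cohn(distances_km, c=100.0))   # weights taper to zero beyond 200 km
```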

Christopher Wilson successfully used a high-resolution hail verification dataset to evaluate several hail diagnosis techniques. Chris's project may also be one of the first projects to use lead time, in a meaningful way, in algorithm performance evaluation. Chris tracked observed hail sizes and storm attributes at discrete times (i.e., a radar volume); this is in contrast to other studies which arbitrarily relate storm attributes to hail sizes by using a time window (e.g., +/- 20 min). While different diagnosis techniques had high probability of detection and low false alarm ratio scores, which hint at good performance, relatively high probability of false detection scores hampered overall performance (determined by Heidke Skill Score). Finally, Chris showed that for lead times greater than 10 min, all evaluated hail diagnosis techniques showed poor skill.
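For reference, the sketch below shows how the probability of false detection and the Heidke Skill Score are typically computed from a 2x2 contingency table; the counts are made up for illustration and are not from Chris's verification dataset.

```python
# Hedged sketch of the skill measures mentioned above, computed from a 2x2
# contingency table (hits a, false alarms b, misses c, correct nulls d);
# the counts are illustrative, not from this study.
def pofd(b, d):
    """Probability of false detection: fraction of non-events falsely flagged."""
    return b / (b + d)

def heidke_skill_score(a, b, c, d):
    """Heidke Skill Score: accuracy relative to that expected by chance."""
    return 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

a, b, c, d = 30, 20, 10, 40   # made-up counts
print(f"POFD = {pofd(b, d):.2f}")                     # 0.33
print(f"HSS  = {heidke_skill_score(a, b, c, d):.2f}") # 0.40
```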

Jeff Zuczek investigated the climatology of when weather would likely be favorable for prescribed burning. Such burns are critical for controlling invasive plants, including the Eastern Red Cedar, which is notorious for breaking up pastureland and wildlife habitats. Wind climatology using Oklahoma Mesonet data from January 1994 through May 2008 was analyzed to determine the number of days each year meeting all criteria for prescribed burns, using a consensus definition for "favorable burn day" from Oklahoma's 11 burn associations. Jeff's results showed good burning conditions were more likely earlier, rather than later, in the burn season.
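The sketch below illustrates the kind of day-counting screen such a climatology involves, using a tiny made-up table of daily Mesonet-style observations; the thresholds shown are illustrative assumptions, not the consensus burn-day definition used in Jeff's study.

```python
import pandas as pd

# Hedged sketch: count days per year meeting illustrative "favorable burn day"
# criteria; the column names, thresholds, and values are assumptions.
obs = pd.DataFrame({
    "date": pd.to_datetime(["1994-02-10", "1994-03-02", "1995-02-21", "1995-04-15"]),
    "wind_speed_mph": [8.0, 22.0, 12.0, 6.0],
    "relative_humidity_pct": [45.0, 30.0, 55.0, 38.0],
    "air_temp_f": [62.0, 71.0, 58.0, 84.0],
})

favorable = (
    obs["wind_speed_mph"].between(5, 15)
    & (obs["relative_humidity_pct"] >= 40)
    & obs["air_temp_f"].between(40, 80)
)

days_per_year = obs.loc[favorable, "date"].dt.year.value_counts().sort_index()
print(days_per_year)   # favorable days per year under these illustrative criteria
```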

2007 Participants:

Rachel Butterworth took the opportunity to work on developing a proposal that, if funded, could lead to her master's degree. Whether or not that comes to pass, she learned a great deal about how to research and develop a proposal. She took her initial idea of addressing gaps in scientific literacy among the general public, combined it with the anticipated capabilities of new radar technologies such as the CASA concept, and developed an education and communication plan that would help people take advantage of weather technology in their daily activities.

Douglas Crauder successfully demonstrated the feasibility of using two rather than three Doppler scans with the Multiple PRF Dealiasing Algorithm when one of the two Doppler scans uses the phase coding logic developed by Sachidananda and Zrnic to mitigate range aliasing.  The significance of using two rather than three Doppler scans is that nearly thirty seconds can be removed from the time required to complete a volume scan of data which is normally between five and six minutes.  During severe weather a reduction of thirty seconds is important to operational forecasters who want a rapid update for assessing storms.  The Applications Branch expects to submit change requests to the WSR-88D system to add new volume coverage patterns based on his findings.

Victor Gensini analyzed time series from 1958-1999 of high values of atmospheric variables important for severe thunderstorms in regions of North and South America, based on NCAR/NCEP reanalysis data.  He learned how to examine cumulative distribution functions of very large datasets in an efficient manner, so that comparisons between different periods and locations could be made.
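As an illustration of comparing empirical cumulative distribution functions between periods, the sketch below builds CDFs from two synthetic samples and compares an upper percentile; the samples and period labels are illustrative, not the reanalysis data Victor used.

```python
import numpy as np

# Hedged sketch: compare empirical CDFs of a large sample of some
# severe-weather-relevant variable between two periods. The synthetic
# gamma-distributed samples and the period split are illustrative only.
def empirical_cdf(values):
    """Return sorted values and their cumulative probabilities."""
    x = np.sort(values)
    p = np.arange(1, x.size + 1) / x.size
    return x, p

early = np.random.gamma(shape=2.0, scale=500.0, size=100_000)
late = np.random.gamma(shape=2.2, scale=500.0, size=100_000)

for label, sample in [("earlier period", early), ("later period", late)]:
    x, p = empirical_cdf(sample)
    p99 = x[np.searchsorted(p, 0.99)]
    print(f"{label}: 99th percentile ~ {p99:.0f}")
```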

Eric Guillot found that the amount of forecast skill involved when issuing tornado and severe thunderstorm warnings is closely related to the type of storm that causes the severe weather. It was found that, for a sample of over 3000 warnings, both tornado warnings and severe thunderstorm warnings issued for isolated supercells and convective line storms have better skill scores than those issued for pulse and non-organized storms. Lead times were consistently longer for supercell and line storms, while usually very short for pulse and non-organized storms. We concluded that measures of forecast skill are particularly sensitive to the type of storm. Thus, any measurement of forecast skill, such as the year-over-year skill measure of an individual forecast office, has to take into account the types of storms in that office’s warning area in the time period considered. This project focused on the analysis of multi-radar, multi-sensor data from convective storms, statistics, and severe weather verification techniques.

Stephanie Henry helped to develop a procedure for determining cloud forests in Central America using MODIS imagery with 250m spatial resolution.  Cloudiness was extracted from the visible images via an algorithm and this cloudiness was further stratified by its annual and diurnal variations.  Together these allowed the mapping of regions of differing cloudiness, which could then be related to cloud forests estimated via other means.  The procedure can be applied globally to map different vegetation regimes based on the satellite observed cloudiness.

Luke McGuire developed a tool that allows us to simulate satellite orbit tracks and sensor field-of-view configurations and to project those tracks onto the globe.  Then, using a cloud-cover database, he developed a method to determine, for each satellite footprint, the probability that the sensor will encounter clouds – and if it does, the expected cloud altitude. This tool allows us to investigate the impact of cloud cover on satellite configurations.  The results can be used to assess the utility of new sensors by allowing for a robust simulation of clouds when we simulate the satellite measurements.  Luke wrote an excellent summary of his work, and we are in the process of expanding this into a refereed journal paper.

Scott Powell learned a number of social science statistical approaches and how to properly manipulate variables involved in how people make decisions based on weather information. He was very quick to learn a new statistical software package (SPSS), effectively incorporated the valuable tips his mentor, a professor in the Department of Communications, gave him on presentation skills, and wrote a well-formatted, organized, and developed final paper. Scott found that individuals' responses to weather information vary demographically, especially by geography, age, and gender. Californians, for example, reported less planning, readiness, and trust in weather information, no matter the source. Over one third of the sample population did not know the difference between a severe weather watch and a warning.

Jessica Ram successfully quality controlled and organized thousands of storm reports and National Weather Service-issued warnings for over 250 watches from the first few months of 2006.  She learned about 2x2 contingency tables and statistics related to warning and watch performance, such as false alarm rate, probability of detection, and critical success index.  She also received over 40 completed surveys from NWS forecasters all across the country.

Bo Tan worked on developing a strategy to relate satellite imagery to tropical wave positions identified with special radiosonde data collected over West Africa during the NAMMA - 2006 field program.  The procedures that Bo explored will be expanded to develop multi-year climatologies based on geostationary satellite imagery that distinguish rapidly developing tropical waves from those that develop slowly, or do not develop.  This ongoing study should benefit longer-range hurricane forecasting over the Atlantic.
