Author Archives: Kerry Lynn Ryan

Seismicity of the 2014 Oso, WA Landslide

The banks of the Stillaguamish River near Oso, Washington have a history of landslides; six events were documented in the last 50 years. On March 22nd, 2014, a catastrophic landslide occurred ~6.5 km from Oso, causing 43 fatalities. The landslide traveled approximately 1.1 km, covering a nearby highway, destroying several homes, and damming the Stillaguamish River. To investigate the slope failures, a team of scientists analyzed short- and long-period seismic signals generated by the landslide.

They found that the landslide consisted of a series of failures, with two major collapses occurring ~3 min apart. The first event showed features characteristic of seismic signals generated by landslides, notably an emergent onset and a lack of clear P and S waves. The second event was more impulsive, with several discernible amplitude peaks.

Figure (from Hilbert et al., 2014): seismic signals from the two events, filtered between 1-3 Hz (panels a and c) and 3-10 Hz (panels b and d). The entire report can also be found at that link.

The long-period signals were used to invert for the forces acting at the source. Combined with remote sensing data, these force estimates allowed the team to estimate the volume of material displaced by the landslide. The first event displaced between 6.0e6 and 7.5e6 cubic meters of material; in total, between 7e6 and 10e6 cubic meters were mobilized during the landslide.
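The basic idea behind this kind of long-period inversion (a schematic description, assuming the slide can be treated as a single block of roughly constant mass; the actual modeling in the report is more involved) is that the bulk acceleration of the slide mass exerts an equal and opposite single force on the Earth:

F(t) ≈ −m a_cm(t)

where m is the mobilized mass and a_cm(t) is the acceleration of its center of mass. The inverted force history, combined with the runout geometry seen in remote sensing data, constrains m, and dividing by an assumed bulk density gives the volume estimates quoted above.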

Aftershock triggering model using revised rate and state friction law

The rate- and state-dependent friction (RSF) laws are empirical relations based on laboratory experiments that have been used to model a variety of earthquake behaviors, including the mechanics of a seismic cycle, episodic aseismic slip, and triggered seismicity (Kame et al., 2013). These laws describe variations in friction based on the loading rate and the state of the sheared zone. There are several forms of the RSF laws. The paper summarized below is based on the RSF law proposed by Dieterich (1979) and a more recently revised version proposed by Nagata et al. (2012).
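For reference, the Dieterich (1979) law in its commonly quoted aging-law form (standard textbook notation, not copied from the papers listed below; the Nagata revision modifies the state evolution with an additional stress-weakening term) can be written as

μ = μ₀ + a ln(V/V₀) + b ln(V₀ θ / D_c),    dθ/dt = 1 − V θ / D_c

where μ is the friction coefficient, V the slip rate, θ the state variable describing the sheared contacts, D_c a characteristic slip distance, and a and b laboratory-derived constants.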

In 1994, Dieterich modeled aftershock seismicity following an imposed stress step using his RSF model. His model can reproduce the observed 1/t decay of the aftershock rate, but there are two major observational gaps: (1) the model underpredicts aftershock productivity, and (2) it predicts too long a delay time before the onset of decay. In a recent paper, Kame et al. (2013) sought to address these gaps by running similar models using the Nagata RSF law.
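For context, the Dieterich (1994) expression for the seismicity rate R following a sudden stress step Δτ applied to a population of faults (written here in its commonly quoted form; the notation may differ slightly from the original paper) is

R(t) = r / { 1 + [exp(−Δτ/(aσ)) − 1] exp(−t/t_a) },    t_a = aσ / τ̇_r

where r is the background seismicity rate, aσ the constitutive parameter times normal stress, and τ̇_r the background stressing rate. For a large positive stress step this produces the Omori-like 1/t decay at times shorter than the aftershock duration t_a.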

Dieterich’s model considered a fault of fixed size embedded in an elastic medium. He was able to solve for the aftershock rate analytically. Kame et al. (2013) applied the Nagata law, which contains a stress weakening effect, to a similar model but found that the problem required a numerical solution.

Main observations from the Kame et al. (2013) study:

1) Although the revised model produced greater aftershock productivity and shorter delay times, these improvements were only by a small factor, whereas the disparities with natural observations span several orders of magnitude.

2) Unlike the Dieterich model, in which a stress step always advances the timing of an earthquake, the revised model showed two different types of behavior. In most cases, the timing of the earthquake was advanced. However, if the stress step occurred at a specific time in the loading history of the fault, oscillatory slow slip cycles began, effectively delaying the onset of the earthquake.


For more details on this study see:

Kame, Nobuki, et al. “Effects of a revised rate- and state-dependent friction law on aftershock triggering model.” Tectonophysics 600 (2013): 187-195.
http://www.sciencedirect.com/science/article/pii/S004019511200755X

Other sources:

K. Nagata, M. Nakatani, and S. Yoshida. A revised rate- and state-dependent friction law obtained by constraining constitutive and evolution laws separately with laboratory data, 2012.
http://onlinelibrary.wiley.com/doi/10.1029/2011JB008818/abstract

J.H. Dieterich. A constitutive law for rate of earthquake production and its application to earthquake clustering, 1994. http://onlinelibrary.wiley.com/doi/10.1029/93JB02581/abstract

J.H. Dieterich. Modeling of rock friction 1. Experimental results and constitutive equations, 1979.
http://onlinelibrary.wiley.com/doi/10.1029/JB084iB05p02161/abstract

Tomography reveals spatial extent of Yellowstone magma reservoir

Yellowstone is one of the world’s largest volcanic systems. In the last 2.1 Ma it has had three major eruptions (2.1, 1.3, and 0.64 Ma), releasing an estimated 2500 km^3, 280 km^3, and 1000 km^3 of material, respectively. Yellowstone also features widespread seismicity and ground deformation rates of up to 7 cm/yr. Recently, Farrell et al. compiled an earthquake data set spanning 1984 to 2011, which they used to construct a 3D P-wave velocity structure of the Yellowstone system. The low P-wave velocity zone is taken to be the volcanic reservoir, which turns out to be 2.5 times larger than suggested by a previous study. The goal of this study was to provide a better estimate of magmatic volume, melt distribution, and fluid state, all of which influence the volcanic and earthquake hazard in the area.

This study looked at over 48,000 P-wave arrivals from more than 4,500 earthquakes. Only events with at least 8 P-wave observations and arrival-time uncertainties of less than 0.12 s were used. Farrell et al. used an automatic picking method to ensure consistency in the selection of first P-wave arrivals. They inverted the data for hypocenters, origin times, and the 3D P-wave velocity structure. Sensitivity tests showed that they had adequate resolution to identify low-velocity bodies to depths of ~17 km.

Based upon their inversion, they estimated the volume of melt in the Yellowstone system to be between 200 km^3 and 600 km^3. This suggests that there is enough material available for an eruption of similar size to the 1.3 Ma event.

For more details, check out the paper here.

L.A. Preparing for the next “Big One”

The March 1st, 2015 edition of EOS featured an article about Los Angeles and its recent effort to prepare for a large earthquake in southern California. Over the past year, Lucy Jones, a seismologist from the USGS, has been working at L.A. city hall to help devise a plan for preparing the city for the next big earthquake. The article focused mainly on discussing which steps the community thought were most important for minimizing damage from future earthquakes. The “Resilience by Design” report released in December focused on retrofitting vulnerable structures, preserving access to water, and preserving telecommunications. Completing the suggested tasks will likely cost billions of dollars. I think this article raises an interesting public issue: how much time and money should be invested in earthquake preparedness measures that, although expensive, could potentially save thousands of lives and billions of dollars when the next large earthquake inevitably strikes? Should all cities be developing similar preparedness plans?

Reading this article also prompted me to think a lot about earthquake prediction and a book I recently read, Predicting the Unpredictable by Susan Hough. The book was a very interesting and enjoyable read (I would highly recommend it and would be happy to let anyone borrow it). It describes the highs and lows felt by the seismological community over the past several decades, when there was hope that we would be able to reliably make short-term predictions of large earthquakes. While reliable short-term predictions may not be possible, we can use statistical seismology to estimate the likelihood of when events will occur.

Assessing Earthquake Hazards in central Oklahoma

‘Earthquake hypocenters and focal mechanisms in central Oklahoma reveal a complex system of reactivated subsurface strike-slip faulting,’ McNamara et al., Geophysical Research Letters, 2015.

Since late 2009, there has been a notable increase in the number of earthquakes with magnitude greater than 3 in Oklahoma. Given that the most recent activity does not follow patterns normally observed in the area, many believe that injection of wastewater from the oil and gas industry may be contributing to the increased seismic activity.

To better understand the cause of this activity and to better assess potential hazard, McNamara et al. used a multiple event location method to relocate over 3,600 events and used Regional Moment Tensors to compute source parameters for 195 events in Oklahoma.

They found that a majority of the faults are optimally oriented for failure relative to the regional compressive stress field and that most of the earthquakes originate in the shallow crystalline basement. Based upon these observations, they conclude that there is a high potential for earthquake hazard in central Oklahoma.

For more details check out the article here.

Understanding Einstein Summation Notation

In class we are using a continuum mechanics approach to study stresses and strains within the earth. Many of the equations we will use to describe the motion of the earth are written using Einstein summation notation. Therefore, it is important to understand how this convention works.
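As a quick worked example (standard notation, not taken from any particular course handout): a repeated index within a term is a “dummy” index and implies summation over 1, 2, 3, while an index that appears only once is a “free” index labeling separate equations. So

a_i b_i = a_1 b_1 + a_2 b_2 + a_3 b_3    (dummy index i; the result is a scalar)
t_i = σ_ij n_j = σ_i1 n_1 + σ_i2 n_2 + σ_i3 n_3    (dummy index j, free index i: three equations)

and the Kronecker delta simply substitutes one index for another under contraction: δ_ij a_j = a_i, with δ_ii = 3.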

When I first encountered the Einstein summation notation I had some difficulty understanding what the equations actually meant. I’ve included some of the external resources that helped me to better understand this convention and thought they may be helpful to others who are less familiar with this material.

The MOOC Alternative (Youtube Videos):

This is a series of short YouTube videos related to the Einstein summation convention. The first video starts with basic concepts of scalars, vectors, and tensors. With those building blocks, the subsequent videos go on to discuss summation over indices, dummy indices, and the Kronecker delta (all of which are important to the derivations we do in class).

Intro to indicial notation by Theo Hopman:

For a basic introduction, I found the first two sections of this document to be most helpful. The first section discusses free indices, summation over an index, and dummy indices. The second part discusses the Kronecker delta and Levi-Civita functions. I found the other sections interesting to read, although I am not sure how much of that material will be necessary for this course. Nevertheless, I would recommend having a quick look over them, particularly the examples in section 4.
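If you like to check your index gymnastics numerically, NumPy’s einsum function uses the same convention: repeated indices are summed, and indices appearing only once are free. A minimal sketch (the arrays here are just random placeholders for illustration):

import numpy as np

sigma = np.random.rand(3, 3)                      # stand-in 3x3 stress tensor
n = np.array([0.0, 0.0, 1.0])                     # a unit normal vector

trace = np.einsum('ii->', sigma)                  # sigma_ii: repeated i is summed (the trace)
traction = np.einsum('ij,j->i', sigma, n)         # t_i = sigma_ij n_j: j summed, i free
double_dot = np.einsum('ij,ij->', sigma, sigma)   # sigma_ij sigma_ij: both indices summed

print(trace, traction, double_dot)

Comparing these one-liners with the equivalent explicit loops is a quick way to convince yourself what each summation is actually doing.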