# Manipulating Elevation/Bathymetry Profiles with Mathematica

### Introduction

One of the earliest posts that I made on this site described how to use Mathematica to create elevation/bathymetry profiles for the Earth, at a specified latitude. I use those profiles to have students in my 100-level non-science majors course on earthquakes and society become familiar with Earth’s surface topography and its relationship to plate tectonic features. We are now studying tsunami, and one of the activities that we complete is an estimate of the travel time from an earthquake in the eastern Atlantic to a set of cities along the east coast of the United States. The source is located near the assumed source region of the 1755 Lisbon Earthquake.

We compute the rough speed of the tsunami, $$c$$, using the simple formula $$c = \sqrt{d \cdot g}$$, where $$d$$ is the mean depth of the water along the great-circle arc connecting the source region and the point of interest, and $$g$$ is the acceleration of gravity (about $$10~m/s^2$$). In large classes I provide a map with the depth of the ocean contoured and great-circle arcs connecting the source and the locations of interest, and students estimate the mean depth along each path. They generally do OK, but estimating the mean depth is a challenge (a worthwhile one). In my smaller courses, which are taught in computer laboratories, we can do the analysis using Mathematica notebooks. Students can then plot the great-circle arcs, look at the profile, visually estimate the mean depth, and then compute it.
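For a rough sense of the numbers, the formula can be evaluated in a few lines of Python. The 4000 m mean depth and 5500 km path length here are hypothetical stand-ins for illustration, not values from the class activity:

```python
import math

g = 10.0          # acceleration of gravity, m/s^2 (rounded, as in the text)
d = 4000.0        # hypothetical mean ocean depth along the path, m
path_m = 5.5e6    # hypothetical great-circle path length, m

c = math.sqrt(d * g)          # tsunami speed: sqrt(40000) = 200 m/s
hours = path_m / c / 3600.0   # travel time in hours

print(f"speed = {c:.0f} m/s, travel time = {hours:.1f} hours")
# speed = 200 m/s, travel time = 7.6 hours
```

A 200 m/s tsunami crosses an ocean basin in hours, not minutes, which is the point the exercise drives home.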

### The Code

Here’s the code listing to extract the elevation/bathymetry along an arc connecting points 0 and 1, each specified by a latitude and longitude (e.g., lat1, lon1):

(* city *)
lat0 = 40.66;
lon0 = -73.94;
(* eq *)
lat1 = 36;
lon1 = -10;
(* get the great-circle arc and the distance along the arc *)
gcarc = GeoPath[{{lat0, lon0}, {lat1, lon1}}, "GreatCircle"];
gcarcDistance = GeoDistance[{{lat0, lon0}, {lat1, lon1}}, UnitSystem -> "Metric"]
(**)
p0 = Framed[GeoGraphics[{Red, AbsoluteThickness[3], gcarc},
GeoRange -> {{0, 70}, {-120, 20}},
GeoProjection -> "CylindricalEquidistant", GeoZoomLevel -> 6,
GeoBackground -> "ReliefMap", ImageSize -> Full]];
(* extract the elevations along the great-circle arc *)
profile = GeoElevationData[gcarc, Automatic, "GeoPosition", GeoZoomLevel -> 4];
(* extract the pts = {lat,lon,elevation} from the results *)
pts = profile[[1]];
depths = #[[3]] & /@ pts;
distances = QuantityMagnitude[GeoDistance[{pts[[1]][[1 ;; 2]], #[[1 ;; 2]]},
UnitSystem -> "Metric"]] & /@ pts;
avg = Mean[depths];
pts = Transpose[{distances, depths}];
(* plot the elevations along the line and a dashed line for sea level *)
label =  "Mean depth along the path: " <> ToString[Round[-avg, 5]] <> " m";
(**)
p1 = ListPlot[pts, Joined -> True,
Frame -> {True, True, False, False}, Axes -> False,
Filling -> Bottom, ImageSize -> Full,
PlotLabel -> Style[label, 18], AspectRatio -> 0.4, BaseStyle -> 14,
Prolog -> {Blue, Dashed,
Line[{{distances[[1]], 0}, {distances[[-1]], 0}}], Black,
Line[{{distances[[1]], avg}, {distances[[-1]], avg}}]}];
(**)
Column[{p0, p1}] // Print
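If you want the same distance bookkeeping outside Mathematica, here is a minimal Python sketch of the great-circle (haversine) distance that GeoDistance computes, assuming a spherical Earth of radius 6371 km. The sketch, including the function name, is my own illustration, not part of the notebook:

```python
import math

def great_circle_km(lat0, lon0, lat1, lon1, radius_km=6371.0):
    """Haversine great-circle distance on a spherical Earth, in km."""
    p0, p1 = math.radians(lat0), math.radians(lat1)
    dphi = math.radians(lat1 - lat0)
    dlam = math.radians(lon1 - lon0)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p0) * math.cos(p1) * math.sin(dlam / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# the city and earthquake coordinates from the listing above
print(round(great_circle_km(40.66, -73.94, 36.0, -10.0)), "km")
```

A spherical-Earth distance like this differs from Mathematica's ellipsoidal GeoDistance by a fraction of a percent, which is plenty of accuracy for estimating tsunami travel times.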

#### The Output

The graphics that result are shown below – the dashed blue line is sea level, the dashed black line is the mean value. A minor goal of the computation is to have students see the relationship between the mean depth and the profile.

# Mapping Countries, States, & Cities with Mathematica

The last few updates to Mathematica have added some useful cartographic tools for map making. Some of my earlier posts included simple maps. For any serious cartography, you need to be able to plot the boundaries of the various “administrative” units that make up a region – these can be states, provinces, counties, cities, etc. Mathematica, like Python, can do just about anything that you set your mind to, so if you have geographic information in existing files, you can parse it and display it with the GeoGraphics command. In this note I show how to acquire information on basic administrative units – in particular, their coordinates – so they can be displayed on the maps you use to show your own data (earthquake locations, seismic stations, etc.).

When I make a map, I usually have a set of bounds that define the region of interest (a habit strongly influenced by the Generic Mapping Tools), specified by the west, east, south, and north edges. Let’s start with an example that defines a geographic rectangle and requests a list of the countries that overlap our region. In the example I was exploring seismic stations available in northeast China, so the box covers that region. Instead of plotting seismometer locations, I’ll plot country borders and large-city locations. We’ll need the country list, the country polygons, and a list of large cities and their populations.

(* define the geographic region of interest *)
west = 112;
east = 137;
south = 35;
north = 51;

(* acquire a list of countries that are contained in the region *)
regionalCountries =
GeoEntities[Rectangle[GeoPosition[{south, west}], GeoPosition[{north, east}]],
"Country"]

(* acquire the coordinate polygons for each country in the list *)
regCountryPolygons = CountryData[#, "Polygon"] & /@ regionalCountries;

regionalCities = GeoEntities[Rectangle[GeoPosition[{south, west}], GeoPosition[{north, east}]],
"City"]
(* create a list of large cities in the region -
minimum population is 1,000,000 *)
largeRegionalCities = Select[regionalCities,
QuantityMagnitude[CityData[#, "Population"]] >= 1000000 &];
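The Select call is just a population filter. As an aside, the same idea in Python, with a few hypothetical (name, population) pairs standing in for the CityData entities (the populations are rough illustrative values, not queried data), is a one-line comprehension:

```python
# hypothetical (name, population) pairs; rough illustrative values only
cities = [("Beijing", 21_500_000), ("Shenyang", 8_300_000), ("Dandong", 860_000)]

# keep cities with at least one million people
large = [name for name, pop in cities if pop >= 1_000_000]
print(large)  # ['Beijing', 'Shenyang']
```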

You can see what properties are available from CityData using

In[1]:= CityData["Properties"]

Out[1]= {"AlternateNames", "Coordinates", "Country", "Elevation", "FullName",
"Latitude", "LocationLink", "Longitude", "Name", "Population", "Region",
"RegionName", "StandardName", "TimeZone"}

Here’s what the output in the notebook looks like. The “data” requests return Mathematica “Entities”, which are information objects that can be queried for additional information, as we did when asking the countries for their polygons and the cities for their populations. Entities are represented by the orange rounded rectangles in the notebook.

# Understanding FFT Scaling with Matlab (or Python, or…)

The Fourier Transform is one of the most frequently used computational tools in earthquake seismology. Using an FFT requires some understanding of the way the information is encoded (frequency ordering, complex values, real values, etc.), and these details are generally well documented in the various software packages used in the field. I’ll assume that you have a good handle on these issues and that you know the basic idea behind forward and inverse transforms. I want to focus on an aspect of FFT use that often confuses students: the scaling provided by the transforms. Scaling can be confusing because, to make the functions more efficient and flexible, the scaling is often omitted, or it is implicitly included by assuming that you plan to forward and then inverse transform the same signal.

In seismology we usually start with a signal that has a specified sample rate, $$\delta t$$, and a length, $$N$$. I’ll assume that we are dealing with a signal for which $$N$$ is a power of two, which used to be a requirement whenever we used FFTs but matters less now that computers are much faster. FFT routines don’t use a specific value of the sample rate, $$\delta t$$, at all, but they don’t always completely ignore it – they sometimes include $$\delta t$$ implicitly in their scaling. In most cases, when you compute a forward transform, you have to scale the result by the sample rate to get the correct values (this $$\delta t$$ arises from the $$\delta t$$ in the Fourier Transform integral that the DFT approximates).

Consider the case in Matlab. We’ll construct a unit-area “spike” by creating a signal with a single non-zero value. So that we can keep track of the physical units – which are also ignored in the computation, but are essential when doing science – I’ll assume that our signal is a displacement seismogram with units of meters and that the units of $$\delta t$$ are seconds. To create a signal with unit area, we must set the amplitude equal to $$1 / \delta t ~\mbox{meters}$$. That is, the spike amplitude has the same numerical value as $$1/\delta t$$, but units of length. For a spike with amplitude $$1/\delta t$$, the area under the curve in the time domain is (using the formula for the area of a triangle with base $$2\delta t$$):

$Area = \frac{1}{2} \cdot 2\delta t~\mbox{[s]} \cdot \frac{1}{\delta t}~\mbox{[m]} = 1~\mbox{[m-s] .}$

The Fourier Transform of a unit area spike should have a value of unity. Let’s see what we get in Matlab.

>> dt = 0.1;
>> s = [1/dt,0,0,0,0,0,0,0];
>> shat = fft(s) % amplitudes are wrong without the correct scaling
shat = 10 10 10 10 10 10 10 10

The spectrum is a factor of $$1/\delta t$$ too large. We get the correct result by multiplying by $$\delta t$$.

>> shat = fft(s) * dt % you must scale by dt to get the correct values
shat = 1 1 1 1 1 1 1 1

When we inverse transform, we have to account for the $$\delta f$$ in the inverse Fourier Transform integral. That means we multiply by $$\delta f$$. For a signal of length $$N$$ and sample rate $$\delta t$$, the frequency sampling interval, $$\delta f$$, is

$\delta f = \frac{1}{N \cdot \delta t}~.$

Here’s the tricky part: Matlab multiplies the inverse transform by $$1/N = \delta t \cdot \delta f$$. So we have to correct for that, either by dividing by $$\delta t$$ or by simply removing the $$1/N$$ factor and applying the $$\delta f$$, which makes things easier to understand. In the summary below, I explicitly multiply the result of the inverse transform (ifft) by $$N$$ to remove Matlab’s scaling, then scale by $$\delta f$$.

dt = 0.1;
N = 8;
df = 1/(N*dt);
% define a unit area signal
s = [1/dt,0,0,0,0,0,0,0]
% forward transform - scale by dt
shat = fft(s) * dt
% inverse transform - remove 1/N factor, then scale by df
ifs = (N * ifft(shat)) * df
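The same check works in any language. Here is a dependency-free Python version that mirrors the Matlab summary above; I use a hand-rolled DFT so that no library scaling is hidden (slow, but fine for N = 8):

```python
import cmath

dt = 0.1
N = 8
df = 1 / (N * dt)

# unit-area spike: amplitude 1/dt in the first sample
s = [1 / dt] + [0.0] * (N - 1)

def dft(x):
    """Unscaled forward DFT (same convention as Matlab's fft)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(x):
    """Unscaled inverse DFT (no 1/N factor, unlike Matlab's ifft)."""
    n = len(x)
    return [sum(x[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

shat = [v * dt for v in dft(s)]     # forward: scale by dt -> flat spectrum of 1's
ifs = [v * df for v in idft(shat)]  # inverse: scale by df -> recovers the 1/dt spike
```

Because the inverse here carries no built-in 1/N, the δf scaling is applied directly, with nothing to undo first.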


### Summary

The first thing that I do when I start using a new FFT is to test the amplitudes using a unit-area delta-function approximation (which is really a triangle). The spectrum of the unit-area time-domain signal must have a value of unity at zero frequency, and if the signal is a spike, the spectrum should be flat. If you think this is not that critical, think about deconvolutions, earthquake source-spectra calculations, etc. To get correct amplitudes, you have to apply the scaling yourself (unless you forward and then inverse transform the same signal without applying any operations involving physical quantities in between).

There’s nothing special about Matlab here – you can use the same tests to understand the scaling in Mathematica, Python, Fortran, C, JavaScript, …