“The skill of writing is to create a context in which other people can think.”—Edwin Schlossberg

data files used: source("Losses.txt"); load("best.use.RData"); load("ncdataframe.RData")

required packages: maps, ggplot2, oce, sp, quantreg

source code: source("LossGui.R"); source("getTracks.R"); source("plot.windrose2.R"); source("getmax.R"); load("SavedObjects.R")

Hurricanes are capable of generating large financial losses. We begin with a model that estimates extreme losses conditional on climate covariates. We then describe a method for quantifying the relative change in potential losses over several decades.

Financial losses are directly related to fluctuations in hurricane climate. Environmental factors influence the frequency and intensity of hurricanes at the coast. Thus it is not surprising that these same environmental signals appear in estimates of total losses.

Economic damage is the loss associated with a hurricane's direct impact. Direct impact losses do not include losses from business interruption or other macroeconomic effects including demand surge and mitigation.

A normalization procedure adjusts the loss estimate to what it would be if the hurricane struck in a recent year, accounting for inflation and changes in wealth and population, plus a factor for changes in the number of housing units exceeding population growth. The method produces hurricane loss estimates that can be compared over time (Pielke et al. 2008).
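
The adjustment can be sketched as a product of factors. The function below follows the multiplicative form of Pielke et al. (2008), but the factor values are hypothetical, for illustration only.

```r
# Normalize a historical loss to a recent-year basis as a product of
# adjustment factors (Pielke et al. 2008 form). The factor values below
# are hypothetical, not actual published adjustment factors.
normalizeLoss = function(loss, inflation, wealth, housing) {
  loss * inflation * wealth * housing
}
normalizeLoss(loss = 1e+09, inflation = 7.2, wealth = 2.1, housing = 1.1)
```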

You focus on losses exceeding one billion U.S. dollars that have been adjusted to 2005 dollars. The loss data are available in Losses.txt in JAGS format. Input the data by typing

```
source("Losses.txt")
```

The log-transformed loss amounts are in y. The annual number of loss events is in L. The data cover the period 1900–2007. More details about these data are given in Jagger et al. (2011).

You begin by plotting a time series of the number of losses and a histogram of total loss per event.

```
par(las = 1, mgp = c(2, 0.4, 0), tcl = -0.3)
layout(matrix(c(1, 2), 1, 2, byrow = TRUE), widths = c(0.57, 0.43))
plot(1900:2007, L, type = "h", xlab = "Year", ylab = "Number of loss events")
grid()
points(1900:2007, L, type = "h")
mtext("a", side = 3, line = 1, adj = 0, cex = 1.1)
hist(y, main = "", ylab = "Frequency", xlab = "Loss amount [log10 2005 $]")
mtext("b", side = 3, line = 1, adj = 0, cex = 1.1)
```

The annual number of loss events varies between 0 and 4. There appears to be a slight increase in the number over time. The frequency of extreme losses per event decreases nearly linearly on the log scale.

A time series of the amount of loss indicates no trend. The mean value is the expected loss while the standard deviation is associated with the unexpected loss. Tail values are used to estimate the 'value-at-risk' (VaR) in financial instruments used by the insurance industry.

You assume a Poisson distribution adequately quantifies the number of extreme loss events, as there is no evidence of loss event clustering. You also assume a generalized Pareto distribution (GPD) quantifies the amount of losses above the billion-dollar threshold.
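
Under these assumptions the return level for a given return period has a closed form. A minimal sketch using the standard Poisson-GPD return-level formula; the parameter values here are hypothetical, not values fitted to the loss data.

```r
# Return level for a Poisson-GPD exceedance model: lambda is the annual
# exceedance rate, u the threshold, sigma and xi the GPD scale and shape.
# Parameter values below are hypothetical illustrations.
rlPoisGPD = function(T, lambda, u, sigma, xi) {
  u + sigma/xi * ((lambda * T)^xi - 1)
}
rlPoisGPD(T = 100, lambda = 0.8, u = 9, sigma = 1, xi = 0.2)
```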

The frequency of loss events and the amount of losses given an event vary with your covariates including sea-surface temperature (SST), the Southern Oscillation Index (SOI), the North Atlantic Oscillation (NAO) and sunspot number (SSN).

The model is written in JAGS code available in JAGSmodel3.txt. Posterior samples from the model are available through a graphical user interface (GUI). Start the GUI by typing

```
source("LossGui.R")
```

If you are using a Mac you will need to download and install the GTK+ framework. You will also probably need the **gWidgetstcltk** package.

Use the slider bars to vary the covariates. The covariates are scaled between $\pm$2 s.d. Select OK to bring up the return level graph. You can select the graph to appear as a bar or dot plot.

The GUI allows you to easily ask 'what if' questions related to future damage losses.

As one example, the figure shows the posterior predictive distributions of return levels for individual loss events using two climate scenarios.

```
load("SavedObjects.R")
rp = c(5, 10, 20, 50, 100, 200, 500)
cov1 = data.frame(ssn = 115, soi = -1.1, nao = 0.7, sst = -0.243)
post1 = jagsgpdposterior(newcov = cov1, yp = rp, x = jagSampleMatrix, thin = 10,
  N = NULL, u = 9, mask = parmaskgpd, yrange = c(8, 16), yl = 18, type = "Total Loss",
  mfrow = c(1, 1), etafun = c(exp, exp, identity))
```

```
cov2 = data.frame(ssn = 9, soi = -1.1, nao = -1.4, sst = 0.268)
post2 = jagsgpdposterior(newcov = cov2, yp = rp, x = jagSampleMatrix, thin = 10,
  N = NULL, u = 9, mask = parmaskgpd, yrange = c(8, 16), yl = 18, type = "Total Loss",
  mfrow = c(1, 1), etafun = c(exp, exp, identity))
```

```
clrs = c("red", "darkred", "black", "darkred", "red")
qtls = c(3, 4, 5, 6, 7) # 5, 25, 50, 75, 90 percentiles
par(mfrow = c(1, 2), las = 1, mgp = c(2, 0.4, 0), tcl = -0.3)
plot(rp, rep(9, 7), xlim = c(3, 700), ylim = c(9, 14), type = "n", bty = "n",
  log = "x", ylab = "Return level [log$_{10}$ 2005 \\$]", xlab = "Return period [yr]",
  cex.axis = 1, cex.lab = 1)
grid(equilogs = FALSE)
for (i in 1:length(rp)) {
  points(rep(rp[i], 5), post1$quantiles[qtls, 1, i], col = clrs, pch = 15,
    cex = 0.8)
}
mtext("a", side = 3, line = 0, adj = 0, cex = 1.1)
plot(rp, rep(9, 7), xlim = c(3, 700), ylim = c(9, 14), type = "n", bty = "n",
  log = "x", ylab = "Return level [log$_{10}$ 2005 \\$]", xlab = "Return period [yr]",
  cex.axis = 1, cex.lab = 1)
grid(equilogs = FALSE)
for (i in 1:length(rp)) {
  points(rep(rp[i], 5), post2$quantiles[qtls, 1, i], col = clrs, pch = 15,
    cex = 0.8)
}
mtext("b", side = 3, line = 0, adj = 0, cex = 1.1)
```

For each return period, the black square shows the median and the red squares show the 0.05, 0.25, 0.75, and 0.9 quantile values from the posterior samples. The left panel shows the losses when SST = $-$0.243\( ^\circ \) C, NAO = $+$0.7 s.d., SOI = $-$1.1 s.d., and SSN = 115. The right panel shows the losses when SST = $+$0.268\( ^\circ \) C, NAO = $-$1.4 s.d., SOI = $-$1.1 s.d., and SSN = 9.

The first scenario is characterized by covariates related to fewer and weaker hurricanes and the second scenario is characterized by covariates related to more and stronger hurricanes.

The loss distribution changes substantially. Under the first scenario, the median return level of a 50-year loss event is $18.2 bn; this compares with a median return level of a 50-year loss event of $869.1 bn under the second scenario.

The results are interpreted as a return level for a return period of 50 years with the covariate values as extreme or more extreme than the ones given (about one s.d. in each case). With four independent covariates and an annual probability of about 16% that a covariate is more than one standard deviation from the mean, the chance that all covariates will be this extreme or more in a given year is less than 0.1%.
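
This back-of-the-envelope probability can be checked directly, assuming each covariate is approximately standard normal and independent of the others:

```r
# Chance a covariate is at least one s.d. from its mean in the specified
# direction (one-sided), and the chance all four are at least this extreme.
p1 = pnorm(-1)
p1     # about 0.16
p1^4   # about 0.00063, i.e., less than 0.1%
```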

The direction of change is consistent with a change in U.S. hurricane activity using the same scenarios and in line with your understanding of hurricane climate variability in this part of the world.

The model is developed using aggregate loss data for the entire United States susceptible to Atlantic hurricanes. It is possible to model data representing a subset of losses capturing a particular insurance portfolio, for example. Moreover, since the model uses MCMC sampling, it can be extended to include error estimates on the losses. It can also make use of censored data, where you know that losses exceeded a certain level but you do not have information about the actual loss amount.

Hazard risk affects the profits and losses of companies in the insurance industry. Some of this risk is transferred to the performance of securities traded in financial markets. This implies that early reliable information from loss models is useful to investors.

A loss model consists of three components: cyclone frequency and intensity, vulnerability of structures, and loss distributions. They are built using a combination of numerical, empirical, and statistical approaches. The intensity and frequency components rely on expanding the historical cyclone data set. This is usually done by resampling and mixing cyclone attributes to generate thousands of synthetic cyclones.
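
The resampling step can be sketched in a few lines. The five-cyclone data frame here is hypothetical; a real catalogue would resample from the full historical attribute set.

```r
# Expand a (hypothetical) historical catalogue by independently resampling
# cyclone attributes to generate synthetic cyclones.
set.seed(1)
historical = data.frame(Wmax = c(40, 55, 70, 62, 48),   # intensity [m/s]
                        speed = c(5, 7, 4, 6, 8))       # forward speed [m/s]
n = 1000
synthetic = data.frame(Wmax = sample(historical$Wmax, n, replace = TRUE),
                       speed = sample(historical$speed, n, replace = TRUE))
nrow(synthetic)   # 1000 synthetic cyclones mixed from 5 historical ones
```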

The approach is useful for estimating the probable maximum loss (PML), which is the expected value of all losses greater than some high value. The value is chosen based on a quantile of the loss, say \( L(\tau) \), where \( \tau=1-1/\hbox{RP} \) and RP is the return period of an event. The PML can be estimated on a local exposure portfolio or a portfolio spread across portions of the coast.
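
A minimal sketch of this definition, estimating the PML as a conditional tail expectation from simulated losses (the lognormal draws are hypothetical, not portfolio data):

```r
# PML as the expected loss given exceedance of the quantile L(tau),
# with tau = 1 - 1/RP. Losses here are simulated for illustration only.
set.seed(1)
losses = rlnorm(10000, meanlog = 21, sdlog = 1)   # hypothetical annual losses [$]
RP = 100                                          # return period [yr]
tau = 1 - 1/RP
Ltau = quantile(losses, probs = tau)
mean(losses[losses > Ltau])                       # the PML estimate
```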

The PML is used to define the amount of money that a reinsurer or primary insurer should have available to cover a given portfolio. In order to estimate the 1-in-100-year PML, a catalogue containing at least an order of magnitude more synthetic cyclones than the historical data set is required.

Your model above uses only the set of historical losses. By doing so it allows you to anticipate future losses on seasonal and multi-year time scales without relying on a catalogue of synthetic cyclones. The limitation, however, is that the losses must be aggregated over a region large enough to capture a sufficient number of loss events. The aggregation can be geographical, like Florida, or across a large and diverse enough set of exposures.

Your model is also limited by the quality of the historical loss data. Although the per-storm damage losses have been adjusted for increases in coastal population, the number of loss events has not. A cyclone making landfall early in the record in a region devoid of buildings did not generate losses, so there is nothing to adjust. This is not a problem if losses from historical hurricanes are estimated using constant building exposure data.

One critical variable in a loss model is hurricane intensity. Here we show you a way to adjust this variable for climate change. The methodology focuses on a statistical model for cyclone intensity trends, the output of which can be used as input to a hazard model. Details are found in Elsner et al. (2011).

You begin by determining the historical cyclones relevant to your location of interest. This is done with the get.tracks() function in **getTracks.R**.

Here your interest is a location on Eglin Air Force Base (EAFB) with a latitude of 30.4\( ^\circ \) N and longitude of $-$86.8\( ^\circ \) E. You choose a search radius of 100 nmi as a compromise between having enough cyclones to fit a model and having only those that are close enough to cause damage. You also specify a minimum intensity of 33 m s\( ^{-1} \).

```
load("best.use.RData")
source("getTracks.R")
lo = -86.8
la = 30.4
r1 = 100
loc = data.frame(lon = lo, lat = la, R = r1)
eafb = get.tracks(x = best.use, locations = loc, umin = 64, N = 200)
```

```
## Warning: We found only 48 tracks
```

Tracks meeting the criteria are given in the tracks list of object eafb as a data frame containing the attributes of individual cyclones from best.use.

Your interest is restricted further to a segment of each track near the coast. That is, you subset your tracks keeping only the records when the cyclone is near your location. This is done using the getTrackSubset() function and specifying a radius of 300 nmi for clipping the tracks. Convert the translation speed (maguv) to m s\( ^{-1} \).

```
r2 = 300
eafb.use = getTrackSubset(tracks = eafb$tracks, lon = lo, lat = la, radius = r2)
eafb.use$maguv = eafb.use$maguv * 0.5144
```

The output is a reduced data frame containing cyclone locations and attributes for track segments corresponding to your location. Finally you remove tracks that have fewer than 24 hr of attributes.

```
x = table(eafb.use$Sid)
keep = as.integer(names(x[x >= 24]))
eafb.use = subset(eafb.use, Sid %in% keep)
```

You plot the tracks on a map reusing your code from earlier.

```
t1 = split(eafb.use, eafb.use$Sid)
require(maps)
par(las = 1, mgp = c(2, 0.4, 0), tcl = -0.3)
map("world", ylim = c(22, 37), xlim = c(-95, -77))
for (i in 1:length(t1)) {
  Lo = t1[[i]]$lon
  La = t1[[i]]$lat
  n = length(Lo)
  lines(Lo, La, lwd = 2)
  arrows(Lo[n - 1], La[n - 1], Lo[n], La[n], lwd = 2, length = 0.1)
}
points(lo, la, col = "red", pch = 19)
box()
```

The plot shows a uniform spread of cyclones approaching EAFB from the south, with about as many cyclones passing to the west as to the east.

Your catalogue of historical cyclones affecting EAFB contains 47 hurricanes. You summarize various attributes of these hurricanes with plots and summary statistics.

Your interest is in attributes as the cyclone approaches land, so you first subset on the land marker (M).

```
sea = subset(eafb.use, !M)
```

A graph showing the distributions of translational speed and approach direction is plotted by typing

```
require(oce)
source("plot.windrose2.R")
u = -sea$maguv * sin(sea$diruv * pi/180)
v = -sea$maguv * cos(sea$diruv * pi/180)
wr = as.windrose(u, v, dtheta = 15)
par(mfrow = c(1, 2), mar = c(5, 4, 3, 2) + 1, las = 1, mgp = c(2, 0.4, 0), tcl = -0.3)
hist(sea$maguv, main = "", freq = FALSE, xlab = "Translational speed [m/s]")
library(MASS)
fit = fitdistr(sea$maguv, "gamma")
sh = fit$estimate[1]
rt = fit$estimate[2]
curve(dgamma(x, shape = sh, rate = rt), from = 0, to = 30, col = "red", lwd = 2,
  add = TRUE)
mtext("a", side = 3, line = 0, adj = 0, cex = 1.2)
plot.windrose2(wr, type = "count", convention = "meteorological")
mtext("b", side = 3, line = -1, adj = 0, cex = 1)
```

The smoothed curve on the histogram is a gamma density fit using the fitdistr() function (**MASS** package). The wind rose function is from the **oce** package (Kelley 2011). The median forward speed of approaching cyclones is 3.9 m s\( ^{-1} \) and the most common approach direction is from the southeast.

The correlation between forward speed and cyclone intensity is 0.3. For the subset of approaching cyclones that are intensifying this relationship tends to get stronger with increasing intensity.

The evidence supports the idea of an intensity limit for hurricanes moving too slowly, due to the feedback of a cold ocean wake. This is likely even more the case in regions where the ocean mixed layer is shallow.

Your historical catalogue of 47 cyclones is too small to provide an estimate of changes over time. Instead you examine the set of hurricanes over the entire Gulf of Mexico. Changes to hurricanes over this wider region will likely be relevant to changes at EAFB.

Subset the cyclones within a latitude by longitude grid covering the region using the period 1900 through 2009, inclusive.

```
llo = -98
rlo = -80
bla = 19
tla = 32
sy = 1900
ey = 2009
gulf.use = subset(best.use, lon >= llo & lon <= rlo & lat >= bla & lat <= tla &
  Yr >= sy & Yr <= ey)
```

Next find the per cyclone maximum intensity for cyclones while in the Gulf of Mexico.

```
source("getmax.R")
GMI.df = get.max(gulf.use, maxfield = "WmaxS")
```

You use the July SST over the Gulf of Mexico as your covariate for modeling the changing intensity of Gulf cyclones. The gridded SST data are in ncdataframe.RData, where the column names are the year and month concatenated as a character string prefixed by Y and M (e.g., Y1900M07).

First create a character vector of the column names.

```
se = sy:ey
cNam = paste("Y", formatC(se, 1, flag = "0"), "M07", sep = "")
```

Then load the data, create a spatial points data frame, and extract the July values for the North Atlantic.

```
load("ncdataframe.RData")
require(sp)
coordinates(ncdataframe) = c("lon", "lat")
sstJuly = ncdataframe[cNam]
```

Next average the SST values in your Gulf of Mexico grid box. First create a matrix from your vertex points of your grid box. Then create a spatial polygons object from the matrix and compute the regional average using the over() function.

Next make a data frame from the resulting vector and the structure data set of corresponding years. Finally merge this data frame with your wind data frame from above.

```
bb = c(llo, bla, llo, tla, rlo, tla, rlo, bla, llo, bla)
Gulfbb = matrix(bb, ncol = 2, byrow = TRUE)
Gulf.sp = SpatialPolygons(list(Polygons(list(Polygon(Gulfbb)), ID = "Gulfbb")))
SST = over(x = Gulf.sp, y = sstJuly, fn = mean)
SST.df = data.frame(Yr = sy:ey, sst = t(SST)[, 1])
GMI.df = merge(SST.df, GMI.df, by = "Yr")
```

The resulting data frame has 451 rows, one for each hurricane in the Gulf of Mexico region, with the intensity given by the fastest wind speed while in the domain. The spatial distribution favors locations just off the coast and along the eastern boundary of the domain.

Theory, models, and data provide support for estimating changes to cyclone intensity over time. The heat-engine theory argues for an increase in the maximum potential intensity of hurricanes with increases in sea-surface temperature.

Climate model projections indicate an increase in average intensity of about 5–10% globally by the late 21st century with the frequency of the most intense hurricanes likely increasing even more. Statistical models using a set of homogeneous tropical cyclone winds show the strongest hurricanes getting stronger with increases as high as 20% per \( ^\circ \) C for the strongest hurricanes.

Your next step fits a model for hurricane intensity change relevant to your catalogue of cyclones affecting EAFB. The correlation between per cyclone maximum intensity and the corresponding July SST is a mere 0.04, but it increases to 0.11 for the set of cyclones above 64 m s\( ^{-1} \).

You use a quantile regression model to estimate the relationship between intensity and SST. Save the wind speed quantiles and run a quantile regression of per cyclone maximum wind speed (intensity) on SST.

Save the trend coefficients and standard errors from the model for each quantile.

```
tau = seq(0.05, 0.95, 0.05)
qW = quantile(GMI.df$WmaxS, probs = tau)
n = length(qW)
require(quantreg)
model = rq(WmaxS ~ sst, data = GMI.df, tau = tau)
trend = coefficients(model)[2, ]
coef = summary(model, se = "iid")
```

```
## Warning: Solution may be nonunique
```

```
## Warning: Solution may be nonunique
```

```
ste = numeric()
for (i in 1:n) {
  ste[i] = coef[[i]]$coefficients[2, 2]
}
```

Next use a local polynomial regression to model the trend as a change in intensity per \( ^\circ \) C and plot the results. The regression fit at intensity \( w \) is made using points in the neighborhood of \( w \), weighted inversely by their distance to \( w \). The neighborhood size is a constant 75% of the points (the loess() default span).

```
trend = trend/qW * 100
ste = ste/qW * 100
model2 = loess(trend ~ qW)
pp = predict(model2, se = TRUE)
xx = c(qW, rev(qW))
yy = c(pp$fit + 2 * pp$se.fit, rev(c(pp$fit - 2 * pp$se.fit)))
plot(qW, trend, pch = 20, ylim = c(-20, 30), xlab = "Intensity (m/s)", ylab = "Percent Change (per C)")
polygon(xx, yy, col = "gray", border = "gray")
for (i in 1:n) segments(qW[i], trend[i] - ste[i], qW[i], trend[i] + ste[i])
points(qW, trend, pch = 20)
lines(qW, fitted(model2), col = "red")
abline(h = 0, lty = 2)
```

Points are quantile regression coefficients of per cyclone maximum intensity on SST and the vertical bars are the standard errors. The red line is a local polynomial fit through the points and the gray band is the 95% confidence band around the predicted fit. There is little change in intensity for the weaker cyclones but there is a large, and for some quantiles, statistically significant upward trend in intensity for the stronger hurricanes.

Next you quantify the trend in SST over time using linear regression of SST on year.

```
model3 = lm(sst ~ Yr, data = SST.df)
model3$coef[2] * 100
```

```
## Yr
## 0.6788
```

The upward trend is 0.68\( ^\circ \) C per century, explaining 36% of the variation in July SST over the period of record. The magnitude of warming you find in the Gulf of Mexico is consistent with reports of between 0.4 and 1\( ^\circ \) C per century for warming of the tropical oceans (Deser et al. 2010).
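
The 36% figure is the coefficient of determination of the linear fit, extracted with summary(model3)$r.squared. The sketch below shows the extraction on synthetic data (the series here is simulated, not the July SST record):

```r
# Extract the fraction of variance explained (R^2) from a linear trend fit.
# Synthetic data for illustration; for the SST trend use summary(model3)$r.squared.
set.seed(1)
Yr = 1900:2009
sst = 0.0068 * (Yr - 1900) + rnorm(length(Yr), sd = 0.25)
fit = lm(sst ~ Yr)
summary(fit)$r.squared
```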

Your estimate of the per \( ^\circ \) C SST increase in hurricane intensities (as a function of quantile intensity) together with your estimate of the SST warming is used to estimate the increase in wind speeds for each hurricane in your catalogue.

You assume your catalogue is a representative sample of the frequency and intensity of future hurricanes, but that the strongest hurricanes will be stronger due to the additional warmth expected. The approach is similar to that used by Mousavi et al. (2011) to estimate the potential impact of hurricane intensification and sea-level rise on coastal flooding.

The equation for increasing wind speeds \( w \) of hurricanes in your catalogue of hurricanes affecting EAFB is given by \[ w_{2110} = [1 + \Delta w(q) \cdot \Delta\hbox{SST}] \cdot w \] where \( w_{2110} \) is the wind speed one hundred years from now, \( \Delta w(q) \) is the fractional change in wind speed per degree change in SST as a function of the quantile speed (red curve), and \( \Delta\hbox{SST} \) is the per-century trend in SST. You certainly do not expect an extrapolation (linear or otherwise) to accurately represent the future, but the method provides an estimate of what Gulf of Mexico hurricanes might look like, on average, during the 22nd century.
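
The equation can be applied directly to the catalogue winds. A minimal sketch, assuming \( \Delta w(q) \) is supplied as a percent change per \( ^\circ \) C (e.g., read off the local polynomial fit) and using the 0.68\( ^\circ \) C per-century SST trend estimated above; the 15% value is hypothetical, standing in for a fitted quantile trend.

```r
# w2110 = [1 + dw(q) * dSST] * w, with dw(q) expressed in percent per deg C.
# The dwPct = 15 value is hypothetical, not a fitted coefficient.
adjustWind = function(w, dwPct, dSST) (1 + dwPct/100 * dSST) * w
adjustWind(w = 70, dwPct = 15, dSST = 0.68)   # a 70 m/s hurricane becomes 77.14 m/s
```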

Additional cyclone vitals including radius to maximum winds, a wind decay parameter, and minimum central pressure need to be added to the catalogue of cyclones to make them useful for storm surge and wind-field components (Vickery et al. 2006) like those included in the U.S. Federal Emergency Management Agency (FEMA) HAZUS model.

Lacking evidence that these vitals will change in the future, you use historical values; otherwise you would adopt a method similar to the one used above for cyclone intensity. In the end you have two cyclone catalogues, one representing the contemporary view and the other a view of the future: a view that is consistent with the current evidence and theory of cyclone intensity and that aligns with the consensus view on anthropogenic global warming.

This concludes your study of hurricane climatology using modern statistical methods. We trust your skills will help you make new reproducible discoveries in the fascinating field of hurricane climate. Enjoy.