Chapter 10: Time Series Models

“A big computer, a complex algorithm and a long time does not equal science.”—Robert Gentleman

data files used: load("annual.RData"); load("best.use.RData"); read.table("SST.txt", header = TRUE)
required packages: gamlss, xtable, chron, reshape, ggplot2, network, sna, igraph
source code: source("get.visibility.R")

Here we consider time series models. A time series is an ordered sequence of numbers with respect to time. In climatology you encounter time-series data in a format given by \[ \{h_t\}_{t=1}^T = \{h_1, h_2, \ldots, h_T\} \] where the time \( t \) is over a given season, month, week, or day, and \( T \) is the length of the series. The aim is to understand the underlying physical processes.

Trend is an example. Often by simply looking at a time series you can pick out a trend that tells you that the process generating the data is changing.

A single time series gives you one sample from the process. Yet under the ergodic hypothesis a time series of infinite length contains the same information (loosely speaking) as the collection of all possible series of finite length.

In this case you can use your single series to learn about the nature of the process. This is analogous to spatial interpolation, where the variogram is computed under the assumption that the rainfall field is stationary.

Here we consider a selection of techniques and models for time series data. We begin by showing you how to overlay plots as a tool for exploratory analysis. This is commonly used in paleoclimate studies (wiggle matching).

We demonstrate large variation in hurricane counts arising from a constant rate process. We then show techniques for smoothing your series.

We continue with a change-point model and techniques for decomposing a continuous-valued series.

We conclude with a unique way to create a network graph from a time series of counts and suggest a new definition of climate anomalies.

Time Series Overlays

A plot showing your variables on a common time axis is a useful exploratory graph. Values from different series are scaled to have the same relative range so the covariation in the variables can be visually compared.

Here you do this with hurricane counts and sea-surface temperature (SST). Begin by loading annual.RData then subset the data for years starting with 1900 and rename the year column.

dat = subset(annual, Year >= 1900)
colnames(dat)[1] = "Yr"

Plot the basin-wide hurricane count by year, then plot SST data from the North Atlantic.

You do this with the new=TRUE argument in the par() function, which prevents the next call to plot() from erasing the current graph so the second series is drawn on top of the first.

You turn off the axis labels in the second plot call and then add them using the axis() function where 4 is for the vertical axis on the right side of the graph. Axes are numbered clockwise starting from the bottom of the plot. The axis is labeled using mtext() function.

plot(dat$Yr, dat$B.1, xlab = "Year", ylab = "Hurricane Count", lab = c(10, 7, 
    20), type = "h", lwd = 2)
par(new = TRUE)
plot(dat$Yr, dat$sst, type = "l", col = "red", xaxt = "n", yaxt = "n", xlab = "", 
    ylab = "", lwd = 2)
mtext(expression(paste("SST [", degree, "C]")), side = 4, line = 2.5, las = 0)
legend("topleft", col = c("black", "red"), lty = 1, legend = c("Hurricanes", 
    "SST"))

plot of chunk timeSeriesSSThurricaneConts

The correspondence between the two series is clear. There tend to be more hurricanes in periods of high SST and fewer hurricanes in periods of low SST. You retain the distinction between the series by using bars for the counts and lines for the SST values.

Discrete Time Series

Hurricane counts arise from a rate process that is described as Poisson. More precisely, the number of occurrences over an interval is described by a Poisson distribution with a rate parameter proportional to the time interval.

And the counts in non-overlapping intervals are independent. Since the rate of hurricanes can change from day to day and from year to year, you assume the process has a rate that is a function of time (\( \lambda(t) \)).

Note that if you are interested in modeling yearly counts you are really interested in modeling the underlying yearly rate (more precisely, the integral of the underlying instantaneous rate over a year).

You can integrate the rate over any time period and obtain the hurricane count over that period. For example, how many more hurricanes can you expect during the remainder of the season given a date of September 15th?
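This idea can be sketched with a simple calculation (ours, not from the text): assume a constant rate of 6 hurricanes per year spread uniformly over a 1 June to 30 November season. The expected count after 15 September is then the annual rate times the fraction of the season remaining.

```r
# Minimal sketch with an assumed constant rate; the season bounds are
# illustrative, not the definition used elsewhere in the chapter.
lambda = 6                                               # assumed annual rate
season = as.numeric(as.Date("2010-11-30") - as.Date("2010-06-01"))   # season length in days
left = as.numeric(as.Date("2010-11-30") - as.Date("2010-09-15"))     # days remaining
lambda * left/season                                     # expected remaining count
```

With a time-varying rate \( \lambda(t) \) you would replace the uniform fraction with the integral of the estimated daily rate over the remaining days.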

Here you examine methods for estimating the rate process. You first consider running averages to get a smoothed estimate of the annual rate. You then consider a change-point model where the rate is constant over periods of time, but changes abruptly between periods.

Running averages and change-point models are useful for describing time series, but are less useful for forecasting. You begin with a look at interannual count variability.

Count variability

The time series of hurricane counts appears to have large interannual variability. But this might simply be a consequence of the randomness in the counts given the rate.

In fact, large variations in small-count processes are often misdiagnosed as physically significant.

As an example, consider hurricane counts over a sequence of N years with a constant annual Poisson rate lambda. What is the probability that you will find at least M of these years with a count less than X (described as an inactive season) or a count greater than Y (described as an active season)?

Here are the steps in R notation. First compute the probability that a single year has a count less than X or greater than Y using the cumulative Poisson distribution (ppois()). Then, because annual counts are independent, the number of such years in a sequence of N years is binomial with that probability, so the chance of at least M such years is one minus the binomial cumulative distribution (pbinom()) evaluated at M - 1.

You create the PM() function to perform these computations.

PM = function(X, Y, lambda, N, M) {
    PXY = 1 - diff(ppois(c(X - 1, Y), lambda))
    return(1 - pbinom(M - 1, N, PXY))
}
Arguments for ppois() are q (quantile) and lambda (rate) and the arguments for pbinom() are q, size, and prob.

You use your function to answer the following question. Given an annual rate of 6 hurricanes per year (lambda), what is the probability that in a random sequence of 10 years (N) you will find at least two years (M) with a hurricane count less than 3 (X) or greater than 9 (Y)?

PM(X = 3, Y = 9, lambda = 6, N = 10, M = 2)
## [1] 0.4405

Thus you find a 44% chance of having at least two years with large departures from the mean rate.

The function is handy. It protects you against getting fooled by randomness. Indeed, the probability that at least one year in ten falls outside the range of +/-2 standard deviations from the mean is 80%.

This compares to 37% for a set of variables described by a normal distribution and underscores the limitation of using ideas derived from continuous distributions on count data.
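The normal-distribution figure can be checked directly. This short sketch is ours, not part of the original analysis: the chance a normal value falls outside two standard deviations of its mean is 2*pnorm(-2), and the chance that at least one of ten independent values does so follows from the complement rule.

```r
# Probability a single normal value lies outside +/- 2 standard deviations
p.out = 2 * pnorm(-2)
# Probability at least one of ten independent values does so
1 - (1 - p.out)^10   # approximately 0.37
```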

On the other hand, if you consider the annual global tropical cyclone counts over the period 1981–2006 (from Elsner et al. 2008), you find a mean of 80.7 tropical cyclones per year with a range between 66 and 95.

Assuming the counts are Poisson you use your function to determine the probability that no years have less than 66 or more than 95 in the 26-year sample.

gtca = c(66, 78, 72, 89, 89, 78, 78, 73, 79, 81, 70, 93, 73, 92, 78, 95, 88, 
    78, 72, 84, 84, 79, 81, 84, 90, 74)
1 - PM(X = 66, Y = 95, lambda = 80.7, N = 26, M = 1)
## [1] 0.07566

This low probability provides suggestive evidence to support the notion that the physical processes governing global hurricane activity are more regular than Poisson.

The regularity could be due to feedbacks to the climate system. For example, the cumulative effect of many hurricanes over a particular basin might make the atmosphere less conducive for activity in other basins. Or it might be related to a governing mechanism like the North Atlantic Oscillation (Elsner and Kocher 2000).

Moving average

A moving average removes year-to-year fluctuation in counts. The assumption is that of a smoothly varying rate process.

You use the filter() function to compute running means. The first argument in the function is a univariate or multivariate time series and the second is the filter as a vector of coefficients in reverse time order.

For a moving average of length \( N \) the coefficients have the value of \( 1/N \). For example, to compute the 5-year running average of the basin-wide hurricane counts, type

ma = filter(dat$B.1, rep(1, 5)/5)
str(ma, strict.width = "cut")
##  Time-Series [1:111] from 1 to 111: NA NA 4.2 3.8 4 3.4 3.2 3.8 4.2 3.6 ...

The output is an object of class ts (time series). Values at the ends of the series are not filtered, so NA's are used. With a filter of odd length N, there are (N - 1)/2 missing values at each end of the filtered series.

Here you create a new function called moveavg() and use it to compute the moving averages of basin counts over 5, 11, and 21 years.

moveavg = function(X, N) {
    filter(X, rep(1, N)/N)
}
h.5 = moveavg(dat$B.1, 5)
h.11 = moveavg(dat$B.1, 11)
h.21 = moveavg(dat$B.1, 21)

Plot the moving averages on top of the observed counts.

cls = c("grey", "red", "blue", "green")
lg = c("Count", "5-yr rate", "11-yr rate", "21-yr rate")
plot(dat$Yr, dat$B.1, xlab = "Year", ylab = "Hurricane count/rate", col = "grey", 
    type = "h", lab = c(10, 7, 20), lwd = 4)
points(dat$Yr, dat$B.1, type = "h", lwd = 4, col = "grey")
lines(dat$Yr, h.5, col = "red", lwd = 2)
lines(dat$Yr, h.11, col = "blue", lwd = 2)
lines(dat$Yr, h.21, col = "green", lwd = 2)
legend("topleft", lty = 1, lwd = c(4, 2, 2, 2), col = cls, legend = lg, bg = "white", 
    cex = 0.65)

plot of chunk hurricaneMovingAverage

Notice the reduction in the year-to-year variability as the length of the moving average increases. Note also that the low frequency variation is not affected. Check this by comparing the means (the mean is the zero frequency) of the moving averages. Thus a moving average is a low-pass filter.
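The low-pass property can be checked with a small sketch using made-up counts (ours, for illustration): the mean of the smoothed interior values stays close to the mean of the raw series while the year-to-year variance drops sharply.

```r
x = c(3, 5, 4, 6, 8, 2, 7, 5, 6, 4)       # illustrative counts, not real data
ma5 = filter(x, rep(1, 5)/5)              # 5-point moving average
mean(x)                                   # raw mean
mean(ma5, na.rm = TRUE)                   # close to the raw mean
var(x); var(ma5, na.rm = TRUE)            # variance is much reduced
```

The means are not exactly equal because the unfiltered end values are dropped, but they agree closely for a long series.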


Seasonality

One of the most obvious climatological characteristics of hurricanes is seasonality. Very few hurricanes occur before July 15th, September is the most active month, and the season is typically over by November.

In general, the ocean is too cool and the wind shear too strong during the months of January through May and November through December.

Seasonality is evident in plots showing the historical number of hurricanes that have occurred on each day of the year. Here you model this seasonality to produce a probability of hurricane occurrence as a function of the day of year.

You use the hourly interpolated best-track data (best.use.RData). The data span the years 1851–2010. Load the data frame and subset on hurricane-force wind speeds.

H.df = subset(best.use, WmaxS >= 64)
head(H.df)
##     Sid Sn SYear       name   Yr Mo Da hr   lon lat  Wmax WmaxS DWmaxDt
## 1     1  1  1851 NOT NAMED  1851  6 25  0 -94.8  28 80.00 79.76 0.08598
## 1.1   1  1  1851 NOT NAMED  1851  6 25  1 -94.9  28 80.03 79.91 0.09957
## 1.2   1  1  1851 NOT NAMED  1851  6 25  2 -95.0  28 80.05 80.05 0.11140
## 1.3   1  1  1851 NOT NAMED  1851  6 25  3 -95.1  28 80.07 80.18 0.11970
## 1.4   1  1  1851 NOT NAMED  1851  6 25  4 -95.2  28 80.06 80.30 0.12271
## 1.5   1  1  1851 NOT NAMED  1851  6 25  5 -95.3  28 80.04 80.40 0.11867
##     Type Shour maguv diruv    jd     M
## 1      *     0 5.241 270.9 175.0 FALSE
## 1.1    *     1 5.246 270.8 175.0 FALSE
## 1.2    *     2 5.261 270.5 175.1 FALSE
## 1.3    *     3 5.286 270.2 175.1 FALSE
## 1.4    *     4 5.322 269.7 175.2 FALSE
## 1.5    *     5 5.368 269.1 175.2 FALSE

Next, create a factor variable from the day-of-year column (jd). The day of year starts on the first of January. You use only the integer portion as the rows correspond to separate hours.

jdf = factor(trunc(H.df$jd), levels = 1:365)

The vector contains the day of year (1 through 365) for all 83151 hurricane hours in the data set. You could use 366, but there are no hurricanes on December 31st during any leap year over the period of record.

Next, use the table() function on the vector to obtain the total hurricane hours by day of year, then create a count of hurricane days by dividing the number of hours by 24 and rounding to the nearest integer.

Hhrs = as.numeric(table(jdf))
Hd = round(Hhrs/24, 0)

The vector Hd contains the number of hurricane days over the period of record for each day of the year.

A plot of the raw counts shows the variation from day to day is rather large. Here you create a model that smooths these variations. This is done with the gamlss() function from the gamlss package (Rigby and Stasinopoulos 2005).

You model your counts using a Poisson distribution with the logarithmic link as a function of day of year.

julian = 1:365
sm = gamlss(Hd ~ pb(julian), family = PO, trace = FALSE)

Here you use a non-parametric smoothing on the Julian day. The function is a penalized B-spline (Eilers and Marx 1996) and is indicated as pb() in the model formula. The penalized B-spline is an extension of the Poisson regression model that conserves the mean and variance of the daily hurricane counts and has a polynomial curve as the limit. The Poisson distribution is specified in the family argument with PO.

Although there are days with hurricanes outside the main season, your interest centers on the months of June through November. Here you create a sequence of Julian days defining the hurricane season and convert them to dates.

hs = 150:350
doy = as.Date("1970-12-31") + hs

You then convert the hurricane days to a relative frequency to allow for a probabilistic interpretation. This is done for the actual counts and the smoothed modeled counts.

ny = (2010 - 1851) + 1
Hdm = Hd[hs]/ny
smf = fitted(sm)[hs]/ny

Finally you plot the modeled and actual daily frequencies by typing

plot(doy, Hdm, pch = 16, xlab = "", ylab = "Frequency (days/yr)")
lines(doy, smf, lwd = 2, col = "red")

plot of chunk plotActualvsModeled

Circles show the relative frequency of hurricanes by day of year. The red line is the fitted values of a model for the frequencies. Horizontal tic marks indicate the first day of the month.

On average hurricane activity increases slowly until the beginning of August as the ocean warms and wind shear subsides. The increase is more pronounced starting in early August and peaks around the first or second week in September.

The decline starting in mid September is somewhat less pronounced than the increase and is associated with ocean cooling. There is a minor secondary peak during the middle of October related to hurricane genesis over the western Caribbean Sea. The climate processes that make this part of the basin relatively active at this time of the year are likely somewhat different from the processes occurring during the peak of the season.

Change Points

Hurricane activity can change abruptly going from active to inactive in a matter of a year or so. In this case a change-point model is appropriate for describing the time series.

Here a change point refers to a jump in the rate of activity from one set of years to the next. The underlying assumption is a discontinuity in the rates.

For example, suppose hurricanes suddenly become more frequent in the years 1934 and 1990, then the model would still be Poisson, but with different rates in the periods (epochs) 1900–1933, 1934–1989, and 1990–2010.
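The structure of such a model can be sketched with simulated data (the rates below are invented for illustration): the counts are Poisson throughout, but the mean differs by epoch.

```r
set.seed(7)
# Annual counts for epochs 1900-1933, 1934-1989, and 1990-2010
# with assumed rates of 4, 6, and 8 hurricanes per year
y = c(rpois(34, 4), rpois(56, 6), rpois(21, 8))
epoch = rep(1:3, c(34, 56, 21))
tapply(y, epoch, mean)   # sample means near the epoch rates
```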


The simplest approach is to restrict your search to a single change point. For instance, you check to see if a model that has a rate change during a given year is better than a model that does not have a change during that year.

In this case you have two models; one with a change point and one without one. To make a choice, you check to see which model has the lower Schwarz Bayesian Criterion (SBC).

The SBC is proportional to \( -2\log[p(\hbox{data}|\hbox{model})] \), where \( p(\hbox{data}|\hbox{model}) \) is the probability of the data given the model.
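To make the criterion concrete, here is a small base-R sketch (our own, with simulated counts) showing that the SBC, known in R as the BIC, equals \( -2\log L \) plus \( \log(n) \) for each estimated parameter.

```r
set.seed(1)
y = rpois(100, lambda = 6)            # simulated annual counts (illustrative)
fit = glm(y ~ 1, family = poisson)    # intercept-only Poisson model
BIC(fit)                              # SBC as computed by R
-2 * as.numeric(logLik(fit)) + 1 * log(length(y))   # same value by hand
```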

This comparison can be done using the gamlss() function. Make the package available and obtain the SBC value for each of three models by typing

require(gamlss, quietly = TRUE)
gamlss(B.1 ~ 1, family = PO, data = dat, trace = FALSE)$sbc
## [1] 528.5
gamlss(B.1 ~ I(Yr >= 1910), family = PO, data = dat, trace = FALSE)$sbc
## [1] 528.7
gamlss(B.1 ~ I(Yr >= 1940), family = PO, data = dat, trace = FALSE)$sbc
## [1] 514.8

Here the Poisson family is given as PO with the logarithm of the rate as the default link (Stasinopoulos and Rigby 2007). The first model is one with no change point.

The next two are change-point models, the first having a change point in the year 1910 and the second having a change point in 1940. The change-point models use the indicator function I() to assign TRUE or FALSE to each year based on a logical expression involving the variable Yr.

The SBC value is 528.5 for the model with no change points. This compares with an SBC value of 528.7 for the change point model where the change occurs in 1910 and a value of 514.8 for the change point model where the change occurs in 1940. Since the SBC is lower in the latter case, 1940 is a candidate year for a change point.

You apply the above procedure successively, where each year is considered in turn as a possible change point. You then plot the SBC as a function of year, adding a horizontal reference line for the no-change-point model.

sbc.null = gamlss(B.1 ~ 1, family = PO, data = dat, trace = FALSE)$sbc + 2 * 
    log(20)
changepoints = 1901:2010
sbc.change = sapply(changepoints, function(x) gamlss(B.1 ~ I(Yr >= x), family = PO, 
    data = dat, trace = FALSE)$sbc)
par(las = 1, mgp = c(2, 0.4, 0), tcl = -0.3)
plot(changepoints, sbc.change, ylab = "SBC value", xlab = "Year", ylim = c(520, 
    550), lab = c(10, 7, 20), lwd = 2, type = "l")
abline(h = sbc.null, col = "grey")
lines(changepoints, sbc.change, lwd = 2)
changepoints2 = c(1995, 1948, 1944, 1932)
rug(changepoints2, lwd = 2, col = "red")

plot of chunk sbcChangePoints

The horizontal line is the SBC for a model with no change points and the tick marks are local minima of the SBC. Here the SBC for the model without a change point is adjusted by adding \( 2\log(20) \) to account for the prior possibility of 5 or 6 equally likely change points over the period of record.

Here you find four candidate change points based on local minima of the SBC. The years are 1995, 1948, 1944, and 1932.

You assume a prior that allows only one change point per decade and that the posterior probability of the intercept model is 20 times that of the change point model. This gives you 12 possible models (1995 only, 1995 & 1948, 1995 & 1948 & 1932, etc) including the intercept-only model but excludes models with both 1944 and 1948 as the changes occur too close in time.

Next, you estimate the posterior probabilities for each of the 12 models using \[ \mathrm{Pr}(M_i|\mathrm{data})=\frac{\exp(-.5\cdot\mathrm{SBC}(M_i))}{\sum_{j=1}^{12} \exp(-.5\cdot\mathrm{SBC}(M_j))} \] where the models are given by \( M_i \), for \( i=1, \ldots, 12 \). The results are shown in the table.

changepointstext = sapply(changepoints2, function(x) paste("I(Yr>=", x, ")"))
modelarray ="expand.grid", rep(list(c(FALSE, TRUE)), 4))[-1, ]
modelarray = modelarray[!(modelarray[, 2] & modelarray[, 3]), ]
modelformulas = lapply(data.frame(t(modelarray)), function(x) formula(paste("B.1~", 
    paste(changepointstext[x], collapse = "+", sep = ""), sep = "")))
modelformulas = c(X1 = formula("B.1~1"), modelformulas)
modelfits = lapply(modelformulas, function(x)"gamlss", list(x, family = quote(PO( = "identity")), 
    data = dat, trace = FALSE)))
modelsbc = sapply(modelfits, function(x) x$sbc)
modelspostprob = exp(-0.5 * (modelsbc - min(modelsbc)))
modelspostprob = modelspostprob/sum(modelspostprob)
modelspostprobround = round(modelspostprob, 3)
modelorder = order(modelsbc)
modelsandprob = data.frame(Formula = gsub(fixed = TRUE, " ", "", sapply(modelformulas[modelorder], 
    function(x) deparse(x))), Probability = modelspostprobround[modelorder])
bestmodel = modelfits[[which(max(modelspostprob) == modelspostprob)[1]]]
coefs = round(coef(bestmodel), 2)
tbl = xtable(modelsandprob, label = "tab:modelposteriorprobs", caption = "Model posterior probabilities from most (top) to least probable.")
print(tbl, caption.placement = "top")
## % latex table generated in R 2.15.2 by xtable 1.7-0 package
## % Thu Apr 18 16:06:28 2013
## \begin{table}[ht]
## \begin{center}
## \caption{Model posterior probabilities from most (top) to least probable.}
## \label{tab:modelposteriorprobs}
## \begin{tabular}{rlr}
##   \hline
##  & Formula & Probability \\ 
##   \hline
## X10 & B.1\~{}I(Yr$>$=1995)+I(Yr$>$=1932) & 0.43 \\ 
##   X6 & B.1\~{}I(Yr$>$=1995)+I(Yr$>$=1944) & 0.20 \\ 
##   X4 & B.1\~{}I(Yr$>$=1995)+I(Yr$>$=1948) & 0.18 \\ 
##   X12 & B.1\~{}I(Yr$>$=1995)+I(Yr$>$=1948)+I(Yr$>$=1932) & 0.07 \\ 
##   X14 & B.1\~{}I(Yr$>$=1995)+I(Yr$>$=1944)+I(Yr$>$=1932) & 0.06 \\ 
##   X3 & B.1\~{}I(Yr$>$=1948) & 0.02 \\ 
##   X5 & B.1\~{}I(Yr$>$=1944) & 0.01 \\ 
##   X2 & B.1\~{}I(Yr$>$=1995) & 0.01 \\ 
##   X9 & B.1\~{}I(Yr$>$=1932) & 0.01 \\ 
##   X11 & B.1\~{}I(Yr$>$=1948)+I(Yr$>$=1932) & 0.01 \\ 
##   X13 & B.1\~{}I(Yr$>$=1944)+I(Yr$>$=1932) & 0.00 \\ 
##   X1 & B.1\~{}1 & 0.00 \\ 
##    \hline
## \end{tabular}
## \end{center}
## \end{table}

The top three models have a total posterior probability of 80%. These models all include 1995 with 1932, 1944, and 1948 competing as the second most important change-point year. You can select any one of the models, but it makes sense to choose one with a relatively high posterior probability. Note the weaker support for the single change-point models and even less support for the no change point model.

The single best model has change points in 1932 and 1995. The coefficients of this model are shown here.

tbl = xtable(glm(formula(bestmodel), family = poisson(link = "identity"), data = dat), 
    label = "tab:coefficients", caption = "Best model coefficients and standard errors.")
print(tbl, caption.placement = "top")
## % latex table generated in R 2.15.2 by xtable 1.7-0 package
## % Thu Apr 18 16:06:28 2013
## \begin{table}[ht]
## \begin{center}
## \caption{Best model coefficients and standard errors.}
## \label{tab:coefficients}
## \begin{tabular}{rrrrr}
##   \hline
##  & Estimate & Std. Error & t value & Pr($>$$|$t$|$) \\ 
##   \hline
## (Intercept) & 3.9063 & 0.4101 & 9.53 & 0.0000 \\ 
##   I(Yr $>$= 1995)TRUE & 2.5069 & 0.6494 & 3.86 & 0.0002 \\ 
##   I(Yr $>$= 1932)TRUE & 1.6493 & 0.5036 & 3.28 & 0.0014 \\ 
##    \hline
## \end{tabular}
## \end{center}
## \end{table}

The model predicts a rate of 3.9 hur/yr in the period 1900–1931. The rate jumps to 6.4 hur/yr in the period 1932–1994 and jumps again to 8.1 hur/yr in the period 1995–2010.


To understand what might be causing the abrupt shifts in hurricane activity, here you include known covariates in the model. The idea is that if a shift is no longer significant after adding a covariate, then you conclude that a change in climate is the likely causal mechanism.

The two important covariates for annual basin-wide hurricane frequency are SST and the SOI. You first fit and summarize a model using the two change points and these two covariates.

model1 = gamlss(B.1 ~ I(Yr >= 1932) + I(Yr >= 1995) + sst + soi, family = PO, 
    data = dat, trace = FALSE)
summary(model1)
## *******************************************************************
## Family:  c("PO", "Poisson") 
## Call:  gamlss(formula = B.1 ~ I(Yr >= 1932) + I(Yr >= 1995) + sst +  
##     soi, family = PO, data = dat, trace = FALSE) 
## Fitting method: RS() 
## -------------------------------------------------------------------
## Mu link function:  log
## Mu Coefficients:
##                    Estimate  Std. Error  t value  Pr(>|t|)
## (Intercept)          1.4421      0.0993    14.52  5.88e-27
## I(Yr >= 1932)TRUE    0.2715      0.1176     2.31  2.29e-02
## I(Yr >= 1995)TRUE    0.2049      0.1300     1.58  1.18e-01
## sst                  0.3806      0.2228     1.71  9.06e-02
## soi                  0.0514      0.0142     3.62  4.47e-04
## -------------------------------------------------------------------
## No. of observations in the fit:  111 
## Degrees of Freedom for the fit:  5
##       Residual Deg. of Freedom:  106 
##                       at cycle:  2 
## Global Deviance:     475 
##             AIC:     485 
##             SBC:     498.5 
## *******************************************************************

You find the change point at 1995 has the largest \( p \)-value among the variables. You also note that the model has an SBC of 498.5.

You consider whether the model can be improved by removing the change point at 1995, so you remove it and refit the model.

model2 = gamlss(B.1 ~ I(Yr >= 1932) + sst + soi, family = PO, data = dat, trace = FALSE)
summary(model2)
## *******************************************************************
## Family:  c("PO", "Poisson") 
## Call:  gamlss(formula = B.1 ~ I(Yr >= 1932) + sst + soi, family = PO,  
##     data = dat, trace = FALSE) 
## Fitting method: RS() 
## -------------------------------------------------------------------
## Mu link function:  log
## Mu Coefficients:
##                    Estimate  Std. Error  t value  Pr(>|t|)
## (Intercept)          1.4811      0.0955    15.51  3.91e-29
## I(Yr >= 1932)TRUE    0.2550      0.1165     2.19  3.08e-02
## sst                  0.5880      0.1790     3.29  1.38e-03
## soi                  0.0536      0.0142     3.78  2.60e-04
## -------------------------------------------------------------------
## No. of observations in the fit:  111 
## Degrees of Freedom for the fit:  4
##       Residual Deg. of Freedom:  107 
##                       at cycle:  2 
## Global Deviance:     477.4 
##             AIC:     485.4 
##             SBC:     496.3 
## *******************************************************************

You find all variables statistically significant (\( p \)-value less than 0.1) and the model has an SBC of 496.3, which is lower than the SBC of your first model that includes 1995 as a change point.

Thus you conclude that the shift in rate at 1995 is more likely the result of a synchronization (Tsonis et al. 2006) of the effects of rising SST and ENSO on hurricane activity than is the shift in 1932.

The shift in 1932 remains important after including the SST and ENSO influences, providing evidence that the increase in activity at this time is likely due, at least in part, to improvements in observing technologies.

Finally, for comparison, you fit a model that includes the covariates but no change points.

model3 = gamlss(B.1 ~ sst + soi, family = PO, data = dat, trace = FALSE)
summary(model3)
## *******************************************************************
## Family:  c("PO", "Poisson") 
## Call:  
## gamlss(formula = B.1 ~ sst + soi, family = PO, data = dat, trace = FALSE) 
## Fitting method: RS() 
## -------------------------------------------------------------------
## Mu link function:  log
## Mu Coefficients:
##              Estimate  Std. Error  t value  Pr(>|t|)
## (Intercept)    1.6638      0.0423    39.37  6.95e-66
## sst            0.7953      0.1518     5.24  8.11e-07
## soi            0.0524      0.0142     3.69  3.48e-04
## -------------------------------------------------------------------
## No. of observations in the fit:  111 
## Degrees of Freedom for the fit:  3
##       Residual Deg. of Freedom:  108 
##                       at cycle:  2 
## Global Deviance:     482.3 
##             AIC:     488.3 
##             SBC:     496.5 
## *******************************************************************

A change-point model is useful for detecting rate shifts caused by climate and observational improvements. When used together with climate covariates it can help you differentiate between these two possibilities. However, change-point models are not particularly useful for predicting when the next change will occur.

Continuous Time Series

Sea-surface temperature, the SOI, and the NAO are continuous time series. Values fluctuate over a range of scales often without abrupt changes. In this case it can be useful to split the series into a few components where each component has a smaller range of scales.

Here the goal is to decompose the SST time series as an initial step in creating a time-series model. The model can be used to make predictions of future SST values. Future SST values are subsequently used in your hurricane frequency model to forecast the probability of hurricanes (Elsner et al. 2008).

You return to your monthly SST values over the period 1856–2010. You input the data and create a continuous-valued time series object (sst.ts) containing monthly SST values beginning with January 1856.

con = "SST.txt"
SST = read.table(con, header = TRUE)
sst.m = as.matrix(SST[6:160, 2:13])
sst.v = as.vector(t(sst.m))
sst.ts = ts(sst.v, frequency = 12, start = c(1856, 1))

First you plot your time series by typing

plot(sst.ts, ylab = "SST (C)")

plot of chunk plotSeries

The graph shows the time series is dominated by interannual variability. The ocean is coldest in February and March and warmest in August and September.

The average temperature during March is 18.6\(^\circ\)C and during August it is 23.1\(^\circ\)C. There also appears to be a trend toward greater warmth, although it is difficult to see because of the larger interannual variations.

The observed series can be decomposed into a few components. This is done here using the stl() function, which accepts a time series object as its first argument and the type of smoothing window is specified through the s.window argument.

sdts = stl(sst.ts, s.window = "periodic")

The seasonal component is found by a local regression smoothing of the monthly means. The seasonal values are then subtracted, and the remainder of the series smoothed to find the trend. The overall time-series mean value is removed from the seasonal component and added to the trend component.

The process is iterative. What remains is the difference between the actual monthly values and the sum of the seasonal and trend components. If you have change points in your time series you can use the bfast package and the bfast() function instead to decompose your time series. In this case the trend component has the change points.
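You can verify this accounting on any series: the stl() components sum exactly back to the observations. A quick sketch using R's built-in co2 series (our example, not the SST data):

```r
d = stl(co2, s.window = "periodic")            # decompose the monthly CO2 series
comp = rowSums(d$time.series)                  # seasonal + trend + remainder
all.equal(as.numeric(comp), as.numeric(co2))   # components sum to the data
```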

To plot the raw and component series the data are prepared as follows. First a vector of dates is constructed using the seq.dates() function from the chron package. This allows you to display the graphs at the points that correspond to real dates.

date = seq.dates(from = "01/01/1856", to = "12/31/2010", by = "months")

Next a data frame is constructed that contains the vector of dates, the raw monthly SST time series, and the corresponding components from the seasonal decomposition.

datw = data.frame(Date = as.Date(date), Raw = as.numeric(sst.ts), Seasonal = as.numeric(sdts$time.series[, 
    1]), Trend = as.numeric(sdts$time.series[, 2]), Residual = as.numeric(sdts$time.series[, 
    3]))
head(datw)
##         Date   Raw Seasonal Trend  Residual
## 1 1856-01-01 19.14  -1.6208 20.73  0.025866
## 2 1856-02-01 18.62  -2.0596 20.72 -0.040363
## 3 1856-03-01 18.68  -2.0685 20.72  0.024530
## 4 1856-04-01 19.02  -1.6399 20.71 -0.056306
## 5 1856-05-01 19.94  -0.7563 20.71 -0.013180
## 6 1856-06-01 21.18   0.4777 20.70  0.001586

Here the data are in the 'wide' form like a spreadsheet. To make them easier to plot as separate time series graphs you create a 'long' form of the data frame with the melt() function in the reshape package.

The function melts your data frame into a form suitable for casting (Wickham 2007). You specify the data frame and your Date column as the id variable. The function assumes the remaining variables are measure variables (non-id variables), with the column names turned into a vector of factors.

datl = melt(datw, id = "Date")
head(datl)
##         Date variable value
## 1 1856-01-01      Raw 19.14
## 2 1856-02-01      Raw 18.62
## 3 1856-03-01      Raw 18.68
## 4 1856-04-01      Raw 19.02
## 5 1856-05-01      Raw 19.94
## 6 1856-06-01      Raw 21.18
tail(datl)
##            Date variable    value
## 7435 2010-07-01 Residual  0.08074
## 7436 2010-08-01 Residual  0.14887
## 7437 2010-09-01 Residual  0.06011
## 7438 2010-10-01 Residual -0.05015
## 7439 2010-11-01 Residual -0.11573
## 7440 2010-12-01 Residual -0.16662

Here you make use of the ggplot2 functions to create a facet grid displaying your time series plots on a common time axis. The argument scale = "free_y" allows the y axes to have different scales. This is important because the decomposition produces a large seasonal component centered on zero, while the trend component varies less but remains on the same scale as the raw data.

ggplot(datl, aes(x = Date, y = value)) + geom_line() + facet_grid(variable ~ 
    ., scale = "free_y") + ylab("SST [C]") + xlab("") + theme_bw()

plot of chunk timeSeriesDecompositionPlot

The observed (raw) values are shown in the top panel. The seasonal component, trend component, and residuals are also shown in separate panels on the same time-series axis. Temperatures increase by more than 0.5°C over the past 100 years. But the trend is not monotonic.

The residuals show large year-to-year variation, generally between −0.15 and +0.15°C, with somewhat larger variation before about 1871.

You can build separate time series models for each component. For example, for the residual component (\( R_t \)) an autoregressive moving average (ARMA) model can be used. An ARMA model with \( p \) autoregressive terms and \( q \) moving average terms [ARMA(\( p \), \( q \))] is given by \[ R_t = \sum_{i=1}^p \phi_i R_{t-i} + \sum_{i=1}^q \theta_i \varepsilon_{t-i} + \varepsilon_t \] where the \( \phi_i \)'s and the \( \theta_i \)'s are the parameters of the autoregressive and moving average terms, respectively, and the \( \varepsilon_t \)'s are white noise terms assumed to be described by independent normal distributions with zero mean and variance \( \sigma^2 \).
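In R an ARMA(\( p \), \( q \)) model is fit with arima() using a differencing order of zero. A minimal sketch on simulated data (the series and orders here are illustrative, not the SST residuals):

```r
set.seed(1)
# Simulate an ARMA(1, 1) series, then recover its parameters
x <- arima.sim(model = list(ar = 0.6, ma = 0.3), n = 500)
fit <- arima(x, order = c(1, 0, 1))  # ARMA(1, 1) = ARIMA(1, 0, 1)
coef(fit)  # estimates of phi_1, theta_1, and the intercept
```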

For the trend component an ARIMA model is more appropriate. An ARIMA model generalizes the ARMA model by removing the non-stationarity through an initial differencing step.
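The differencing step is simply diff(). A short sketch showing that first differences of a trending series fluctuate around the slope (the numbers are illustrative):

```r
set.seed(1)
t <- 1:200
x <- 0.05 * t + rnorm(200, sd = 0.1)  # linear trend plus noise
dx <- diff(x)                         # first differences: stationary around 0.05
c(mean(dx), sd(dx))
```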

Here you use the ar() function to determine the autoregressive portion of the series using the AIC.

ar(datw$Trend)
## Call:
## ar(x = datw$Trend)
## Coefficients:
##      1       2       3       4       5       6       7       8       9  
##  1.279  -0.058  -0.101  -0.062  -0.045  -0.032  -0.008  -0.023   0.010  
##     10      11  
## -0.034   0.067  
## Order selected 11  sigma^2 estimated as  0.000298

The result shows an autoregressive order of 11 months. Continuing, you assume the integrated and moving average orders are each one.

model = arima(datw$Trend, order = c(11, 1, 1))

You then use the model to make monthly forecasts out to 36 months with the predict() method. The number of steps ahead is specified by the n.ahead argument.

nfcs = 36
fcst = predict(model, n.ahead = nfcs)

You plot the forecasts along the corresponding date axis by typing

newdate = seq.dates(from = "01/01/2011", to = "12/01/2013", by = "months")
plot(c(datw$Date[1801], newdate[nfcs]), c(min(datw$Trend), max(datw$Trend) + 
    0.3), type = "n", ylab = "SST [C]", xlab = "Date (MM/YY)")
lines(datw$Date, datw$Trend, lwd = 2)
pm = fcst$pred
pl = fcst$pred - 1.96 * fcst$se
pu = fcst$pred + 1.96 * fcst$se
xx = c(newdate, rev(newdate))
yy = c(pl, rev(pu))
polygon(xx, yy, border = NA, col = "gray")
lines(newdate, pm, lwd = 2, col = "red")

plot of chunk plotForecasts

The observed values are in black and the forecast values are in red. A 95% confidence band is shown in gray. The confidence band is quite large after a few months. A forecast of the actual SST must include forecasts for the seasonal and residual components as well.

Time Series Network

Network analysis is the application of graph theory. Graph theory is the study of mathematical structures used to model pairwise relations between objects. Objects and relations can be many things with the most familiar being people and friendships.

Network analysis was introduced into climatology by Tsonis and Roebber (2004). They used values of geopotential height on a spatial grid and the relationships were based on pairwise correlation. Here you use network analysis to examine year-to-year relationships in hurricane activity.

The idea is relatively new (Lacasa et al. 2008) and requires mapping a time series to a network. The presentation follows the work of Elsner et al. (2009).

Time series visibility

How can a time-series of hurricane counts be represented as a network? Consider the following plot.

yr = 1851:1870
net1 = get.visibility(annual$US.1[1:length(yr)])
par(las = 1, mgp = c(2, 0.4, 0), tcl = -0.3)
plot(yr, annual$US.1[1:length(yr)], type = "h", xlab = "Year", ylab = "Hurricane count", 
    lwd = 10, xaxt = "n")
axis(1, at = seq(1851, 1870, 1), labels = seq(1851, 1870, 1), cex.axis = 0.8)
for (i in 1:nrow(net1$sm)) {
    lines(c(yr[net1$sm[i, 1]], yr[net1$sm[i, 2]]), c(annual$US.1[net1$sm[i, 
        1]], annual$US.1[net1$sm[i, 2]]), col = "lightgrey")
}

plot of chunk timeSeriesLinks

The time series of U.S. hurricane counts forms a discrete landscape. A bar is connected to another bar if there is a line of sight (visibility line) between them. Here visibility lines are drawn between every pair of bars that can see each other.

It is clear that 1869 by virtue of its relatively high hurricane count (4) can see 1852, 1854, 1860, 1861, 1867, 1868, and 1870, while 1868 with its zero count can see only 1867 and 1869. Lines do not cut through bars. In this way, each year in the time series is linked in a network. The nodes are the years and the links (edges) are the visibility lines.

More formally, let \( h_a \) be the hurricane count for year \( t_a \) and \( h_b \) the count for year \( t_b \), then the two years are linked if for any other year \( t_i \) with count \( h_i \) \[ h_i \leq h_b + (h_a - h_b)\frac{t_b-t_i}{t_b-t_a} \] By this definition each year is visible to at least its nearest neighbors (the year before and the year after), but not itself. The network is invariant under rescaling the horizontal or vertical axes of the time series as well as under horizontal and vertical translations (Lacasa et al. 2008).
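The condition can be coded directly. Here is a minimal sketch (is_visible() is a hypothetical helper, not the chapter's get.visibility()), with counts h indexed by position so that \( t = 1, 2, \ldots \):

```r
# Are the bars at positions a and b mutually visible, given counts h?
is_visible <- function(h, a, b) {
  if (a > b) { tmp <- a; a <- b; b <- tmp }  # order the two years
  if (b - a == 1) return(TRUE)               # nearest neighbors always see each other
  i <- (a + 1):(b - 1)                       # intermediate years
  # Height of the sight line from (a, h[a]) to (b, h[b]) evaluated at each t_i
  line <- h[b] + (h[a] - h[b]) * (b - i) / (b - a)
  all(h[i] <= line)                          # no intermediate bar blocks the line
}

is_visible(c(1, 5, 1), 1, 3)  # FALSE: the middle bar blocks the view
is_visible(c(2, 1, 3), 1, 3)  # TRUE: the middle bar sits below the sight line
```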

The network arises by releasing the years from chronological order and treating them as nodes linked by visibility lines.

Here we see that 1869 is well connected while 1853 is not. Years featuring many hurricanes generally result in more links especially if neighboring years have relatively few hurricanes. This can be seen by comparing 1853 with 1858.

Both years have a single hurricane, but 1858 is adjacent to years that also have a single hurricane so it is linked to four other years. In contrast, 1853 is next to two years each with three hurricanes so it has the minimum number of two links. The degree of a node is the number of links connected to it.

The function get.visibility() available in get.visibility.R computes the visibility lines. It takes a vector of counts as input and returns three lists: one containing the incidence matrix (sm), another the set of node edges (node), and a third the degree distribution (pk), indicating the number of years with \( k \) edges.

Compute the visibility lines by typing,

vis = get.visibility(annual$US.1)

Network plot

You use the network() function from the network package (Butts et al. 2011) to create a network object from the incidence matrix by typing

net = network(vis$sm, directed = FALSE)

Then use the plot() method for network objects to graph the network.

plot(net, label = 1851:2010, label.cex = 0.6, vertex.cex = 1.5, label.pos = 5, 
    edge.col = "grey")

plot of chunk plotNetwork

Node color indicates the number of links (degree) going from light purple (few) to red.

breaks = c(0, 2, 5, 10, 20, 100)
adj = as.sociomatrix(net)
deg = sna::degree(adj, gmode = "graph")
catcut = as.numeric(cut(deg, breaks))
cls1 = c("#F1EEF6", "#D7B5D8", "#DF65B0", "#DD1C77", "#980043")
cls2 = c(rep("black", 4), "white")
plot(net, label = 1851:2010, mode = "kamadakawai", label.cex = 0.4, vertex.cex = 2.5, 
    vertex.lty = 0, edge.col = "lightgrey", edge.lwd = 0.2, vertex.col = cls1[catcut], 
    label.pos = 5, label.col = cls2[catcut])

plot of chunk plotNetwork2

The placement of years on the network plot is based on simulated annealing (Kamada and Kawai 1989) and the nodes colored based on the number of edges. Years with the largest number of edges are more likely to be found in dense sections of the network and are colored dark red. Years with fewer edges are found near the perimeter of the network and are colored light purple.

The sna package (Butts 2010) contains functions for computing properties of your network. First create a square adjacency matrix where the number of rows is the number of years and each element is a zero or one depending on whether the years are linked and with zeros along the diagonal (a year is not linked with itself). Then compute the degree of each year indicating the number of years it can see and find which years can see farthest.
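The degree computation amounts to row sums of the adjacency matrix. A toy sketch on a three-node graph (not the hurricane network):

```r
# Symmetric adjacency matrix: node 1 is linked to nodes 2 and 3
A <- matrix(c(0, 1, 1,
              1, 0, 0,
              1, 0, 0), nrow = 3, byrow = TRUE)
rowSums(A)  # degrees: 2, 1, 1
```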

adj = as.sociomatrix(net)
deg = sna::degree(adj, gmode = "graph")
deg[1:9]
## [1]  1  8  2 11  2  6  4  5  5
annual$Year[order(deg, decreasing = TRUE)][1:9]
## [1] 1886 1933 1985 1893 1950 1964 1906 1909 1924

The year with the highest degree is 1886. It is followed by 1933 with 31 links and 1985 with 28 links.

Other relatively highly connected years are 1893, 1950, 1964, and 1906 in that order. The average degree is 6.6, but the degree distribution is skewed so this number says little about a typical year.

Degree distribution and anomalous years

The total number of links in the network (summing the links over all nodes) is 1054. There are 160 nodes, so 20% of the network consists of 32 of them.

If you rank the nodes by number of links, you find that the top 20% account for 40% of the links.
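The calculation behind that claim is a simple ranking. A sketch with a toy degree sequence (not the hurricane network's degrees):

```r
deg0 <- c(10, 8, 2, 2, 1, 1, 1, 1, 1, 1)          # toy degree sequence
top <- sort(deg0, decreasing = TRUE)[1:(0.2 * length(deg0))]
sum(top) / sum(deg0)  # share of all links held by the top 20% of nodes
```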

You plot the degree distribution of your network by typing,

plot(vis$pk$k, cumsum(vis$pk$P), pch = 16, log = "x", ylab = "Proportion of Years With k or Fewer Links", 
    xlab = "Number of Links (k)")

plot of chunk plotDegreeDistribution

The distribution is the cumulative percentage of years with \( k \) or fewer links as a function of the number of links. The horizontal axis is plotted using a log scale. Just over 80% of all years have ten or fewer links and over 50% have five or fewer.

Although the degree distribution is skewed to the right it does not appear to represent a small-world network.

You perform a Monte Carlo (MC) simulation by randomly drawing counts from a Poisson distribution with the same number of years and the same hurricane rate as the observations.

A visibility network is constructed from the random counts and the degree distribution computed as before. The process is repeated 1000 times after which the median and quantile values of the degree distributions obtained.

kk = numeric()
cs = numeric()
for (i in 1:1000) {
    visR = get.visibility(rpois(length(annual$Year), mean(annual$US.1)))
    kk = c(kk, visR$pk$k)
    cs = c(cs, cumsum(visR$pk$P))
}
K = sort(deg, decreasing = TRUE)[1]
u = numeric()
m = numeric()
l = numeric()
for (k in 1:K) {
    u[k] = quantile(cs[kk == k], prob = 0.975)
    m[k] = quantile(cs[kk == k], prob = 0.5)
    l[k] = quantile(cs[kk == k], prob = 0.025)
}
par(las = 1, mgp = c(2, 0.4, 0), tcl = -0.3)
plot(vis$pk$k, cumsum(vis$pk$P), pch = 16, log = "x", ylab = "Proportion of Years With k or Fewer Links", 
    xlab = "Number of Links (k)")
abline(v = c(1, 2, 5, 10, 20), col = "lightgrey", lty = 2)
abline(h = seq(0, 1, 0.2), col = "lightgrey", lty = 2)
xxx = c(1:K, rev(1:K))
yyy = c(l, rev(u))
polygon(xxx, yyy, border = NA, col = "lightgray")
lines(1:K, m, col = "red", lwd = 2)
points(vis$pk$k, cumsum(vis$pk$P), pch = 16)

plot of chunk degreeDistribution2

The median distribution is shown as a red line and the 95% confidence interval shown as a gray band. Results indicate that the degree distribution of your hurricane count data does not deviate significantly from the degree distribution of a Poisson random time series.

However it does suggest a new way to think about anomalous years. Years are anomalous not in a statistical sense of violating a Poisson assumption, but in the sense that the temporal ordering of the counts identifies a year that is unique in that it has a large count but is surrounded before and after by years with low counts.

Thus we contend that node degree is a useful indicator of an anomalous year. That is, a year that stands above most of the other years, and particularly above its 'neighboring' years, represents more of an anomaly in a physical sense than does a year that is simply well above the average.

Node degree captures information about the frequency of hurricanes for a given year and information about the relationship of that frequency to the frequencies over the given year's recent history and near future.

The relationship between node degree and the annual hurricane count is tight, but not exact. Years with a low number of hurricanes are ones that are not well connected to other years, while years with an above normal number are ones that are more connected on average.

The Spearman rank correlation between year degree and year count is 0.73. But this is largely a result of low count years.

The correlation drops to 0.48 when considering only years with more than two hurricanes. Thus high count is necessary but not sufficient for characterizing the year as anomalous, as perhaps it should be.

Global metrics

Global metrics are used to compare networks from different data. One example is the diameter of the network, defined as the length of the longest geodesic path between any two years for which a path exists. A geodesic path (shortest path) between two years is a path such that no shorter path exists.

For instance you see that 1861 is connected to 1865 directly and through a connection with 1862. The direct connection is a path of length one while the connection through 1862 is a path of length two.
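Geodesic distances can be computed by breadth-first search; igraph's distance functions handle this in general, but a sketch on a toy four-node path graph shows the idea:

```r
# Adjacency matrix of a path graph 1-2-3-4
A <- matrix(c(0, 1, 0, 0,
              1, 0, 1, 0,
              0, 1, 0, 1,
              0, 0, 1, 0), nrow = 4, byrow = TRUE)
# Breadth-first search from node 1: d[v] is the geodesic distance to v
d <- rep(Inf, 4); d[1] <- 0
frontier <- 1
while (length(frontier) > 0) {
  nxt <- integer()
  for (v in frontier) {
    nb <- which(A[v, ] == 1 & is.infinite(d))  # unvisited neighbors
    d[nb] <- d[v] + 1
    nxt <- c(nxt, nb)
  }
  frontier <- nxt
}
d  # distances 0, 1, 2, 3; the longest geodesic from node 1 has length 3
```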

The igraph package (Csardi and Nepusz 2006) contains functions for computing network analytics. To find the diameter of your visibility network load the package, create the network (graph) from the list of edges, then use the diameter() function.

Prefix the function name with the package name and two colons to avoid a conflict with the same name from another loaded package.

vis = get.visibility(annual$US.1)
g = graph.edgelist(vis$sm, directed = FALSE)
igraph::diameter(g)
## [1] 5

The result indicates that any two years are separated by at most 5 links although there is more than one such geodesic.

Transitivity measures the probability that two neighbors of a node are themselves connected. Given that year \( i \) can see years \( j \) and \( k \), what is the probability that year \( j \) can see year \( k \)? In a social context it indicates the likelihood that two of your friends are themselves friends. To compute the transitivity for your visibility network type,

tran = transitivity(g)
round(tran, 3)
## [1] 0.468
visG = get.visibility(annual$G.1)
gG = graph.edgelist(visG$sm, directed = FALSE)
tranG = transitivity(gG)
visF = get.visibility(annual$F.1)
gF = graph.edgelist(visF$sm, directed = FALSE)
tranF = transitivity(gF)

Transitivity tells you that there is a 46.8% chance that two neighbors of a given node are themselves connected. The higher the probability, the denser the network.
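Transitivity can also be computed by hand from an adjacency matrix, as the ratio of closed triples (triangles) to connected triples. A toy sketch on four nodes with one triangle (not the visibility network):

```r
# Triangle on nodes 1-2-3 plus a pendant edge 3-4
A <- matrix(c(0, 1, 1, 0,
              1, 0, 1, 0,
              1, 1, 0, 1,
              0, 0, 1, 0), nrow = 4, byrow = TRUE)
triangles <- sum(diag(A %*% A %*% A)) / 6  # each triangle appears 6 times on the diagonal of A^3
triples <- sum(choose(rowSums(A), 2))      # connected triples centered at each node
3 * triangles / triples                    # transitivity = 0.6
```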

The visibility network constructed from Gulf hurricane counts has a transitivity of 0.563, which compares with a transitivity of 0.479 for the network constructed from Florida counts.

The network density is inversely related to interannual variance, but this rather large difference provides some evidence to support clustering of hurricanes in the vicinity of Florida relative to the Gulf coast region. An MC simulation helps you interpret the difference against the backdrop of random variations.

An important global property is the minimum spanning tree. A tree is a connected network that contains no closed loops. By 'connected' we mean that every year in the network is reachable from every other year via some path through the network (Newman 2010).

A tree is said to span if it connects all the years together. Every spanning tree of an \( N \)-node network has \( N-1 \) edges; in a weighted network the minimum spanning tree is the spanning tree with the smallest total edge weight, and a network may contain more than one. You compute a minimum spanning tree by typing

mst = minimum.spanning.tree(g)
net = network(get.edgelist(mst))

The result is an object of class igraph. This is converted to a network object by specifying the edge list in the network() function. You plot the network tree by typing