[3-1] Distributed non-linear effects over 1D Lags

Authors and affiliations

Thomas Cornulier (thomas.cornulier@bioss.ac.uk), Biomathematics & Statistics Scotland (https://www.bioss.ac.uk/)

Dave Miller (dave.miller@bioss.ac.uk), BioSS & UKCEH


The model

This vignette covers models where the response is a function of predictor values measured at a collection (vector) of regular distance increments, which may represent, for example, distance in space (spatial lags) or in time (time lags).

The model extends the basic signal regression model covered in vignette 2-1, by assuming that the effect of the predictor is non-linear and lag-dependent, i.e., varies smoothly with both the value of the predictor and the distance at which it is measured.

This model may be relevant in scenarios where:

  • the response variable depends (non-linearly) on environmental conditions experienced in the past (sometimes termed “memory”, “lagged” or “carry-over” effects, depending on context). Lags may or may not extend to the present.
  • the response variable depends (non-linearly) on environmental conditions over a neighborhood (the “zone of influence” of these environmental factors).
  • the response variable depends (non-linearly) on predictors measured over any ordered index (e.g., distance). A classic application is in spectrometry, where the whole series of fluorescence levels at increasing light frequencies can be used as the predictor.

and typically, where the relative relevance of different lags/distances/indices for predicting the response is unknown and needs to be inferred from the data.

Mathematical description

For each observation \(i\), the linear predictor includes the additive effects of a vector predictor \(X_{i}\) consisting of values \(x_{ij}\) measured at a range of regular distance increments \(d_{ij}\) forming the distance vector \(D_i\). In typical applications, \(D_i\) is invariant between observations, so index \(i\) can be omitted.

The model is of the form:

\[ \mathbb{E}(y_i) = g^{-1}\left( \beta_0 + \sum_j f(d_{j}, x_{ij})\right) \]

Function \(f(d_{j}, x_{ij})\) acts as a bivariate smooth interaction between \(d_j\) and \(x_{ij}\), varying smoothly with both the value of \(x\) and the distance \(d\). The shape of \(f(d_{j}, x_{ij})\) is typically unknown and needs to be estimated from the data, so the function should be flexible enough to fit the data well, while being sufficiently constrained to avoid overfitting. In this tutorial we represent \(f\) with tensor product smooths, since they meet these requirements and are easily implemented using standard software.

R implementation

The R implementation of the model, using package mgcv, is of the form:

thisModel<- gam(y ~ te(D, X), data= exampleData)

where y is a data frame column with \(N\) observations, and D and X are both \(N \times J\) matrices, with the number of columns \(J\) corresponding to the number of distance classes over which predictor \(X\) was measured.

Matrix D encodes the actual distance values, with equal intervals, and is constant across rows (if this is not the case, appropriate integration weights should be applied). The units chosen for expressing \(D\) are arbitrary and do not affect the predictions of the model (only its interpretation).

The summation convention applied in mgcv means that when the data fed to a smooth term are multi-column matrices, the smooth is evaluated at every column and the evaluations are summed over columns (after multiplication by the corresponding entries of any by= argument).
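The same convention is what allows integration weights to be supplied when the increments are not all equal: a weights matrix passed through the by= argument is multiplied element-wise with the evaluated smooth before the column sum. Below is a minimal sketch with hypothetical (irregular) distance values, not part of the example data:

# sketch with hypothetical distance values (not part of the example data):
# a weights matrix W, with the same dimensions as D and X, carries
# width-based integration weights for irregular increments
N<- 100                                  # number of observations
dists<- c(0, 5, 10, 20, 35, 55, 80, 110) # hypothetical irregular distances
widths<- c(diff(dists)[1], diff(dists))  # crude width-based integration weights
D<- matrix(dists, nrow= N, ncol= length(dists), byrow= TRUE)
W<- matrix(widths, nrow= N, ncol= length(widths), byrow= TRUE)
# the model would then be specified as
# thisModel_w<- gam(y ~ te(D, X, by= W), data= exampleData)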

The input data should have the following structure (illustrative example with 8 distances and 100 observations):

(here showing only the first 4 observations)

head(exampleData$y, n= 4)
[1] 1.45 1.61 3.29 2.10
head(exampleData$D, n= 4)
     D1 D2 D3 D4 D5 D6 D7 D8
[1,]  0  5 10 15 20 25 30 35
[2,]  0  5 10 15 20 25 30 35
[3,]  0  5 10 15 20 25 30 35
[4,]  0  5 10 15 20 25 30 35
head(exampleData$X, n= 4)
      X_D1  X_D2  X_D3  X_D4  X_D5  X_D6  X_D7  X_D8
[1,]  2.58  0.81  0.25 -0.75 -1.99 -2.05 -2.34 -1.04
[2,] -2.79 -2.54 -2.53 -0.34  0.03  1.93  0.61  1.70
[3,]  1.63  2.81  2.61  2.23  2.14  3.04  1.74  2.74
[4,]  0.41 -2.64 -2.34 -1.75 -3.27 -2.66 -2.44 -3.48

Visually (yellow is lowest value, blue highest):
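A display of this kind can be produced with a few lines of base R (a sketch, not necessarily the code used to make the figure):

# heatmaps of D and X: distance classes on the x-axis, observations on the y-axis
pal<- colorRampPalette(c("yellow", "blue"))(50)
par(mfrow= c(1, 2))
image(t(exampleData$D), col= pal, main= "D", axes= FALSE)
image(t(exampleData$X), col= pal, main= "X", axes= FALSE)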

Key features:

  • y (response) is typically a vector (it may have two columns for the binomial family, or more for likelihoods that make use of this, such as the multivariate normal).
  • D has one column per distance (index) value and is normally row-invariant.
  • X has the same dimensions as D and is NOT row-invariant. If there is correlation in the \(x_{ij}\) values between adjacent distance classes, the rows of \(X\) will tend to display serial correlation patterns, as in the illustrative example above.
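For readers who want a self-contained toy example with this structure, the following sketch simulates data of the same shape (the values and the true effect are invented, not the data shown above) and fits the model:

# simulate 100 observations measured over 8 regular distance classes
set.seed(1)
N<- 100; J<- 8
D<- matrix(seq(0, 35, by= 5), nrow= N, ncol= J, byrow= TRUE)
# rows of X are serially correlated across distance classes (random walk)
X<- t(apply(matrix(rnorm(N * J), N, J), 1, cumsum))
# a known non-linear, lag-dependent effect, summed over distance classes
f_true<- function(d, x) exp(-d / 20) * sin(x)
y<- rowSums(f_true(D, X)) + rnorm(N, sd= 0.5)
exampleData<- list(y= y, D= D, X= X)

library(mgcv)
thisModel<- gam(y ~ te(D, X), data= exampleData)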

Further notes

This model extends the “signal regression” model family, by allowing non-linear effects of \(X\) values, the effect of which also varies smoothly over the \(D\) increments.

Under typical circumstances (i.e., the \(d\) increments are regular and there are no missing values in \(X\)), the model is also a form of “distributed lag model”.

Illustration: analysis of kittiwake breeding success

The motivation for the model, its construction and interpretation are best shown by example. Background information about the kittiwake case study can be found in the introductory vignette (“1_Introduction”).

Specific research question

“Is there a (non-linear) effect of SST on kittiwake yearly breeding success, and how does it change across time lags?”

A distributed lag model with just temporal lags

A direct answer to the “Specific research question” can be given by a distributed lag model using SST values at suitable temporal lags as predictors.

Here, we will use weekly SST values and assume that the effect of SST does not extend beyond 30 weeks (~ 6 months) in the past.

For this example, we further assume that the influence of SST doesn’t extend further than 80 km from the breeding colony, and that the effect doesn’t vary over this distance range, so that SST values can be averaged over an 80 km radius.

Data preparation

# loading the data
# depending on the configuration of
# your system, one of these should work
try(load("kit_SST_sandeel.RData"), silent= T)
try(load("../kit_SST_sandeel.RData"), silent= T)

# we'll work with the smaller subset of 7 colonies
kit1<- kit2[kit2$Site %in% kit_sub1, ]

Modelling parameters

# number of weeks to use as predictors
nweeks<- 30
# time lags (in weeks until Jun 30th)
lags_ti<- nweeks:1

Preparing the predictor matrix (30 columns = weekly SST values). For each week, the average SST is calculated as the sum of SST values over all pixels within 80 km of the colony, divided by the number of non-NA pixels, for the relevant colony and year.

# SST temporal lag matrix only, assuming fixed buffer @80 km
# with SiteYears in rows and time lags (weeks since 1st Jan) in columns
SST_mean_80km_tlag<- t(SST_sum_buffer[, "80", kit1$SiteYear] / 
            SST_noNA_buffer[, "80", kit1$SiteYear])

# visualising the first 3 rows and 8 columns of the matrix:
round(SST_mean_80km_tlag, 2)[1:3, 1:8]
                         week
SiteYear                     1    2    3    4    5    6    7    8
  Coquet Island RSPB.1993 8.73 8.46 8.24 8.03 7.87 7.76 7.68 7.57
  Coquet Island RSPB.1999 9.61 9.25 9.05 8.86 8.85 8.53 8.34 8.29
  Coquet Island RSPB.2000 8.32 8.07 7.88 7.68 7.51 7.44 7.23 7.10

Now, the corresponding temporal lag index matrix. We find it convenient to annotate lags as negative week numbers, with -30 corresponding to the farthest week back in time (w/c 1st Jan of that year) and -1 corresponding to the most recent. Other annotation systems can be used without affecting the predictions of the model, as long as the intervals all remain equal.

tlag_mat<- t(matrix((-nweeks):(-1), nrow= nweeks, ncol= nrow(kit1)))

# visualising the first 3 rows and 8 columns of the matrix:
tlag_mat[1:3, 1:8]
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]  -30  -29  -28  -27  -26  -25  -24  -23
[2,]  -30  -29  -28  -27  -26  -25  -24  -23
[3,]  -30  -29  -28  -27  -26  -25  -24  -23

Visual inputs check (yellow is lowest value, blue highest):
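A quick sketch of how such a check could be produced (not necessarily the plotting code used for the figure):

# heatmaps of the two input matrices: lags on the x-axis, SiteYears on the y-axis
pal<- colorRampPalette(c("yellow", "blue"))(50)
par(mfrow= c(1, 2))
image(t(tlag_mat), col= pal, main= "tlag_mat", axes= FALSE)
image(t(SST_mean_80km_tlag), col= pal, main= "SST_mean_80km_tlag", axes= FALSE)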

Fitting the model

  • An offset offset(log(AON + 1)) is used to standardise the number of chicks by the number of monitored nests (see Introduction vignette)
  • a site random effect is included to account for unknown variation in the mean breeding success between colonies: s(Site, bs= "re")
  • Data are assumed to follow a Tweedie distribution with a log link, which helps modelling overdispersion in the data.

SST values at each of the 30 time lags are contained in matrix SST_mean_80km_tlag. Their (non-linear) effect on kittiwake breeding success is assumed to vary smoothly with the time lag itself (encoded in matrix tlag_mat), and is captured by the term te(tlag_mat, SST_mean_80km_tlag).

library(mgcv)
Loading required package: nlme
This is mgcv 1.9-1. For overview type 'help("mgcv-package")'.
m_t_DLM<- gam(Fledg ~ offset(log(AON + 1)) + s(Site, bs= "re") + 
                    te(tlag_mat, SST_mean_80km_tlag), 
                    data= kit1, family= tw(),
                    method= "REML")

Model output & interpretation

summary(m_t_DLM)

Family: Tweedie(p=1.351) 
Link function: log 

Formula:
Fledg ~ offset(log(AON + 1)) + s(Site, bs = "re") + te(tlag_mat, 
    SST_mean_80km_tlag)

Parametric coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -0.37383    0.09085  -4.115 6.34e-05 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Approximate significance of smooth terms:
                                  edf Ref.df     F p-value   
s(Site)                         4.632  6.000 2.873 0.00179 **
te(tlag_mat,SST_mean_80km_tlag) 3.357  4.065 1.618 0.16528   
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Rank: 31/32
R-sq.(adj) =  0.773   Deviance explained = 15.8%
-REML = 885.69  Scale est. = 6.2033    n = 161

The model summary suggests that there is clear variation in average breeding success between colonies, but that the effect of SST within 80 km on kittiwake breeding success isn’t significant at any time lag.

Plotting the estimated effects provides us with a richer interpretation:

par(mfrow= c(1, 2), cex.axis= 1.4, cex.lab= 1.5)
plot(m_t_DLM, scheme= 1, theta= 65)

The plot to the left shows a quantile-quantile plot of the random colony effects. With just 7 colonies, this is not terribly informative: some colonies do better than others (+/- 20% around the mean).

The plot to the right shows the time lag function, which is of greatest interest to us. On the x-axis is the time lag as we encoded it. The interaction between time lag and SST suggests that lag acts as a modifier of the non-linear effect of SST. Contrary to the basic linear signal regression model, knowing whether the function is above or below zero is now irrelevant, as it can no longer be interpreted as a linear regression coefficient. The estimated function suggests that more extreme (low or high) SST values tend to be associated with higher breeding success, and that their effect on kittiwake productivity tends to be maximal early in the year (week beginning 1st Jan).
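If the perspective plot is hard to read, the same surface can be drawn as a heatmap with overlaid contours using plot.gam's scheme= 2 option (a small optional sketch, not part of the original code):

# select= 2 picks the second smooth term (the tensor product);
# scheme= 2 draws it as a heatmap with contour lines
plot(m_t_DLM, select= 2, scheme= 2)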

Model validation

We should look at traditional GAM diagnostics, using deviance residuals.

par(mfrow= c(2, 2))
gam.check(m_t_DLM)


Method: REML   Optimizer: outer newton
full convergence after 5 iterations.
Gradient range [-3.529494e-05,5.089274e-05]
(score 885.6863 & scale 6.203337).
Hessian positive definite, eigenvalue range [3.521578e-05,182.3275].
Model rank =  31 / 32 

Basis dimension (k) checking results. Low p-value (k-index<1) may
indicate that k is too low, especially if edf is close to k'.

                                   k'   edf k-index p-value
s(Site)                          7.00  4.63      NA      NA
te(tlag_mat,SST_mean_80km_tlag) 24.00  3.36      NA      NA

When assuming linear effects of a predictor (here, SST), it’s often worth checking for unwanted trends in the residuals against predictor values. With SST broken down into so many distinct predictors (here, 30 lags), panel plots using a sliding window of lags can be an effective way of visualising the patterns.

m_t_DLM_res<- data.frame(resid= residuals(m_t_DLM), 
                    SST= as.vector(SST_mean_80km_tlag),
                    lag= rep((-nweeks):(-1), each= nrow(kit1)))

coplot(resid ~ SST | lag, data= m_t_DLM_res, 
        panel= panel.smooth, number= 12, lwd= 3)

Both the mean and variance of SST increase as we move away from winter (left to right, then bottom to top). The moving averages (red) suggest that there could be some remaining non-linearity in the relationship with SST, despite our attempt at capturing it in this model.
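One way to follow this up (a sketch only, not part of the analysis above) would be to give the tensor product more flexibility by increasing its marginal basis dimensions, and then repeat the residual checks:

# refit with larger marginal basis dimensions than the default used above
m_t_DLM_k8<- gam(Fledg ~ offset(log(AON + 1)) + s(Site, bs= "re") + 
                    te(tlag_mat, SST_mean_80km_tlag, k= c(8, 8)), 
                    data= kit1, family= tw(),
                    method= "REML")
# then re-run gam.check() and the residual coplot above to see whether
# the extra flexibility removes the remaining pattern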

Proposed exercises

  • Level 1: repeat the analysis above (temporal lags of SST). Parameters you could try varying include a fixed distance buffer of your choice and the coding of the lag values.
  • Level 2: fit a model with 1D spatial lags, of either
    • sandeel predictor sandeel_mean_ring (use object kit_sub1 for subsetting it to match the rows of data set kit1)
    • SST (fixing the temporal range to your liking)

In all cases, pay attention to the structure of the model inputs (vectors and matrices) and take some time to reflect on the interpretation of the model outputs.