[2-4] Cross-predictor interaction with lags

Authors

Thomas Cornulier (thomas.cornulier@bioss.ac.uk), Biomathematics & Statistics Scotland (https://www.bioss.ac.uk/)

Dave Miller (dave.miller@bioss.ac.uk), BioSS & UKCEH


The model

This vignette covers models where the response is a function of the interaction between two predictors, each measured at a collection (vector) of regular distance increments, which may represent distances in space (spatial lags) or into the past (time lags), for example.

The model assumes that the coefficients of the predictors (both the main effects and their interaction) vary smoothly with the distance at which they are measured.

This model may be relevant in scenarios where

  • the relative relevance of different lags/distances/indices for predicting the response is unknown and needs to be inferred from the data.
  • the effect of a predictor at a given distance depends on the value of the other predictor at the same distance.

Mathematical description

For each observation \(i\), the linear predictor includes the additive effects of two vector predictors \(X_{i}\) and \(Z_{i}\), consisting of values \(x_{ik}\) and \(z_{ik}\) measured at a range of regular distance increments \(d_{ik}\) forming the distance vector \(D_i\). In typical applications, \(D_i\) is invariant between observations, so the index \(i\) can be omitted.

Building on the conventional interaction model with predictors \(A\) and \(B\): \[ \mathbb{E}(y_i) = g^{-1}\left( \beta_0 + \beta_1 A_i + \beta_2 B_i + \beta_3 A_i B_i\right), \]

the lagged interaction model is of the form:

\[ \mathbb{E}(y_i) = g^{-1}\left( \beta_0 + \sum_k f_1(d_{k}) x_{ik} + \sum_k f_2(d_{k}) z_{ik} + \sum_k f_3(d_{k}) x_{ik} z_{ik}\right) \]

or in compact vector form,

\[ \mathbb{E}(y_i) = g^{-1}\left( \beta_0 + X_{i} f_1(D) + Z_{i} f_2(D) + X_{i} Z_{i} f_3(D)\right) \]

The functions \(f_{.}(d_k)\) act as coefficients for the \(x_{ik}\) and \(z_{ik}\) main effects and their interaction, varying smoothly with distance \(d_{k}\). The shape of each \(f_{.}(d_k)\) is typically unknown and must be estimated from the data, so the functions should be flexible enough to fit the data well, while being sufficiently constrained to avoid overfitting. In this tutorial we represent the \(f_{.}\) with penalized regression splines, since they meet these requirements and are easily implemented using standard software.

R implementation

The R implementation of the model, using package mgcv, is of the form:

thisModel<- gam(y ~ s(D, by= X) + s(D, by= Z) + s(D, by= XZ), data= exampleData)

where y is a vector of \(N\) response observations, and D, X, Z and XZ are all \(N \times K\) matrices, with the number of columns \(K\) corresponding to the number of distance classes over which the predictors were measured, and XZ = X * Z (the element-wise product).

Matrix D encodes the actual distance values, with equal intervals and constant across rows (if the intervals are not equal, appropriate integration weights should be applied, as sketched below). The units chosen for expressing \(D\) are arbitrary and do not affect the predictions of the model (only its interpretation).
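
As an aside, if the distance bands were of unequal width, the integration weights can simply be folded into the by= matrix. A minimal sketch with made-up numbers (not taken from the case study):

band_edges<- c(0, 10, 20, 40, 80, 120, 210) # 6 hypothetical bands of unequal width (km)
w<- diff(band_edges)                        # band widths, used as integration weights
n_obs<- 100
D_mid<- matrix(band_edges[-1] - w/2, n_obs, length(w), byrow= TRUE) # band mid-points
X_raw<- matrix(rnorm(n_obs * length(w)), n_obs, length(w)) # hypothetical lagged predictor
X_w<- sweep(X_raw, 2, w, "*")               # fold the band widths into the 'by' matrix
# the model term would then be s(D_mid, by= X_w) instead of s(D, by= X)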

The summation convention applied in mgcv means that when the data supplied to a smooth term are multi-column matrices, the term returns the sum of the function evaluations over all columns (after multiplication by the corresponding entry of any by= matrix).
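
The following self-contained sketch (simulated data; the names D_sim, X_sim and demoFit are ours, not part of the case study) shows this convention in action: the response depends on \(\sum_k f(d_k) x_{ik}\), and a single matrix-valued smooth term recovers \(f\):

library(mgcv)
set.seed(1)
n_sim<- 300; n_lags<- 8
D_sim<- matrix(seq(0, 35, by= 5), n_sim, n_lags, byrow= TRUE) # row-invariant distance matrix
X_sim<- matrix(rnorm(n_sim * n_lags), n_sim, n_lags)          # lagged predictor values
y_sim<- rowSums(sin(D_sim / 5) * X_sim) + rnorm(n_sim, sd= 0.3) # true f(d) = sin(d/5)
demoFit<- gam(y_sim ~ s(D_sim, by= X_sim, k= 6))
plot(demoFit) # estimated f(d); compare with sin(d/5)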

The input data should have the following structure (illustrative example with 8 distances and 100 observations):

(here showing only the first 4 observations)

head(exampleData$y, n= 4)
[1] 3.18 2.66 2.30 2.65
head(exampleData$D, n= 4)
     D1 D2 D3 D4 D5 D6 D7 D8
[1,]  0  5 10 15 20 25 30 35
[2,]  0  5 10 15 20 25 30 35
[3,]  0  5 10 15 20 25 30 35
[4,]  0  5 10 15 20 25 30 35
head(exampleData$X, n= 4)
      X_D1  X_D2  X_D3  X_D4  X_D5  X_D6  X_D7  X_D8
[1,] -0.39 -1.67 -1.33 -2.00 -2.93 -2.38 -3.06 -3.20
[2,] -5.04 -5.35 -3.04 -1.32 -1.50 -0.49  0.56 -0.32
[3,]  0.20  1.28  1.41  1.45  1.26  2.26  0.95  1.58
[4,]  1.92  1.05  1.03  1.57  1.43  0.16 -2.00 -0.72
head(exampleData$Z, n= 4)
      Z_D1  Z_D2  Z_D3  Z_D4  Z_D5  Z_D6  Z_D7  Z_D8
[1,]  0.63  0.30  0.14 -0.78 -0.29 -0.45  2.47  1.84
[2,] -0.49 -0.01  0.37 -0.06  0.57  1.71  0.30  0.21
[3,]  1.32  1.65  0.76  1.32  0.73 -1.99 -1.26 -2.37
[4,] -0.87 -0.06 -0.84 -1.75 -0.08  0.32 -1.02  0.48
head(exampleData$XZ, n= 4)
     XZ_D1 XZ_D2 XZ_D3 XZ_D4 XZ_D5 XZ_D6 XZ_D7 XZ_D8
[1,] -0.25 -0.50 -0.19  1.56  0.85  1.08 -7.57 -5.89
[2,]  2.46  0.06 -1.14  0.08 -0.85 -0.84  0.16 -0.07
[3,]  0.26  2.12  1.08  1.90  0.92 -4.49 -1.20 -3.73
[4,] -1.67 -0.06 -0.87 -2.74 -0.11  0.05  2.03 -0.35

Visually (yellow is lowest value, blue highest):

Key features:

  • y (response) is typically a vector (it may be a two-column matrix for the binomial family, or have more columns for likelihoods that make use of them, such as the multivariate normal).
  • D has one column per distance (index) value and is normally row-invariant.
  • X, Z and XZ have the same dimensions as D and are NOT row-invariant. If the \(x_{ik}\) or \(z_{ik}\) values are correlated between adjacent distance classes, rows of these matrices will tend to display serial correlation patterns, as in the illustrative example above. (A sketch of how to build these matrices from long-format data follows below.)
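
In practice, raw data often come in long format, with one row per observation-by-lag combination. A minimal reshaping sketch, using hypothetical column names (obs_id, lag, x, z) rather than anything from the case study:

long<- expand.grid(obs_id= 1:100, lag= seq(0, 35, by= 5)) # hypothetical long-format data
long$x<- rnorm(nrow(long))
long$z<- rnorm(nrow(long))
long<- long[order(long$obs_id, long$lag), ] # sort by observation, then lag
X<- matrix(long$x, nrow= 100, byrow= TRUE)  # one row per observation, one column per lag
Z<- matrix(long$z, nrow= 100, byrow= TRUE)
D<- matrix(long$lag, nrow= 100, byrow= TRUE) # row-invariant by construction
XZ<- X * Z                                   # element-wise product for the interaction term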

Illustration: analysis of kittiwake breeding success

The motivation for the model, its construction and interpretation are best shown by example. Background information about the kittiwake case study can be found in the introductory vignette (“1_Introduction”).

Specific research question

“At a given distance lag, is kittiwake yearly breeding success predicted by lag-specific:

  • presence of sandeels?
  • SST value?
  • an interaction between the effects of sandeel presence and SST?”

A signal regression model with just spatial lags

A direct answer to the “Specific research question” can be given by a signal regression using SST and sandeel values at suitable spatial lags, as predictors.

Here, we will use SST values averaged over weeks 16-25 of the year, taking pixel averages over 10 km distance bands, and assume that the effect of SST does not extend beyond 210 km from the colony.

Data preparation

# loading the data
# depending on the configuration of
# your system, one of these should work
try(load("kit_SST_sandeel.RData"), silent= T)
try(load("../kit_SST_sandeel.RData"), silent= T)

# we'll work with the smaller subset of 7 colonies
kit1<- kit2[kit2$Site %in% kit_sub1, ]
# corresponding sandeel data matrix (time-constant)
sandeel1<- sandeel_mean_ring[kit1$Site, ]

Modelling parameters

# distance lags (in km from colony)
lags_sp<- seq(0, 210, by= 10)

N<- nrow(kit1) # number of observations (site-years) in the colony subset
D<- length(lags_sp[-1]) # number of distance lags (21)

Preparing the predictor matrix (21 columns = mean SST values up to 210 km, by 10 km increments).

For each distance band (ring), average SST is calculated by first differencing the sums (and pixel counts) accumulated over successive nested buffers, and then dividing the resulting within-band sum of SST values by the number of non-NA pixels in the same band, for the relevant colony and year (see code below for more details).

# SST spatial lag matrix, assuming mean SST over weeks 16 to 25:
# compute mean per period
SST_sum_buffer_16_25weeks<- 
        apply(SST_sum_buffer[16:25, , kit1$SiteYear], 
                c(2, 3), mean)
SST_noNA_buffer_16_25weeks<- 
        apply(SST_noNA_buffer[16:25, , kit1$SiteYear], 
                c(2, 3), mean) # expecting little/no variation between weeks
# compute rings from difference between successive buffers
SST_sum_ring_16_25weeks<- apply(
        SST_sum_buffer_16_25weeks, 
                c(2), FUN= function(x){diff(c(0, x))})
SST_noNA_ring_16_25weeks<- apply(
        SST_noNA_buffer_16_25weeks, 
                c(2), FUN= function(x){diff(c(0, x))})
# compute rings mean from SSTsum/Npixels
SST_mean_ring_16_25weeks<- 
        t(SST_sum_ring_16_25weeks / SST_noNA_ring_16_25weeks)

# corresponding spatial lag index matrix
splag_mat<- t(matrix(lags_sp[-1], 
                nrow= length(lags_sp[-1]), ncol= nrow(kit1)))

head(round(SST_mean_ring_16_25weeks, 2), 3)
                         dist
SiteYear                     10    20    30    40    50    60    70    80    90
  Coquet Island RSPB.1993 10.55 10.59 10.60 10.80 10.77 10.73 10.75 10.80 10.77
  Coquet Island RSPB.1999 11.51 11.31 11.23 11.22 11.18 11.16 11.11 11.07 11.08
  Coquet Island RSPB.2000  8.31  8.37  8.36  8.42  8.52  8.63  8.71  8.82  8.84
                         dist
SiteYear                    100   110   120   130   140   150   160   170   180
  Coquet Island RSPB.1993 10.81 10.93 10.99 11.02 10.99 11.12 11.13 11.22 11.17
  Coquet Island RSPB.1999 11.13 11.15 11.18 11.22 11.29 11.34 11.43 11.49 11.61
  Coquet Island RSPB.2000  8.91  8.97  9.04  8.99  8.98  9.01  9.02  9.03  9.02
                         dist
SiteYear                    190   200   210
  Coquet Island RSPB.1993 11.29 11.18 11.14
  Coquet Island RSPB.1999 11.73 11.72 11.69
  Coquet Island RSPB.2000  9.01  9.01  9.06
head(splag_mat, 3)
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12] [,13] [,14]
[1,]   10   20   30   40   50   60   70   80   90   100   110   120   130   140
[2,]   10   20   30   40   50   60   70   80   90   100   110   120   130   140
[3,]   10   20   30   40   50   60   70   80   90   100   110   120   130   140
     [,15] [,16] [,17] [,18] [,19] [,20] [,21]
[1,]   150   160   170   180   190   200   210
[2,]   150   160   170   180   190   200   210
[3,]   150   160   170   180   190   200   210

Now, pre-compute the interaction term XZ = X * Z:

sandeel_SST_16_25_prod1<- sandeel1 * SST_mean_ring_16_25weeks

Visual inputs check (yellow is lowest value, blue highest):


There are many missing values in both the sandeel and SST data, leading to a loss of sample size.
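
One quick way (a sketch using the objects prepared above) to quantify the usable sample size is to count complete cases across the response and the two predictor matrices:

# number of observations with complete response, sandeel and SST information
sum(complete.cases(kit1$Fledg, sandeel1, SST_mean_ring_16_25weeks))
nrow(kit1) # total number of observations before dropping incomplete rows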

Fitting the model

  • An offset offset(log(AON + 1)) is used to standardise the number of chicks by the number of monitored nests (see Introduction vignette)
  • a site random effect is included to account for unknown variation in the mean breeding success between colonies: s(Site, bs= "re")
  • Data are assumed to follow a Tweedie distribution with a log link, which helps to model overdispersion in the data.

Average SST values for weeks 16-25 of the year, at each of the 21 distance lags, are contained in the matrix SST_mean_ring_16_25weeks. Modelled sandeel probabilities of presence (on the logit scale) are contained in the matrix sandeel1. Their interactive effect on kittiwake breeding success is assumed to vary smoothly with the distance lag itself (encoded in the matrix splag_mat), and is captured by the terms s(splag_mat, by= sandeel1), s(splag_mat, by= SST_mean_ring_16_25weeks) and s(splag_mat, by= sandeel_SST_16_25_prod1).

library(mgcv)
Loading required package: nlme
This is mgcv 1.9-1. For overview type 'help("mgcv-package")'.
msp_xp1<- gam(Fledg ~ offset(log(AON + 1)) + Year + s(Site, bs= "re") + 
                s(splag_mat, by= sandeel1, k= 12) + 
                s(splag_mat, by= SST_mean_ring_16_25weeks, k= 12) +           
                s(splag_mat, by= sandeel_SST_16_25_prod1, k= 12),           
                data= kit1, family= tw(),
                method= "REML")

Model output & interpretation

summary(msp_xp1)

Family: Tweedie(p=1.769) 
Link function: log 

Formula:
Fledg ~ offset(log(AON + 1)) + Year + s(Site, bs = "re") + s(splag_mat, 
    by = sandeel1, k = 12) + s(splag_mat, by = SST_mean_ring_16_25weeks, 
    k = 12) + s(splag_mat, by = sandeel_SST_16_25_prod1, k = 12)

Parametric coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) 18.312695  13.551248   1.351    0.181
Year        -0.008320   0.006757  -1.231    0.222

Approximate significance of smooth terms:
                                         edf Ref.df     F p-value  
s(Site)                               0.7339  1.000 2.777   0.053 .
s(splag_mat):sandeel1                 2.0059  2.007 0.822   0.443  
s(splag_mat):SST_mean_ring_16_25weeks 2.0000  2.000 1.641   0.201  
s(splag_mat):sandeel_SST_16_25_prod1  2.0003  2.000 1.624   0.205  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

R-sq.(adj) =  0.747   Deviance explained = 23.7%
-REML = 473.98  Scale est. = 0.58576   n = 79

The model summary does not support an effect of the interaction between SST and sandeel presence.
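
A complementary check, not part of the original analysis (a sketch only), is to refit the model without the interaction term and compare the two fits, for example via AIC:

# drop the interaction term and compare (sketch; output not shown)
msp_xp0<- update(msp_xp1, . ~ . - s(splag_mat, by= sandeel_SST_16_25_prod1, k= 12))
AIC(msp_xp0, msp_xp1)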

Plotting the estimated effects:

par(mfrow= c(1, 3))
plot(msp_xp1, scale= F, select= 2)
abline(h= 0, lty= 2, col= 2)
plot(msp_xp1, scale= F, select= 3)
abline(h= 0, lty= 2, col= 2)
plot(msp_xp1, scale= F, select= 4)
abline(h= 0, lty= 2, col= 2)

Should the effects be significant, the plots above would suggest that the sandeel presence and SST main effects go from near zero close to the colony to negative as distance increases, whereas the interaction becomes larger and positive with increasing distance.

Model validation

We should also look at traditional GAM diagnostics, using deviance residuals.

par(mfrow= c(2, 2))
gam.check(msp_xp1)


Method: REML   Optimizer: outer newton
full convergence after 7 iterations.
Gradient range [-0.0001666128,2.821341e-07]
(score 473.9829 & scale 0.5857614).
eigenvalue range [-2.278913e-09,66.73415].
Model rank =  42 / 42 

Basis dimension (k) checking results. Low p-value (k-index<1) may
indicate that k is too low, especially if edf is close to k'.

                                          k'    edf k-index p-value
s(Site)                                4.000  0.734      NA      NA
s(splag_mat):sandeel1                 12.000  2.006      NA      NA
s(splag_mat):SST_mean_ring_16_25weeks 12.000  2.000      NA      NA
s(splag_mat):sandeel_SST_16_25_prod1  12.000  2.000      NA      NA

There is a long left tail in the residual distribution, indicating that some observations are badly underpredicted. This isn't a great model for the data, despite its high \(R^2\).

When assuming linear effects of a predictor (here, SST), it's often worth checking for unwanted trends in the residuals against the predictor values. With SST broken down into so many distinct predictors (here, 21 lags), panel plots using a sliding window of lags can be an effective way of visualizing the patterns.

msp_xp1_res<- data.frame(resid= residuals(msp_xp1), 
                SST= as.vector(SST_mean_ring_16_25weeks[-msp_xp1$na.action, ]),
                sandeel= as.vector(sandeel1[-msp_xp1$na.action, ]),
                lag= rep(lags_sp[-1], each= nrow(kit1[-msp_xp1$na.action, ])))

coplot(resid ~ SST | lag, data= msp_xp1_res, 
        panel= panel.smooth, number= 12, lwd= 3)

coplot(resid ~ sandeel | lag, data= msp_xp1_res, 
        panel= panel.smooth, number= 12, lwd= 3)

The moving averages (red) suggest that there could be some non-linearity in the relationship with SST, which was not accounted for in this model. High sandeel1 values are only present at the shorter distances from the colonies, so great care is required when interpreting the model, and particularly the interaction, in this case.
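
One way (a sketch using the residual data frame built above) to check this coverage directly is to plot the distribution of the sandeel predictor within each lag:

boxplot(sandeel ~ lag, data= msp_xp1_res,
        xlab= "Distance lag (km)", ylab= "Sandeel presence (logit scale)")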

Simulations

Loading a function to generate random 1D smooth functions, which will allow us to simulate the “true”/known effect (coefficient) of predictor vector \(X\) measured over a series of \(K\) lags \(D\).

Function name: sim_spline_1D(k= 7, lambda= c(0.05, 0.005), inter= 0, x_l= 100)

Arguments:

  • k: spline basis size
  • lambda: a vector of length 2 specifying smoothing parameters ‘lambda’ for the 1D thin plate spline
  • inter: intercept value
  • x_l: number of x values at which to evaluate the 1D smooth

require(mgcv)
require(mvtnorm)
Loading required package: mvtnorm
sim_spline_1D<- function(k= 7, lambda= c(0.05, 0.005), inter= 0, x_l= 100) {
    # Define the spatial domain
    x<- seq(0, 1, length.out= x_l)
    resp<- rnorm(x_l)

    # Open a null file connection to discard the 'jagam()' text output
        # Check the operating system
    if (.Platform$OS.type == "unix") {
        null_device<- "/dev/null"
    } else { # Windows
        null_device<- "NUL"
    }

    con<- file(null_device, open= "wt")

    # generate a template gam model object for JAGS to extract the relevant components
    jd<- jagam(resp ~ s(x, bs= "tp", k= k), file= con, diagonalize= F)

    close(con)

    S1<- jd$jags.data$S1
    zero<- jd$jags.data$zero
    
    # penalty matrix
    K1<- S1[1:(k-1),1:(k-1)] * lambda[1] + 
            S1[1:(k-1),k:(2*k-2)] * lambda[2]

    # simulate multivariate normal basis coefficients
    beta_sim<- c(inter, mvtnorm::rmvnorm(n= 1, mean= zero[2:k], sigma= K1, method= "chol"))

    # evaluate simulated function at x values
    fct_sim<- jd$jags.data$X %*% beta_sim

    return(fct_sim)
}

# example
# sim_1D<- sim_spline_1D(k= 7, lambda= c(0.05, 0.005), inter= 0, x_l= 100)
# plot(sim_1D, type= "l")

Now, a function to simulate data from the model, given the known parameter values.

Output values:

  • X: lagged predictor matrix
  • Z: lagged predictor matrix
  • XZ: product of X and Z
  • Lag: lag-encoding matrix
  • tf1, tf2, tf3: true lagged-effect functions for the main effect of X, the main effect of Z, and the XZ interaction, respectively
  • Y: response vector
# function to simulate 1D-lagged process with cross-predictor interaction
sim_Xpred_lag_of_1D<- function(N= 200, nlags= 20, residSD= 1){
    # simulate time-varying predictor, where each column represents a time lag w.r.t. the observation
    X<- matrix(runif(N*nlags), N, nlags)
    Z<- matrix(runif(N*nlags), N, nlags)
    XZ<- X * Z
    # true coefficient functions: main X effect
    tf1<- sim_spline_1D(x_l= nlags)
    # true coefficient functions: main Z effect
    tf2<- sim_spline_1D(x_l= nlags)
    # true coefficient functions: XZ interaction
    tf3<- sim_spline_1D(x_l= nlags)
    # Linear predictor
    LP<- X %*% tf1 + Z %*% tf2 + XZ %*% tf3
    # observations
    Y<- rnorm(n= N, mean= LP, sd= residSD)
    # index matrix
    Lag<- matrix(0:(nlags-1), N, nlags, byrow= T)
    list(X= X, Z= Z, XZ= XZ, Y= Y, tf1= tf1, tf2= tf2, tf3= tf3, Lag= Lag)
}

Let’s simulate 4 random data sets from 4 different sets of random model parameters, and plot the estimates of \(f_.(d)\) (labelled “f.(Lag)”) with their 95% confidence intervals (black), together with the true values (red). The x-axis shows \(d\) (labelled “Lag”).

par(mfcol= c(3, 4))
set.seed(53) # for reproducibility
for(i in 1:4){
  dat<- sim_Xpred_lag_of_1D(N= 200, nlags= 20, residSD= 1)
  mod<- gam(Y ~ s(Lag, by= X, k= ncol(dat$Lag)-1) +
                s(Lag, by= Z, k= ncol(dat$Lag)-1) +
                s(Lag, by= XZ, k= ncol(dat$Lag)-1), data= dat)

  plot(mod, se= 1.96, select= 1, main= paste("Simulation", i, "\nX main effect"), ylab= "f1(Lag)")
  lines(0:(ncol(dat$Lag)-1), dat$tf1, col= 2)
  abline(h= 0, lty= 3, col= grey(0.6))

  plot(mod, se= 1.96, select= 2, main= paste("Simulation", i, "\nZ main effect"), ylab= "f2(Lag)")
  lines(0:(ncol(dat$Lag)-1), dat$tf2, col= 2)
  abline(h= 0, lty= 3, col= grey(0.6))

  plot(mod, se= 1.96, select= 3, main= paste("Simulation", i, "\nXZ interaction"), ylab= "f3(Lag)")
  lines(0:(ncol(dat$Lag)-1), dat$tf3, col= 2)
  abline(h= 0, lty= 3, col= grey(0.6))
}

In all cases, the estimated functions \(f_.(d)\), the effects of \(X\), \(Z\) and their interaction across lags \(D\), are well recovered by the model, with the true values contained in the 95% CI of the estimates most of the time.

Proposed exercises

  • Level 1: repeat the analysis above (spatial lags of SST and sandeel).
  • Level 2: fit a model with the effects of two temporal layers of SST (of your choice) interacting, over a range of spatial lags (a possible starting skeleton is sketched below).

In all cases, pay attention to the structure of the model inputs (vectors and matrices) and take some time to reflect on the interpretation of the model outputs.
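
For the Level 2 exercise, a possible starting skeleton is given below. The object SST_mean_ring_26_35weeks is hypothetical: it does not exist in the prepared data and would need to be computed in the same way as SST_mean_ring_16_25weeks, for a second weekly window of your choice.

# skeleton only (hypothetical objects); uncomment and adapt once the second
# SST window has been prepared as above
# SST_A<- SST_mean_ring_16_25weeks
# SST_B<- SST_mean_ring_26_35weeks
# SST_AB<- SST_A * SST_B
# msp_ex2<- gam(Fledg ~ offset(log(AON + 1)) + Year + s(Site, bs= "re") +
#                 s(splag_mat, by= SST_A, k= 12) +
#                 s(splag_mat, by= SST_B, k= 12) +
#                 s(splag_mat, by= SST_AB, k= 12),
#                 data= kit1, family= tw(), method= "REML")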