Running Event-Based GLMs in SPM

Today I’ll jot down some notes for running event-based GLMs in SPM. These may be a bit more scattered than usual, but hopefully they’ll be useful for documenting the steps for running basic GLMs.

First, you will want to create a stimulus onset file for extracting the betas from your nifti files. I’ve posted about stimulus onsets (SOTS) previously, and you can reference this page on Andy’s Brain Blog for more detail on why timing files are important.

makeSOTS.m

Below is a code chunk from a larger script that consolidates creating SOTS files for the different GLMs (TCB_makeSOTS.m). Basically, there are 4 regressors we care about (cue onsets for the 4 conditions), plus a task-bug interval regressor, which serves as a nuisance regressor because we want to remove the BOLD signal from intervals where there was a task bug in the code (oops). The script creates a .mat file for each subject; for subjects without the task bug (the study code was fixed halfway through the study), the nuisance regressor is simply omitted. These .mat files will be fed into the batch script for specifying the model in SPM.
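For reference, here is a minimal sketch of the format SPM expects for a “Multiple conditions” file: three cell arrays named names, onsets, and durations, saved into a .mat file. The condition names, onset times, and file name below are hypothetical placeholders, not my actual values.

```matlab
% A minimal sketch of an SPM "multiple conditions" file
% (condition names, onsets, and file name are hypothetical placeholders).
names     = {'Cue_Cond1', 'Cue_Cond2', 'Cue_Cond3', 'Cue_Cond4', 'TaskBug'};
onsets    = {[12 48 96], [24 60 108], [36 72 120], [6 84 132], 140};  % in seconds
durations = {0, 0, 0, 0, 8};   % 0 = event; the bug interval is modeled as an epoch
save('sub01_TCB_Event_SOTS.mat', 'names', 'onsets', 'durations');
```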

Here’s an example of a “Parametric modulator” GLM.
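In SPM’s multiple-conditions format, a parametric modulator is specified with a pmod structure saved alongside names/onsets/durations. Here is a hedged sketch extending the file above; the modulator name and values are hypothetical.

```matlab
% A hedged sketch of adding a parametric modulator to condition 1
% (the modulator name and values are hypothetical).
pmod(1).name  = {'reward_level'};   % modulator for Cue_Cond1
pmod(1).param = {[1 3 2]};          % one value per onset of that condition
pmod(1).poly  = {1};                % first-order (linear) modulation
save('sub01_TCB_Pmod_SOTS.mat', 'names', 'onsets', 'durations', 'pmod');
```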

Finally, here is the GLM for “AllIntervals”, which extracts the beta estimates for cue, interval, and feedback for each interval. This will be most useful if we want to do follow-up analyses in R or another software program down the road.
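The conditions file follows the same format; the sketch below shows how each interval contributes its own cue/interval/feedback regressors (the number of intervals is a placeholder).

```matlab
% A hedged sketch of the "AllIntervals" condition names: one regressor per
% event type per interval (the number of intervals is a placeholder).
nIntervals = 12;
names = cell(1, 3*nIntervals);
for i = 1:nIntervals
    names{3*i-2} = sprintf('Cue_Int%d', i);
    names{3*i-1} = sprintf('Interval_Int%d', i);
    names{3*i}   = sprintf('Feedback_Int%d', i);
end
```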

I’ve done this for both “Event” and “Parametric modulator” GLMs, as well as the “AllIntervals” GLM, and here are the SOTS files for all of the GLMs I’ve set up. You can ignore any of them that have “Event_AllIntervPmod” in them; the bottom 10 are the relevant GLMs for the current analysis (e.g., TCB_Event… and TCB_Pmod…).

makeRegressor.m

Next, we run a “makeRegressor.m” script that creates the mean intercepts across the different runs (M1–M7) and the trends across the different runs (T1–T8). This helps account for run-level differences between the runs (for a given run X, MX = 1), as well as any trends or drift within each run (a vector scaled from -1 to 1, with one value per TR (repetition time) in each nifti). In my current study, the number of TRs = 310 for each run. Below is an example of how these matrices are created. Note that even though there are 8 runs, there are only 7 mean intercepts. That is because the Level 1 analysis within SPM estimates a global mean, so an intercept for every run would be redundant (perfectly collinear with the global mean). Thus, you should have 2n-1 columns, where n = the number of runs. This file is important because it will be fed into the fMRI model specification and estimation later.
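Here is a minimal sketch of how such a matrix can be built (variable and file names are mine; SPM’s “Multiple regressors” option accepts a .mat file containing a matrix named R):

```matlab
% A minimal sketch of building the mean-intercept and trend regressors
% (8 runs of 310 TRs, as in this study; variable and file names are mine).
nRuns = 8; nTRs = 310;
R = zeros(nRuns*nTRs, 2*nRuns - 1);   % 7 intercepts + 8 trends = 15 columns
for r = 1:nRuns
    rows = (r-1)*nTRs + (1:nTRs);
    if r < nRuns
        R(rows, r) = 1;               % M1-M7 (last run's mean is absorbed by SPM's global mean)
    end
    R(rows, nRuns-1 + r) = linspace(-1, 1, nTRs);   % T1-T8: linear trend within run r
end
save('sub01_multi_regressors.mat', 'R');   % SPM's "Multiple regressors" field expects a variable named R
```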

runLevel1_fMRIModelSpec_Estimate.m

Next, we will want to create the batch file that feeds into SPM. You will need to specify a few variables, which can be done from the GUI to generate these batch files. Because I already generated these from the GUI a while back, I’m going to use the scripts directly, but more info about batch processing with the GUI can be found here.
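For reference, here is a hedged sketch of the core of a model specification and estimation batch. The paths, file names, and TR are placeholders, and the single concatenated session is my assumption (it is what makes the hand-built run intercepts/trends necessary, since SPM would otherwise add a constant per session).

```matlab
% A hedged sketch of the Level 1 model spec + estimation batch
% (paths and TR are placeholders; assumes all 8 runs are concatenated
% into one session, hence the hand-built run intercepts/trends).
glmDir   = '/path/to/sub01/glm1';     % hypothetical output directory
niiDir   = '/path/to/sub01/func';     % hypothetical data directory
allScans = cellstr(spm_select('ExtFPList', niiDir, '^sub01.*\.nii$', 1:2480));  % 8 x 310 volumes
matlabbatch{1}.spm.stats.fmri_spec.dir            = {glmDir};
matlabbatch{1}.spm.stats.fmri_spec.timing.units   = 'secs';
matlabbatch{1}.spm.stats.fmri_spec.timing.RT      = 2;    % TR in seconds (placeholder)
matlabbatch{1}.spm.stats.fmri_spec.sess.scans     = allScans;
matlabbatch{1}.spm.stats.fmri_spec.sess.multi     = {'sub01_TCB_Event_SOTS.mat'};     % from makeSOTS.m
matlabbatch{1}.spm.stats.fmri_spec.sess.multi_reg = {'sub01_multi_regressors.mat'};   % from makeRegressor.m
matlabbatch{2}.spm.stats.fmri_est.spmmat          = {fullfile(glmDir, 'SPM.mat')};
```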

Importantly, you will want to make sure that SPM is initialized if you plan to run these analyses on the cluster from the command line with parallel processing.
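These are the standard SPM calls for headless execution (this is SPM’s documented batch interface, not anything specific to my scripts):

```matlab
% Standard initialization for running SPM batches without the GUI:
spm('defaults', 'fmri');
spm_jobman('initcfg');
spm_jobman('run', matlabbatch);   % runs the batch built above
```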

runLevel1_contrast.m

I haven’t modified these contrast files yet, but we can additionally specify contrasts of interest. This requires putting “1” and “-1” values on the regressors that enter the contrast, as well as including zeros for the nuisance regressors (task bug, mean intercepts, trends).
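As a hedged sketch, a contrast batch for glm1 might look like the following; the contrast name and the assumed column order (4 cues, task bug, M1–M7, T1–T8, then SPM’s constant) are placeholders based on the design described above.

```matlab
% A hedged sketch of a t-contrast batch (contrast name and column order
% are assumptions: 4 cues, task bug, then the 15 nuisance regressors).
matlabbatch{1}.spm.stats.con.spmmat = {'/path/to/sub01/glm1/SPM.mat'};
matlabbatch{1}.spm.stats.con.consess{1}.tcon.name    = 'Cond1_gt_Cond2';
matlabbatch{1}.spm.stats.con.consess{1}.tcon.weights = [1 -1 0 0 0 zeros(1,15)];
matlabbatch{1}.spm.stats.con.consess{1}.tcon.sessrep = 'none';
matlabbatch{1}.spm.stats.con.delete = 0;   % keep any existing contrasts
spm_jobman('run', matlabbatch);
```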

runLevel1_parallel.sh

Finally, we will want to run these scripts using our bash script. I have set this up with sbatch headers so that we can parallelize the Level 1 analyses, with each subject run as a separate job (an array is created for the participant numbers); a minimal sketch of the header follows the list below. I’ve also added some echo statements with notes so that we can read the log file and understand what is happening while the script is running. Once we have clarified the GLMs we want to run, we can call all of these MATLAB scripts to do the following:

  1. make SOTS files
  2. make Regressor files
  3. runLevel1 Model Spec Estimate
  4. runLevel1 contrast
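Here is a hedged sketch of what the sbatch header and MATLAB call might look like. The job settings (array size, time, memory) and the function-style entry points are assumptions, not my exact script.

```bash
#!/bin/bash
# A hedged sketch of the sbatch header (array size, time, and memory are placeholders).
#SBATCH --job-name=TCB_Level1
#SBATCH --array=1-30            # one job per subject ID
#SBATCH --time=02:00:00
#SBATCH --mem=8G
#SBATCH --output=logs/level1_%a.out

echo "Starting Level 1 pipeline for subject ${SLURM_ARRAY_TASK_ID}"
# Hypothetical function-style entry points into the MATLAB scripts:
matlab -nodisplay -nosplash -r "makeSOTS(${SLURM_ARRAY_TASK_ID}); makeRegressor(${SLURM_ARRAY_TASK_ID}); runLevel1_fMRIModelSpec_Estimate(${SLURM_ARRAY_TASK_ID}); runLevel1_contrast(${SLURM_ARRAY_TASK_ID}); exit"
echo "Finished subject ${SLURM_ARRAY_TASK_ID}"
```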

Note that Steps 1+2 are redundant, since they re-create the files for each subject every single time for the same GLM, but the redundancy doesn’t hurt and I haven’t yet come up with a more clever or more efficient way to do this. Maybe I could add an if-statement so they only run if the files don’t already exist for a given GLM (see the sketch below), but if I need to re-run these regressors for any reason it might be useful to have them regenerated every time.
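One possible guard would be something like this (the file, variable, and function names are hypothetical):

```matlab
% Only regenerate the SOTS file when it is missing (names are hypothetical):
if ~exist(sotsFile, 'file')
    makeSOTS(subjectID, glmName);
end
```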

To run the scripts, we just submit the bash script from the command line.
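Since the script already carries its sbatch headers, submitting it queues the whole array:

```bash
sbatch runLevel1_parallel.sh   # submits one array task per subject
```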

Also, a useful tip for maximizing resources is to check the efficiency of your jobs once they have finished.
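On SLURM clusters, the accounting tools report how much of the requested CPU and memory a job actually used; seff is available on many (though not all) clusters, and sacct is part of core SLURM:

```bash
seff <jobid>                                              # summary of CPU and memory efficiency
sacct -j <jobid> --format=JobID,Elapsed,MaxRSS,TotalCPU   # raw accounting fields
```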

Beta Estimates / Nifti Output

When we are done running the Level 1 analysis, we will get beta files for the regressors from the model. Here, for glm1, betas 1-5 will be the task regressors (4 cues + the task-bug regressor), followed by the 15 mean intercept and trend regressors, and then the 1 global mean regressor (5 + 15 + 1 = 21 betas).

We can confirm this in the SPM.mat file, in the variable “SPM.xX.X”, which should contain the values of the design matrix.
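A quick way to check, from the subject’s GLM directory (the expected column count assumes glm1’s 21 regressors):

```matlab
% Sanity check on the design matrix:
load('SPM.mat');
size(SPM.xX.X)   % rows = volumes, columns = 21 regressors for glm1
SPM.xX.name'     % regressor names, in column order
```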

We can open this in SPM as well (SPM > Review > SPM.mat), which will display the design matrix for the GLM of interest. We want to check the bottom row of “parameter estimability”: if any of the boxes are gray, it suggests collinearity (the regressors overlap with one another), which is problematic for running the GLM and estimating the variance attributed to specific regressors.

Okay, I think that is it for now. Will add additional notes as I continue to run analyses. 🙂
