Running Event-Based GLMs in SPM

Today I’ll jot down some notes for running event-based GLMs in SPM. These may be a bit more scattered than usual, but hopefully they will be useful as documentation of the steps for running basic GLMs.

First, you will want to create a stimulus onset file for extracting the betas from your nifti files. I’ve posted about stimulus onsets (SOTS) previously, and you can reference this page on Andy’s Brain Blog for more detail on why timing files are important.

makeSOTS.m

Below is a code chunk from a larger script (TCB_makeSOTS.m) that consolidates creating SOTS files for the different GLMs. Basically, there are 4 regressors we care about (cue onsets for the 4 conditions), plus a task bug interval regressor, which serves as a nuisance regressor because we want to remove the BOLD signal from intervals where there was a task bug in the code (oops). This will create a .mat file for each subject (subjects without the task bug omit the nuisance regressor, since the study code was fixed halfway through the study), which will be fed into the batch script for specifying the model in SPM.
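For reference, the core of a SOTS file is just three cell arrays saved to a .mat. Here is a minimal sketch (the condition labels, onset variables, and file name are placeholders; the real logic lives in TCB_makeSOTS.m):

% Minimal SOTS sketch: one cell entry per condition
names     = {'Cue1', 'Cue2', 'Cue3', 'Cue4', 'TaskBug'};                 % condition labels (illustrative)
onsets    = {cue1Onsets, cue2Onsets, cue3Onsets, cue4Onsets, bugOnsets}; % onset vectors in seconds (placeholders)
durations = {0, 0, 0, 0, 0};                                             % 0 = model as punctate events
save('TCB_Event_sots.mat', 'names', 'onsets', 'durations');              % file name is illustrative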

Here’s an example of a Parametric modulator GLM:
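The gist is that the same SOTS file gains a pmod struct alongside names/onsets/durations. Below is a minimal sketch, with a hypothetical trial-level modulator rtVector attached to the first condition:

pmod(1).name  = {'RT'};     % modulator label (hypothetical)
pmod(1).param = {rtVector}; % one value per onset in onsets{1}
pmod(1).poly  = {1};        % 1st-order (linear) modulation
save('TCB_Pmod_sots.mat', 'names', 'onsets', 'durations', 'pmod'); % file name is illustrative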

Finally, here is the GLM for “AllIntervals”, which extracts the beta estimates for cue, interval, and feedback for each interval. This will be most useful if we want to do follow-up analyses in R or another software program down the road.

I’ve done this for both the “Event” and “Parametric modulator” GLMs, as well as the “AllIntervals” GLM, and here are the SOTS files for all of the GLMs I’ve set up. You can ignore any of them that have “Event_AllIntervPmod” in the name; the bottom 10 are the relevant GLMs for the current analysis (e.g., TCB_Event… and TCB_Pmod…).

makeRegressor.m

Next, we run a makeRegressor.m script that creates the mean intercepts across the different runs (M1–M7) and the trends across the different runs (T1–T8). This helps account for run-level differences between the runs (for a given run X, MX = 1), as well as any trends or drift within each run (a scaled vector from -1 to 1, with one bin per TR (repetition time) in each nifti). In my current study, the number of TRs = 310 for each run. Below is a screenshot to demonstrate how these matrices are created. Note that even though there are 8 runs, there are only 7 mean intercepts. That is because the Level 1 analysis within SPM estimates a global mean, so we do not need to account for it. Thus, you should have (2n-1) columns, where n = the number of runs. Overall, this file is important because these regressors will be fed into the fMRI Model Spec Estimate later.
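As a rough sketch of how such a matrix could be built (the variable and file names here are illustrative, not the ones in makeRegressor.m; note that SPM’s “multiple regressors” option expects the matrix to be saved in a variable named R):

nRuns = 8;                                   % runs in this study
nTR   = 310;                                 % TRs per run
M = kron(eye(nRuns), ones(nTR, 1));          % one intercept column per run
M = M(:, 1:nRuns-1);                         % drop one run; SPM estimates the global mean
T = kron(eye(nRuns), linspace(-1, 1, nTR)'); % linear trend within each run
R = [M T];                                   % (nRuns*nTR) x (2*nRuns - 1) nuisance matrix
save('TCB_runRegressors.mat', 'R');          % fed into the fMRI Model Spec later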

runLevel1_fMRIModelSpec_Estimate.m

Next, we will want to create the batch file that feeds into SPM. You will need to specify a few variables, which can be done from the GUI to generate these batch files. Because I already generated these from the GUI a while back, I’m going to use the scripts directly. But more info about batch processing with the GUI can be found here.

Importantly, you will want to make sure that SPM is initialized if you plan to run these analyses on the cluster using parallel processing and the command line.
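For non-interactive runs (e.g., matlab -nodisplay inside an sbatch job), the standard incantation before running a batch is:

spm('defaults', 'fmri');        % load the default fMRI settings
spm_jobman('initcfg');          % initialize the batch system
spm_jobman('run', matlabbatch); % run the assembled batch without the GUI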

runLevel1_contrast.m

I haven’t modified these contrast files yet, but we can additionally specify contrasts of interest. This requires putting “1” and “-1” values on the regressors of interest, as well as including zeros for the nuisance regressors (task bug, mean intercepts, trends).
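As a rough sketch, a t-contrast in the batch might look like the following, assuming glm1’s column order (4 cues, task bug, 15 run regressors, constant); the contrast name, weights, and glmDir path are all illustrative:

matlabbatch{1}.spm.stats.con.spmmat = {fullfile(glmDir, 'SPM.mat')};              % path to the Level 1 output (placeholder)
matlabbatch{1}.spm.stats.con.consess{1}.tcon.name    = 'Cue1 > Cue2';             % hypothetical contrast
matlabbatch{1}.spm.stats.con.consess{1}.tcon.weights = [1 -1 0 0 0 zeros(1, 15)]; % cues, task bug, then nuisance zeros
matlabbatch{1}.spm.stats.con.consess{1}.tcon.sessrep = 'none';                    % single (concatenated) session
matlabbatch{1}.spm.stats.con.delete = 1;                                          % overwrite existing contrasts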

runLevel1_parallel.sh

Finally, we will want to run these scripts using our bash script. I have set this up with sbatch headers so that we can parallelize the Level 1 analyses, with each subject run as a separate job (an array is created over the participant numbers). I’ve also added some “echo” statements with notes so that we can read the log file and understand what is happening while the script is running. Once we have finalized the GLMs we want to run, we can call all of these MATLAB scripts to do the following (a sketch of the wrapper follows the list):

  1. make SOTS files
  2. make Regressor files
  3. runLevel1 Model Spec Estimate
  4. runLevel1 contrast
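Here is a minimal sketch of the sbatch wrapper; the array range, walltime, and module name are assumptions for your cluster, and it treats the .m scripts above as functions that accept a subject ID (also an assumption):

#!/bin/bash
#SBATCH -J level1       # job name
#SBATCH --array=1-30    # one task per participant (range is an assumption)
#SBATCH -t 2:00:00      # walltime per subject (assumption)
echo "Starting Level 1 pipeline for subject ${SLURM_ARRAY_TASK_ID}"
module load matlab      # module name may differ on your cluster
matlab -nodisplay -r "makeSOTS(${SLURM_ARRAY_TASK_ID}); makeRegressor(${SLURM_ARRAY_TASK_ID}); runLevel1_fMRIModelSpec_Estimate(${SLURM_ARRAY_TASK_ID}); runLevel1_contrast(${SLURM_ARRAY_TASK_ID}); exit"
echo "Finished subject ${SLURM_ARRAY_TASK_ID}"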

Note that Steps 1+2 are redundant, since they re-create the files for each subject every single time for the same GLM, but the redundancy doesn’t hurt and I haven’t yet come up with a cleverer or more efficient way to do this. Maybe I could add an if-statement so these steps only run if the files don’t already exist for a GLM, but if I ever need to re-run these regressors for any reason it is useful to have them in there.

To run the scripts, we just submit the bash script from the command line (e.g., sbatch runLevel1_parallel.sh).

Also, a useful tip for maximizing resources is to check the efficiency of your jobs (on Slurm clusters, the seff utility, if installed, reports the CPU and memory efficiency of completed jobs).

Beta Estimates / Nifti Output

When we are done running the Level 1 analysis, we will get beta files for the regressors from the model. Here, for glm1, betas 1-5 will be the task regressors (4 cues + the task bug regressor), followed by the 15 mean intercept and trend regressors, followed by 1 global mean regressor (5 + 15 + 1 = 21 betas).

We can confirm this in the SPM.mat file, in the variable “SPM.xX.X”, which should contain the values of the design matrix.
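A quick way to eyeball this from the MATLAB command line (run from the GLM output directory):

load('SPM.mat'); % created by the model estimation step
size(SPM.xX.X)   % rows = scans, columns = regressors (21 for glm1)
SPM.xX.name'     % regressor names, in the same order as the beta files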

We can open this in SPM as well (SPM > Review > SPM.mat), which should display the design matrix for the GLM of interest. We want to check the bottom row of “parameter estimability”, because if any of the boxes are gray it suggests collinearity (the regressors overlap with one another), which is problematic for running the GLM and for estimating the variance attributed to specific regressors.

Okay, I think that is it for now. Will add additional notes as I continue to run analyses. 🙂

What is orthogonalization and how do you implement it in SPM?

Over the past few days, I’ve been learning about how to deal with orthogonalization of regressors in fMRI Level 1 GLMs. Consequently, I’ve fallen deep into a new rabbit hole, and I figured it would be useful to the universe to share my collated knowledge from the last 24 hours.

For more leisurely learning, I would highly recommend Jeanette Mumford’s excellent video tutorial on orthogonalization, and on why it is not ideal for dealing with collinearity among your regressors.

Anyway, it turns out that (as of June 2021) orthogonalization of parametric modulators is turned ON by default in SPM12, in case you were wondering. So we’re going to want to turn this off in our Level 1 GLMs. If you set up your models via the batch editor, you can just do this manually. Pragmatically, however, this is incredibly tedious, and you may want to just tweak existing batch scripts you’ve already generated… for example. 😅

After much googling for how to actually change these settings via batch scripting, I stumbled across this useful link on the SPM forum, where some folks were discussing how to turn off orthogonalization of parametric modulator regressors in SPM12. In brief, there are two ways to do this.

Option 1: Set orthogonalization on and off flexibly within each condition

To accomplish this, you will need to define each condition separately in your matlabbatch script. This is tenable if you are entering your conditions manually in the GUI, or even in a script with few conditions, but it gets unwieldy fast depending on the number of conditions you want to set.

matlabbatch{2}.spm.stats.fmri_spec.sess(1).cond(1).name = 'ConditionName';         % condition label
matlabbatch{2}.spm.stats.fmri_spec.sess(1).cond(1).onset = yourOnsetVector;        % onsets in seconds
matlabbatch{2}.spm.stats.fmri_spec.sess(1).cond(1).duration = 0;                   % 0 = model as events
matlabbatch{2}.spm.stats.fmri_spec.sess(1).cond(1).pmod(1).name = 'PmodName';      % parametric modulator label
matlabbatch{2}.spm.stats.fmri_spec.sess(1).cond(1).pmod(1).param = yourPmodVector; % one value per onset
matlabbatch{2}.spm.stats.fmri_spec.sess(1).cond(1).pmod(1).poly = 1;               % 1st-order (linear) modulation
matlabbatch{2}.spm.stats.fmri_spec.sess(1).cond(1).orth = 0;                       % 0 = orthogonalization off, 1 = on

Here’s a screenshot of my whole batch, for reference, in case you were wondering where this code snippet sits with respect to the other lines of code and spm commands.

Option 2: Set orthogonalization on and off flexibly using the “multiple conditions” option

Here, this assumes that you’ve nicely set up your stimulus onsets in a .mat file, with cell arrays for the “names”, “onsets”, and “durations” variables plus a “pmod” variable for your model (note that pmod should be a struct). You will also want to define an “orth” cell array (true/false or 1/0) indicating which of the conditions you would like to be orthogonalized. If you want to remove orthogonalization for all of them, this is pretty straightforward: set all of the array entries to 0. One additional note: orth() is technically a function in MATLAB (it returns an orthonormal basis for a matrix), and while you can assign values to a variable named orth without any problem, you cannot call orth by itself with no arguments, or you will run into errors.

Here is a screenshot of all of these arrays. Basically, the important thing to keep in mind is that the number of columns should be the same across all of your array/struct variables, and they should correspond to your conditions. There’s a bit more coding that goes into how I’ve set this up, but I have not gone into that here.
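To make this concrete, here is a minimal sketch of what the saved .mat might contain; the condition names, onset vectors, and modulator are all placeholders:

names     = {'Cue', 'Interval', 'Feedback'};  % condition labels (illustrative)
onsets    = {cueOnsets, intOnsets, fbOnsets}; % onset vectors in seconds (placeholders)
durations = {0, 0, 0};                        % model as events
pmod      = struct('name', {{'Reward'}}, 'param', {{rewVector}}, 'poly', {{1}}); % modulator on condition 1 only (hypothetical)
orth      = {0, 0, 0};                        % one entry per condition; 0 = orthogonalization off
save('TCB_Pmod_sots_allTcat.mat', 'names', 'onsets', 'durations', 'pmod', 'orth'); % file name is illustrative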

Here is a snippet of code for the multiple conditions version, as well as a screenshot:

matlabbatch{2}.spm.stats.fmri_spec.sess.cond = struct('name',{},'onset',{},'duration',{},'tmod',{}, 'pmod', {}, 'orth', {}); % leave cond empty; conditions come from the multi file
matlabbatch{2}.spm.stats.fmri_spec.sess.multi = {'../SOTS/TCB_Event_AllIntervPmod_RewPenRTacc_sots_allTcat.mat'};            % the multiple-conditions .mat (names/onsets/durations/pmod/orth)
matlabbatch{2}.spm.stats.fmri_spec.sess.regress = struct('name', {}, 'val', {});                                             % no hand-specified regressors here

Alright, that is pretty much it. Feel free to let me know if there’s anything I’ve missed, or if you have any other useful suggestions for implementing orthogonalization in your GLMs in SPM12!

fMRIPrep – Parallelizing Bash Scripts

It has been a while, but at last I am resurrecting the fMRIPrep preprocessing pipeline. There have been a few updates and tweaks to the xnat-tools put out by the CCV folks, which also have great documentation. Shoutout to Isabel Restrepo. 🙂

I won’t put the scripts in this post, but you should be able to access and download them from my GitHub Repo.

Step 1: TCB_xnat2bids_bidspostprocess_parallel.sh
This script pulls DICOMs from XNAT to the server and stores them in an xnat-export folder (note: naming irregularities should be noted in a .json file), converts the DICOMs to nifti format, and then converts them to BIDS format, which is stored in a bids folder. Note that you will need the accession number associated with each subject, which you can find from the XNAT GUI. Alternatively, you can go to your “https://bnc.brown.edu/xnat/data/experiments/” URL to identify the accession number associated with your subject. Different institutions may have different specifics wrt the URL, but it should generate a list of your acquired scans like the following:

List of the scans on XNAT

Next, you will want to edit the scripts and submit them to the cluster. A minor addition to these scripts is that, since we have collected field maps, the “bidspostprocess” function updates the associated .json file to include the runs that the fieldmaps should be applied to (more detail below). My scripts are set up so that all you will need to edit is the input to the associative array. Note that our T1 mprage scan spits out both an RMS and an expanded image, and we want to ignore the non-RMS T1 scan. Once we update this info, we can run the script.

Navigate to the directory with the scripts and then submit the job to the cluster.

Use “sbatch” to submit jobs to the cluster. You can also use “sacct” to check the status of your submitted job.
Use “myq” to list running and queued jobs.
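For example (the script name is from Step 1; replace <jobid> with the number sbatch prints):

sbatch TCB_xnat2bids_bidspostprocess_parallel.sh # submit the job
sacct -j <jobid>                                 # check the status of the submitted job
myq                                              # CCV utility: list your running and queued jobs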

Once the job has been completed successfully, we should see the following:

Step 2: TCB_bids-validator.sh
Next we will want to check that the data are BIDS-validated. If you look in the “bids” folder, and specifically in the func folder (for task-based fMRI), the data should be in BIDS format.

Additionally, you will want to check that bidspostprocess worked; if it did, you will see an “IntendedFor” argument in the .json file listing the associated functional runs it will be applied to.

bidspostprocess adds an “IntendedFor” argument in the .json files for each fieldmap to indicate which BOLD runs to apply fieldmap correction to
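For reference, a fieldmap .json with this field looks roughly like the following (the subject, task, and run labels are illustrative; per the BIDS spec, the paths are relative to the subject folder):

{
  "IntendedFor": [
    "func/sub-001_task-tcb_run-01_bold.nii.gz",
    "func/sub-001_task-tcb_run-02_bold.nii.gz"
  ]
}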

Next, you can run the bids-validator script, which tests for BIDS compliance within the bids folder. Here, I haven’t specified the events.tsv files (I will do that later), so the bids-validator just gives warnings. But otherwise it looks great.

Step 3: TCB_fmriprep_fieldmap_ica_parallel.sh
Next, we will run fMRIPrep with fieldmap correction and ICA-AROMA. The script is parallelized so that participants are sent to separate nodes. (There is more information in the header of the bash script, which is not included in this post.)

Step 4: TCB_postfmriprep_data.sh
This step unzips the .nii.gz files that are generated by fmriprep. It puts all of the relevant functional and anatomical nifti images into an spm-data folder for GLM analyses. This is an optional step, and it is highly variable depending on what software you wish to perform your analyses in.
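A minimal sketch of the unzipping step (the path and filename pattern are assumptions based on fMRIPrep’s default output naming):

cd /path/to/spm-data                            # wherever the fmriprep outputs were copied (placeholder)
gunzip -k sub-*/func/*_desc-preproc_bold.nii.gz # -k keeps the original .nii.gz files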

Anyways, that’s all for preprocessing. Stay tuned for more updated scripts for generating stimulus onset files and setting up first- and second-level GLMs in SPM.

Generating an RSA key for SSH

If you want to set up an RSA key so that you can SSH into the server without entering your password each time, here are the steps.

First, open a terminal and generate an SSH key locally:

(base) username@Debbies-MBP-2 ~ % ssh-keygen

Then you will want to copy your RSA public key to the servers:

ssh-copy-id -i ~/.ssh/id_rsa.pub transfer.ccv.brown.edu
ssh-copy-id -i ~/.ssh/id_rsa.pub ssh.ccv.brown.edu

You can validate that this works by ssh-ing into the server by name (or any alias you have configured):

ssh transfer.ccv.brown.edu
ssh ssh.ccv.brown.edu

Finally, to see your keys, you can use this command

(base) username@Debbies-MBP-2 ~ % ls ~/.ssh            
authorized_keys	config		id_rsa		id_rsa.pub	known_hosts	known_hosts.old

And here is the manual for ssh_config: https://www.man7.org/linux/man-pages/man5/ssh_config.5.html

Here is info on how to set up Oscar via VSCode: https://ask.cyberinfrastructure.org/t/vscode-remote-ssh-in-oscar/1533/2

Steps:

  1. Download the Remote Development Extension Pack for VS code
  2. Select Remote-SSH: Connect to Host, and enter username@transfer.ccv.brown.edu. To pull up the command palette, you can press F1.
  3. Once you enter the host, it will autofill a config file for you. You can access the SSH target from the Remote Explorer button in VS Code. If you don’t see “SSH Targets” automatically, check the dropdown menu for it.
  4. Voila! Once you press the target, you should be able to type your password (or, if you have your RSA key set up, it will log you in automatically). Then you should be connected to the server.

More tips on ssh from the CCV website: https://docs.ccv.brown.edu/oscar/managing-files/version-control/ssh-agent-forwarding