Transferring data from XNAT and Converting to BIDS format

In today’s post, I will cover transferring data from XNAT (a storage server for MRI data) and converting it to BIDS format. This is an important precursor step to running fMRIPrep. Unfortunately, the steps I’ll walk through are specific to the Brown community.

Before I get too deep into the weeds, I want to give a HUGE shoutout to the Behavioral Neuroimaging Core (BNC), who have provided the bulk of these tools to the neuroimaging community. I wouldn’t have been able to implement half of these tools without their support and heroic efforts to set up this infrastructure. Also, FYI for those following along at home outside of the Brown community, all of these tools are available on Github and can be implemented on your own system.

Github Repo for XNAT Tools: https://github.com/brown-bnc/xnat-tools
Brown-Specific XNAT Portal: https://bnc.brown.edu/xnat/

Okay, so let’s get started.

First, our group has set up an XNAT, a neuroinformatics platform for storing MRI data that was developed at Washington University in St. Louis. If you’ve worked with any of the Human Connectome Project datasets, or larger neuroimaging datasets generally, you may have used this interface before (e.g., IntraDB, CNDA, etc.). It’s a great interface that lets the user click through the data they’ve stored after each scan. As someone who has spent a lot of time with this interface, I highly recommend it and think it’s super easy to use.

Once you’ve set up your project and your data are all stored, you should be all set to run the ‘xnat2bids’ function from the xnat-tools singularity container. The way it was set up for us, we need to extract the subject number and the accession number (see image below).

[Screenshot: XNAT session page showing the subject number and accession number]

As a side note, there may be MRI sessions in which you have extra runs you don’t want to use, and there are numerous ways to deal with that. I prefer not to delete raw data, and instead remove data at a post-processing step. So for our study, we’re just specifying which blocks to use and not use (relevant for the scripts later).

[Screenshot: scan list for a session, showing the scan numbers to skip]

Finally, you want to set up your scripts. Here, I’ve created two loops: a more general one that calls ‘xnat2bids’ for the “perfect” sessions (i.e., the number of BOLD runs collected is as expected), and one for the “imperfect” sessions (i.e., the number of BOLD runs collected is more than expected). For the imperfect/irregular sessions, I’ve used the “skiplist” argument to skip the scan numbers that we do not want to use (see image above). A rough sketch of the idea is below.
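Here is a minimal sketch of those two loops. The container path, BIDS directory, and accession numbers are placeholders, and the exact skiplist syntax may differ across xnat-tools versions, so check the xnat-tools documentation before adapting this.

# Sketch only: paths and accession numbers below are hypothetical
xnat_tools_sif=/gpfs/data/bnc/simgs/xnat-tools/xnat-tools.sif
bids_root=/gpfs/data/mylab/bids

# Loop 1: "perfect" sessions, where every BOLD run is usable
for accession in XNAT_E00101 XNAT_E00102; do
    singularity exec "$xnat_tools_sif" \
        xnat2bids "$accession" "$bids_root"
done

# Loop 2: "imperfect" sessions, skipping the extra scan numbers
# (--skiplist usage here is an assumption; verify against your version)
for accession in XNAT_E00103; do
    singularity exec "$xnat_tools_sif" \
        xnat2bids "$accession" "$bids_root" \
        --skiplist "7 8"
done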

To run the script, you’ll want to navigate to the folder where the script is, and then run it using sbatch (if you’re on a supercomputer) or bash (if you’re running locally or don’t have access to a cluster). If the latter, be prepared for your computer to sound like a rocket launch for a period of time.

[Screenshot: submitting the script from the terminal]
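In other words, something like this (the script name here is hypothetical):

# On the cluster, submit as a batch job
sbatch run_xnat2bids.sh

# Or locally, without a scheduler
bash run_xnat2bids.sh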

*NOTE*: By default, this script will prompt you for your password for every subject. You can circumvent this by setting XNAT_USER and XNAT_PASSWORD in your .bashrc file. Some more detailed instructions are here. Here’s a bit of code below.

# Navigate to home directory
cd 

# Edit the .bashrc file
gedit .bashrc

# Run the .bashrc file to implement changes in the current Terminal window
. ~/.bashrc

Here is what my .bashrc file looks like (without my password):

[Screenshot: .bashrc file with XNAT_USER and XNAT_PASSWORD set]
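For reference, the relevant lines look something like this (with placeholder values, of course):

# Added to ~/.bashrc so xnat2bids can authenticate without prompting
export XNAT_USER="your_xnat_username"
export XNAT_PASSWORD="your_xnat_password"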

When your scripts are done, you should have your BIDS-formatted data. You can check that your data are formatted correctly. In my dataset, I have T1 MPRAGE anatomical images (in the anat folder), 8 functional task runs (in the func folder), and fieldmap images (in the fmap folder).

[Screenshot: BIDS folder structure with anat, func, and fmap subfolders]
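For orientation, the layout should look roughly like this (the subject label and task name here are hypothetical):

bids/
└── sub-001/
    ├── anat/
    │   └── sub-001_T1w.nii.gz
    ├── fmap/
    │   ├── sub-001_magnitude1.nii.gz
    │   ├── sub-001_magnitude2.nii.gz
    │   ├── sub-001_phasediff.nii.gz
    │   └── sub-001_phasediff.json
    └── func/
        ├── sub-001_task-mytask_run-01_bold.nii.gz
        ├── sub-001_task-mytask_run-01_bold.json
        └── ... (runs 02 through 08)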

A quick note on file naming conventions

What if my DICOM files are named differently across runs, or differently from what I want them to be named after conversion?

Here, we’ve created a bidsmaps file to edit the filenames so they are all the same. If the naming convention is correct and consistent across all studies, you won’t need this file.

[Screenshot: example bidsmaps file]
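For illustration, the version of this file we use is a JSON list of from/to renaming pairs, roughly like the sketch below (the series names are made up, and the exact schema may differ across xnat-tools versions, so check their documentation). This example fixes a typo’d task name so all runs convert to the same BIDS name:

[
    {
        "from": "func-bold_task-mytsak",
        "to": "func-bold_task-mytask"
    }
]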

Well, that’s it for now. Please feel free to let me know if there’s anything else that should be added in terms of BIDS naming conventions.

Running fMRIPrep on your BIDS-Compatible MRI Dataset

In this post, I’ll describe the steps for how I run fMRIPrep. (Sorry that this is coming before some of the earlier steps; I’m currently in the middle of this, so I figured I’d document it now before I forget. I’ll get to the earlier steps later, when I work with some additional data.) As an aside, through some googling, I found this github repo by a doctoral student to be very helpful for articulating some of the finer details.

I tend to prefer to edit my scripts locally, so I will mount the server onto my computer so I have access to the files locally. Here is an example script below. Note that I’m using ‘singularity’ instead of ‘docker’, since I have access to fMRIPrep installed on a cluster. If you decide to run fMRIPrep locally, you can use the docker command instead.

In my script, I plan to run both fMRIPrep and FreeSurfer, with fieldmap correction (see also here) and ICA-AROMA.

singularity run --cleanenv \
   --bind ${bids_root_dir}/${investigator}/study-${study_label}:/data \
   --bind /gpfs/scratch/dyee7:/scratch \
   --bind /gpfs/data/bnc/licenses:/licenses \
   /gpfs/data/bnc/simgs/fmriprep/fmriprep-${fmriprep_version}.sif \
   /data/bids /data/bids/derivatives/fmriprep-${fmriprep_version}-nofs \
   participant \
   --participant-label ${pid} \
   --fs-license-file /licenses/freesurfer-license.txt \
   -w /scratch/fmriprep \
   --stop-on-first-crash \
   --nthreads 64 \
   --write-graph \
   --use-aroma
Here are some arguments that are useful to consider, especially if you don’t have fieldmaps and want to apply some susceptibility distortion correction.
# If you don't want to run freesurfer
   --fs-no-reconall

# If you don't have field maps, or want to do fieldmap-less correction
   --use-syn-sdc
   --force-syn


If you want to do fieldmap correction, you’re going to need to add an “IntendedFor” field in the .json files for your fieldmaps. Here, because I am applying a gradient-echo fieldmap, I want to apply the “phasediff” compressed NIfTI to all of my functional runs, so I added the field below (in alphabetical order, because I’m only a little neurotic). You’ll also want to check that this file has two echo times, EchoTime1 and EchoTime2.
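A sketch of what the phasediff .json might look like after the edit (the echo times and run names are placeholders; note that IntendedFor paths are given relative to the subject folder):

{
    "EchoTime1": 0.00492,
    "EchoTime2": 0.00738,
    "IntendedFor": [
        "func/sub-001_task-mytask_run-01_bold.nii.gz",
        "func/sub-001_task-mytask_run-02_bold.nii.gz"
    ]
}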

(*It is worth noting that if you have spin-echo fieldmaps, you ideally would have collected both the A-P and P-A directions; if you are alternating A-P and P-A directions in your functional acquisition, you need to apply the opposite-direction fieldmap to each functional run. This may be more relevant if you analyze any of the connectome datasets like HCP and ABCD, which collect spin-echo fieldmaps by default, as well as other files like SBRefs.)

(**It is also worth noting that while there has been discussion about whether to automate this process in fMRIPrep, because there is so much variability in how fieldmaps are applied, the developers have decided (as of right now) not to implement a general-purpose tool (see forum here). Some folks have created custom scripts that crawl through their data to automate this process, but it seems reasonable to me that, given the diversity of how fieldmaps are used in preprocessing, any such automation should be project-specific or lab-specific.)

[Screenshot: fieldmap .json file with the IntendedFor field added]

When you log on to the VNC, you can open a terminal window and navigate to the directory where your scripts are located. I like to use ‘gedit’ to check that the script is correct.

gedit TCB-fmriprep_fieldmap2_ica.sh


This is what the text file should look like, and it should match the script you were editing outside of the VNC. Don’t worry if the spacing is a bit off; though it may be aesthetically unpleasant, it still works.

[Screenshot: the fMRIPrep script open in gedit]

To look at your available scripts before running, I always like to use ‘ls’.


Hooray! You are ready to run your script. You can run the script using sbatch (or bash if you are not working on a supercomputer).

sbatch TCB-fmriprep_fieldmap2_ica.sh

If you are submitting a job on a cluster, you will also want to check that your script submitted the job, and check the status of your job. More details can be found on the Oscar website here.

# check the status of your queued jobs
myq

# check the status of ongoing or recently completed jobs
sacct

[Screenshot: job status table]

Occasionally you will run into ERRORS, in which the state will say “FAILED” or “INCOMPLETE”. When it says failed, it usually means something in your script (or the data) prevented the script from completing without issues. However, when you see this, don’t panic! You can easily investigate what happened by looking at your log and error files. These files live in a folder in the directory where your scripts are located. As you can see at the top of my script, I included an argument to output errors and logs by the job ID (%J), which you can easily find in the status table above. (As you can see, I have a lot of logs and errors lately…)
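For reference, the relevant lines at the top of the script look something like this (the job name and logs folder are placeholders from my own setup):

#!/bin/bash
#SBATCH -J fmriprep_job        # job name (hypothetical)
#SBATCH -o logs/%J.out         # write stdout to a log file named by job ID
#SBATCH -e logs/%J.err         # write stderr to an error file named by job ID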


Navigate to the folder, and you can use “ls -lt” to order your files by time and date. I piped the output through ‘head’ to list only the 20 most recent files, since I don’t want to look at a million of them in my terminal window.
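That is:

# List files by modification time, newest first, showing only the top 20
ls -lt | head -20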


You can use ‘gedit’ to open the log and error files, or you can just click them open if you have the server mounted onto your desktop.

Example Log File

[Screenshot: example log file]

Example Error File

[Screenshot: example error file]

Assuming that you don’t have any major errors and that your job is completed, the ‘derivatives’ folder within your BIDS folder will contain the preprocessed data.

[Screenshot: contents of the derivatives folder]

There’s a nice HTML file that gives a summary of all of the preprocessing steps, as well as some nice text about how to include these steps in your manuscript.

[Screenshots: fMRIPrep HTML summary report]

Happy scripting and debugging! Let me know if there are any key steps that I’ve missed here.

Getting started with fMRIPrep on a computer cluster

It’s currently day n of shelter-at-home during these COVID-19 pandemic times, and I’ve started to dig myself into a hole with fMRI preprocessing using some (relatively) fancy new tools on a fancy supercomputer cluster, which I fortunately have access to at Brown. One of the blessings (and/or curses, depending on how you view it) of being a postdoc is that, despite these pandemic times, you realize you really have no serious responsibilities and much freedom. In some ways, you can potentially be disposable, but also potentially indispensable depending on the circumstances. That being said, one of the true perks of being a postdoc is that your primary job is to do a lot of data crunching and analysis (and writing), so as long as you have data (and a generous PI who is willing to pay for you to analyze it), it’s not too bad of a gig.

Okay, so given the abundance of time I have to dig into the weeds of fMRI preprocessing, I’ve decided it’s not a terrible time to start documenting some of these analysis adventures, in case they may be useful to others. Also, a colleague of mine who blogs her work regularly (check out her blog on MVPA here) encouraged me to do this a while back when I was fidgeting with using syringe pumps in the scanner. (I highly discourage any PhD candidate from doing this, unless you really enjoy tinkering with equipment.) Basically, I’ve spent the last decade of my life tinkering around with different fMRI analysis software packages, and recently decided to make the full switch to fMRIPrep to make my life easier. So I suppose that these blogposts in upcoming weeks (months? let’s hope not…) will be most directly useful to folks who know a thing or two (or many) about fMRI, and may want to get their feet wet with this cool innovative wrapper that makes preprocessing less of a headache. That, and I have a terrible memory, so this will hopefully be helpful for me 5 to 10 years down the road. Anyways, here goes nothing.

So, you want to get started with fMRIprep (on a computer cluster)? 

Some basics to get familiar with, if you have limited computer programming background:

Some Excellent Books on getting started with fMRI analyses:

Here are some more Brown-specific resources:

fMRIPrep Resources

Since I’ll be working on our university’s supercomputer, I’ll likely provide more detailed instructions for those types of analyses, but if you plan to run some of these analyses on a local computer, you can easily adapt most of the scripts by using “bash” instead of “sbatch” commands, since you won’t have parallel computing available. The only downside is that your computer may sound like a rocket when you’re running such analyses, and it may take longer for your scripts to finish. I will probably have separate blog posts that address each of these steps as I do them, so stay tuned if your appetite for fMRI preprocessing is still growing. Nevertheless, hopefully this will be useful to some of you hypothetical readers *fingers crossed.*

Some general high-level steps to consider

Step 1: Exporting Data and Converting to BIDS

Step 2: Validating that your MRI dataset is BIDS-compatible (see the validator sketch after this list)

(Optional) Step 2a: Quality Control checks for your MRI dataset

Step 3: Run fMRIPrep on your MRI dataset using singularity or docker containers
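For Step 2, the quickest check is the bids-validator. A minimal sketch, assuming you have the Node.js command-line version installed (there is also an in-browser version); the dataset path is a placeholder:

# Install the validator once (assumes Node.js/npm are available)
npm install -g bids-validator

# Point it at the top level of your BIDS directory
bids-validator /path/to/bids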

It is worth noting that there are still a few quirks to this seemingly magical tool. Although fMRIPrep is pretty good at automating most of the preprocessing steps, here are some things that it does NOT do automatically. It is still possible to implement these steps; it just requires a bit of tweaking, and such things would be good to consider in the data processing stream in the future.

I think I’ve inundated you with enough links for now. I will probably update this post as I think of more resources. Feel free to leave a comment below for suggestions on resources or anything I may have missed!

Solitude vs. Loneliness: A case study of affective states

As an academic, I spend a significant amount of time contemplating questions and issues of the world, more often than not flying solo. It is one of the great paradoxes of science and discovery: the painted image of a scientist and thinker analyzing and thinking (sometimes in angst and sometimes with passionate curiosity), requiring the emptiness to be creative but also relying on the input and knowledge of those who have preceded them. Over the years, as I’ve spent more time in isolation and surrounded by my own thoughts, I have become more aware that I frequently fluctuate between feelings of loneliness and feelings of solitude, and I find this fascinating. It is interesting in that, on the surface, a painted image of an individual in either of these states would not look very different. You could imagine a painter in a field, or a hiker in the middle of an empty forest. Without knowing the individual in this caricature, one might have difficulty discerning between these two states, even though they are fundamentally and vastly different.

So what is it, then, that makes these two states so starkly different? Loneliness describes a typically negative emotion, in which the individual is aware of their one-ness and feels distant from others in the world (whether physically or socially). An individual in the middle of a crowded room could feel “lonely,” perhaps due to the lack of social connectedness to others in the room. Solitude, on the other hand, describes a cognitive and emotional state in which the individual is almost at peace with their separation from others in the world and/or environment. An individual in solitude is still aware of their one-ness, but does not have the negative emotion associated with loneliness; rather, the individual in solitude arguably experiences positive emotion from escaping the social connections of the world and reflecting on their present state.

The puzzling question, however, is this: how can the same individual experience one state or the other (or even switch between the two) while in the same exact circumstance? Is it a cognitive thought or neural signal that suppresses or modifies our emotional states to reflect a more desirable affective state? Do we pursue solitude as a coping mechanism simply because we wish to avoid the aversive state of being lonely? If it is so easily controlled, then what is the mechanism (both neural and computational) by which we can achieve this amazing feat so effortlessly (or perhaps not so effortlessly, but with sufficient mindfulness training of some sort)?

As a researcher who studies cognitive control, I would imagine that this ability must require some type of cognitive bias that is used to alter and modulate our emotional states. Whether it is something like cognitive reappraisal or emotion regulation, it is amazing to me that humans can not only discern between complex affective states, but also contain the ability to seamlessly transition between them. By the definitions I’ve presented above, it seems that one cannot exist without the other (or at least, if there is a spectrum between loneliness and solitude, that one must dominate), and yet it is unknown how the controller is able to discern when to influence and modulate one affective state or the other.

In the cognitive control literature, one of the biggest unanswered questions is the homunculus problem: that is, how does a bunch of “dumb” parts act in synchrony and coordinate complex, temporally extended goals? Many have published computational models with hypotheses regarding how this mechanism arises in goal pursuit, but something that seems to be missing from the equation (at least from my perspective) is the inclusion of affect. What makes us unique and human, to be frank, and not just cyborgs, is that we have the capacity to feel and express emotion in a manner that is hardwired into our system. But how these affective signals combine with constructs devoid of affect, like “cognitive control,” remains to be understood. And furthermore, it is still a puzzle how these signals organize in the first place. Is there a topology and hierarchical organization of control? Is there more information encoded in the map beyond just the temporal abstraction of knowledge that enables us to perform these actions flexibly? (As an aside, neural maps are very compelling, and I should probably spend another blog post on cognitive maps alone.) But regardless of how the brain is structured, a compelling question remains: how are humans able to implement such control to modulate our affective states?

Those who are dualists may argue that this ability stems from some spiritual aspect of humanity that most closely relates to the soul, although that will open another can of worms that is beyond the scope of this spontaneous blog post. So it remains a puzzling question that I will continue to contemplate: how our cognitive abilities are able to control our emotions without cues from the external world.

As an aside, and after a brief google scholar search, there appears to be an article by Jan Szczepański published in 1977 on this topic specifically, though it is housed in a more humanities-focused journal with a philosophical tilt. We’ll see if I can download the article when I have access, but it is a fascinating topic. And shall hopefully be continued as I ponder off into slumber.

Your brain as a computer: outdated metaphor or ripe for revision?

A recent article by Robert Epstein takes on the outdated metaphor of comparing the brain to an information processor (or in modern terms, a computer). He argues that the parallel between the human nervous system and computing machines, drawn by scientists over the past few decades, is long overdue for an upgrade, although it is difficult for many a neuroscientist to even imagine what that alternative could be.

The faulty logic, according to Epstein, goes like this:

Reasonable Premise 1: All computers are capable of behaving intelligently.
Reasonable Premise 2: All computers are information processors.
Faulty Conclusion: All entities that are capable of behaving intelligently are information processors.

Examining this logic, one can see why the argument that humans must be information processors because computers are information processors is absolutely ludicrous, and yet almost every neuroscientist I’ve met has made this comparison. I admit that I have even made this analogy myself when explaining my work to the average individual who doesn’t spend their lifetime caring about systems neuroscience. It still remains unclear what the brain actually does, but Epstein makes a valiant attempt to describe what it CANNOT do.

“We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation to a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.”

In this way, Epstein raises a valid point: human brains operate fundamentally differently from computers (the latter of which quite literally process information, follow sets of rules called ‘programs’ or ‘algorithms,’ and use these algorithms to do things). When it comes to processing information, it is clear that the brain is not very good at it. Examine the results of any memory quiz (e.g., drawing a dollar bill from memory, or the penny memory game), and you’ll quickly realize that your memory bank is not as infallible as you may believe.
