Education, tips and tricks to help you conduct better fMRI experiments.
Sure, you can try to fix it during data processing, but you're usually better off fixing the acquisition!

Tuesday, December 16, 2014

Updated checklist for fMRI acquisition methods reporting in the literature


This post updates the checklist that was presented back in January, 2013. The updated checklist is denoted Version 1.2. The main update is to include reporting for simultaneous multi-slice (SMS) (a.k.a. multi-band, MB) EPI.

Explanatory notes for parameter names appear in the lower portion of the post. Note that the present checklist was devised by considering typical fMRI experiments conducted on 1.5 T and 3 T scanners but the list should work reasonably well for 7 T sequences, too.

Please keep the comments and feedback coming. This is an ongoing, iterative process.




Release notes for Version 1.2

All changes from Version 1.1 have been highlighted in yellow, both on the list PDF and on the explanatory notes (below).

1. The "Spatial Encoding" parameter categories have been renamed "In-Plane Spatial Encoding" to better differentiate in-plane acceleration (e.g. GRAPPA) from slice dimension acceleration (SMS/MB).

2. When using slice dimension acceleration (i.e. SMS/MB), certain parameters that are listed as Supplemental for other EPI variants should be considered Essential. Specifically, it is suggested to report:
Matrix coil mode
Coil combination method

All the In-Plane Spatial Encoding parameters in the Supplemental category should be considered essential because there is a tendency to use SMS/MB to attain high spatial resolution, which requires long readout echo trains that can produce more distortion than typical EPI scans.

The In-plane reconstructed matrix parameter should be reported whenever partial Fourier sampling is used, as it often is for SMS/MB EPI.

All the RF & Contrast parameters in the Supplemental category should be reported because the shape, duration and amplitude of the excitation RF pulse are all integral components of the acceleration method.

The Shim routine should be reported if a non-standard shim is performed before SMS/MB EPI.

3. Pre-scan normalization has been added to the Supplemental section of RF & Contrast parameters. Large array coils produce strong receive field heterogeneity and the use of pre-scan normalization may improve the performance of post hoc motion correction.
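For the curious, the principle can be illustrated in a few lines: divide each image by a smooth estimate of the receive field so that slow spatial intensity variation is flattened. A minimal Python sketch - note that the smoothed-self-image used as the field estimate is my stand-in for illustration; vendors derive the correction from a measured reference scan:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def prescan_normalize(image, sigma=8.0, eps=1e-6):
    """Flatten receive-field heterogeneity by dividing an image by a
    heavily smoothed copy of itself. This mimics the idea behind
    pre-scan normalization; real implementations use a measured
    reference scan rather than the image itself."""
    bias = gaussian_filter(image.astype(float), sigma)
    return image / (bias + eps)

# Synthetic example: a uniform phantom modulated by a coil-like gradient.
phantom = np.ones((64, 64))
coil_profile = np.linspace(0.5, 1.5, 64)[:, None] * np.ones((1, 64))
raw = phantom * coil_profile      # what a surface array coil might "see"
flat = prescan_normalize(raw)     # intensity variation largely removed
```

The relevance to motion correction: realignment cost functions assume a voxel's intensity follows the head, so a strong static receive field that does not move with the head violates that assumption; flattening the field first can help.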

Monday, December 8, 2014

Concomitant physiologic changes as potential confounds for BOLD-based fMRI: a checklist


Many thanks for all the feedback on the draft version of this post.

Main updates since the draft:
  • Added DRIFTER to the list of de-noising methods
  • Added a reference for sex differences in hematocrit and the effects on BOLD
  • Added several medication classes, including statins, sedatives & anti-depressants
  • Added a few dietary supplements, under Food

Please do continue to let me know about errors and omissions, especially new papers that get published. I'll gladly do future updates to this post.


UPDATES:

(Since this post's release on 8th Dec 2014.)

17th Dec 2014: Update for cortisol, highlighted in yellow.
18th Dec 2014: Update for methylphenidate, atomoxetine & amphetamine, highlighted in orange.
19th Dec 2014: Update for oxytocin, highlighted in turquoise.
13th Jan 2015: Update for effects of the scanner itself, highlighted in green.
27th Feb 2015: Added a new reference, hematocrit effects on resting-state fMRI.
27th May 2016: Added new references on altitude, sleep, pharmacological fMRI (with morphine & alcohol).
1st Feb & 2nd Mar 2017: Added new references for flavonoids in foods, highlighted in red.
______________________



A recent conversation on Twitter led to the suggestion that someone compile a list of physiological effects of concern for BOLD. That is, a list of potentially confounding physiological changes that could arise sympathetically in an fMRI experiment, such as altered heart rate due to the stress of a task, or that could exist as a systematic difference between groups. What follows is the result of a PubMed literature search (mostly just the abstracts) where I have tried to identify either recent review articles or original research that can be used as starting points for learning more about candidate effects. Hopefully you can then determine whether a particular factor might be of concern for your experiment.

This is definitely not a comprehensive list of all literature pertaining to all potential physiological confounds in fMRI, and I apologize if your very important contribution didn't make it into the post. Also, please note that I am not a physiologist so if I go seriously off piste in interpreting the literature, please forgive me and then correct my course. I would like to hear from you (comments below, or via Twitter) if I have omitted critical references or effects from the list, or if I have misinterpreted something. As far as possible I've tried to restrict the review to work in humans unless there was nothing appropriate, in which case I've included some animal studies if I think they are directly relevant. I'll try to keep this post up-to-date as new studies come out and as people let me know about papers I've missed.

A final caution before we begin. It occurs to me that some people will take this list as (further) proof that all fMRI experiments are hopelessly flawed and will use it as ammunition. At the other extreme there will be people who see this list as baseless scare-mongering. How you use the list is entirely up to you, but my intent is to provide cautious fMRI scientists with a mechanism to (re)consider potential physiologic confounds in their experiments, and perhaps stimulate the collection of parallel data that might add power to those experiments.


Getting into BOLD physiology


There are some good recent articles that introduce the physiological artifacts of prime concern. Tom Liu has reviewed neurovascular factors in resting-state functional MRI and shows how detectable BOLD signals arise from physiological changes in the first place. Kevin Murphy et al. then review some of the most common confounds in resting-state fMRI and cover a few ways these spurious signal changes can be characterized and even removed from data. Finally, Dan Handwerker et al. consider some of the factors causing hemodynamic variations within and, in particular, between subjects.

Once you start really looking into this stuff it can be hard not to get despondent. Think of the large number of potential manipulations as opportunities, not obstacles! Perhaps let The Magnetic Fields get you in the mood with their song, "I don't like your (vascular) tone." Then read on. It's a long list.

Friday, November 14, 2014

A failed quench circuit?

UPDATE: 23rd Feb 2015, courtesy of Tobias Gilk on Twitter

An article in Diagnostic Imaging claims to cover "everything you need to know about the GE MRI recall." Not sure about that, but it's a step in the right direction.

UPDATE: 19th Feb 2015, courtesy of Tobias Gilk on Twitter

The FDA has just ordered a recall of over 10,000 GE superconducting MRI systems worldwide. Some news articles here and here. Based on a quick read of the early reports it does look as if the Mumbai event precipitated the recall.

UPDATE: 20th Nov 2014, courtesy of Greg Brown on Twitter

It is being reported that the quench button was disabled by GE Healthcare engineers to the point that it was only usable by authorized personnel, presumably requiring a specific piece of kit that neither the hospital staff nor the first GE engineers to arrive on-site possessed or perhaps even knew about. This story is set to run and run....

___________________

No doubt you've seen this news doing the rounds:

Two stuck to MRI machine for 4 hours

There was, of course, a huge procedural failure that allowed a large, magnetic oxygen cylinder into the MRI facility in the first place. No doubt the investigation will find ample blame to spread around. But the solution to the problem is rather simple: education/training coupled with standard operating procedures to nix the threat. As procedures go it's not especially difficult. (By comparison, over 34,000 people manage to get themselves killed on US roads every single year. Clearly, we can't drive for shit. Our procedures are severely wanting in this department.) And if you're ever in doubt as to whether an item can be brought safely into the MRI suite there is always - always! - someone you can go to for an expert opinion. In my facility no equipment is allowed through the door without that expert opinion being sought.

So let's shift to the part of this fiasco that really got my attention: the claim that the magnet quench circuit malfunctioned. From the second article, above:
"At a press conference on Wednesday, a day after this newspaper broke the story, senior officials of the Tata Memorial-run Advanced Centre for Treatment, Research and Education in Cancer (ACTREC) in Kharghar said that because a switch to disable the machine's magnetic field malfunctioned, it took engineers four hours to disengage the two employees - a ward boy and a technician - stuck to the machine, when it should not have taken more than 30 seconds."

Thursday, October 30, 2014

Concomitant physiological changes as potential confounds for BOLD-based fMRI: a (draft) checklist

**Please let me know of errors or omissions!**

This post is a work-in-progress. It will be updated based on feedback. I will remove (draft) from the title when I consider this version to be complete.


A recent conversation on Twitter led to the suggestion that someone compile a list of physiological effects of concern for BOLD. That is, a list of potentially confounding physiological changes that could arise sympathetically in an fMRI experiment, such as altered heart rate due to the stress of a task, or that could exist as a systematic difference between groups. What follows is the result of a PubMed literature search (mostly just the abstracts) where I have tried to identify either recent review articles or original research that can be used as starting points for learning more about candidate effects. Hopefully you can then determine whether a particular factor might be of concern for your experiment.

This is definitely not a comprehensive list of all literature pertaining to all potential physiological confounds in fMRI, and I apologize if your very important contribution didn't make it into the post. Also, please note that I am not a physiologist so if I go seriously off piste in interpreting the literature, please forgive me and then correct my course. I would like to hear from you (comments below, or via Twitter) if I have omitted critical references or effects from the list, or if I have misinterpreted something. As far as possible I've tried to restrict the review to work in humans unless there was nothing appropriate, in which case I've included some animal studies if I think they are directly relevant. I'll try to keep this post up-to-date as new studies come out and as people let me know about papers I've missed. As it says at the top, I'll consider this a draft post pending feedback. Subsequent posts will be designated with a version number.

A final caution before we begin. It occurs to me that some people will take this list as (further) proof that all fMRI experiments are hopelessly flawed and will use it as ammunition. At the other extreme there will be people who see this list as baseless scare-mongering. How you use the list is entirely up to you, but my intent is to provide cautious fMRI scientists with a mechanism to (re)consider potential physiologic confounds in their experiments, and perhaps stimulate the collection of parallel data that might add power to those experiments.


Getting into BOLD physiology


There are some good recent articles that introduce the physiological artifacts of prime concern. Tom Liu has reviewed neurovascular factors in resting-state functional MRI and shows how detectable BOLD signals arise from physiological changes in the first place. Kevin Murphy et al. then review some of the most common confounds in resting-state fMRI and cover a few ways these spurious signal changes can be characterized and even removed from data. Finally, Dan Handwerker et al. consider some of the factors causing hemodynamic variations within and, in particular, between subjects.

Once you start really looking into this stuff it can be hard not to get despondent. Think of the large number of potential manipulations as opportunities, not obstacles! Perhaps let The Magnetic Fields get you in the mood with their song, "I don't like your (vascular) tone." Then read on. It's a long list.

Wednesday, October 1, 2014

i-fMRI: My initial thoughts on the BRAIN Initiative proposals


So we finally have some grant awards on which to judge the BRAIN Initiative. What was previously a rather vague outline of some distant, utopian future can now be scrutinized for novelty, practicality, capability, etc. Let's begin!

The complete list of awards across six different sections is here. The Next Generation Human Imaging section has selected nine diverse projects to lead us into the future. Here are my thoughts (see Note 1) based mostly on the abstracts of these successful proposals.

Friday, August 15, 2014

QA for fMRI, Part 3: Facility QA - what to measure, when, and why


As I mentioned in the introductory post to this series, Facility QA is likely what most people think of whenever QA is mentioned in an fMRI context. In short, it's the tests that you expect your facility technical staff to be doing to ensure that the scanner is working properly. Other tests may verify performance - I'll cover some examples in future posts on Study QA - but the idea with Facility QA is to catch and then diagnose any problems.

We can't just focus on stress tests, however. We will often need more than MRI-derived measures if we want to diagnose problems efficiently. We may need information that might seem tangential to the actual QA testing, but these ancillary measures provide context for interpreting the test data. A simple example? The weather outside your facility. Why should you care? We'll get to that.


An outline of the process

Let's outline the steps in a comprehensive Facility QA routine and then we can get into the details:

  • Select an RF coil to use for the measurements. 
  • Select an appropriate phantom.
  • Decide what to measure from the phantom.
  • Determine what other data to record at the time of the QA testing.
  • Establish a baseline.
  • Make periodic QA measurements.
  • Look for deviations from the baseline, and decide what sort of deviations warrant investigation.
  • Establish procedures for whenever deviations from "normal" occur.
  • Review the QA procedure's performance whenever events (failures, environment changes, upgrades) occur, and at least annually.

In this post I'll deal with the first six items on the list - setting up and measuring - and I'll cover analysis of the test results in subsequent posts.
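To make the "decide what to measure" step concrete, here is a minimal Python sketch of three metrics commonly pulled from a phantom time series: ROI mean signal, temporal SNR and percent drift. The ROI size and the linear drift model are illustrative choices on my part, not any standard:

```python
import numpy as np

def phantom_qa_metrics(ts, roi_halfwidth=8):
    """Basic QA metrics from a 4D phantom time series of shape (x, y, z, t).

    Uses a cubic ROI at the volume centre. Returns the ROI mean signal,
    temporal SNR (mean/std of the ROI signal over time) and percent
    linear drift across the run."""
    cx, cy, cz = [d // 2 for d in ts.shape[:3]]
    h = roi_halfwidth
    roi = ts[cx-h:cx+h, cy-h:cy+h, cz-h:cz+h, :]
    signal = roi.mean(axis=(0, 1, 2))          # ROI mean per time point
    tsnr = signal.mean() / signal.std()
    # Percent drift: slope of a straight-line fit, scaled to the run length.
    t = np.arange(signal.size)
    slope, intercept = np.polyfit(t, signal, 1)
    drift_pct = 100.0 * slope * signal.size / intercept
    return signal.mean(), tsnr, drift_pct

# Synthetic check: a uniform phantom with a small linear drift over 50 volumes.
demo = 100.0 + 0.1 * np.arange(50)
ts_demo = np.ones((32, 32, 16, 1)) * demo
mean_sig, tsnr, drift_pct = phantom_qa_metrics(ts_demo)
```

The same function applied to a real scanner run would be compared against your established baseline, which is exactly the "look for deviations" step in the outline above.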

Tuesday, July 29, 2014

Free online fMRI education!


UCLA has their excellent summer Neuroimaging Training Program (NITP) going on as I type. Most talks are streamed live, or you can watch the videos at your leisure. Slides may also be available. Check out the schedule here.

I am grateful to Lauren Atlas for tweeting about the NIH's summer fMRI course. It's put together by Peter Bandettini's FMRI Core Facility (FMRIF). It started in early June and runs to early September, 3-4 lectures a week. The schedule is here. Videos and slides are available a few days after each talk.

Know of others? Feel free to share by commenting!

Saturday, July 26, 2014

QA for fMRI, Part 2: User QA


Motivation

The majority of "scanner issues" are created by routine operation, most likely through error or omission. In a busy center with harried scientists who are invariably running late there is a tendency to rush procedures and cut corners. This is where a simple QA routine - something that can be run quickly by anyone - can pay huge dividends, perhaps allowing rapid diagnosis of a problem and permitting a scan to proceed after just a few minutes' extra effort.

A few examples to get you thinking about the sorts of common problems that might be caught by a simple test of the scanner's configuration - what I call User QA. Did the scanner boot properly, or have you introduced an error by doing something before the boot process completed? You've plugged in a head coil but have you done it properly? And what about the magnetic particles that get tracked into the bore, might they have become lodged in a critical location, such as at the back of the head coil or inside one of the coil sockets? Most, if not all, of these issues should be caught with a quick test that any trained operator should be able to interpret.

User QA is, therefore, one component of a checklist that can be employed to eliminate (or permit rapid diagnosis of) some of the mistakes caused by rushing, inexperience or carelessness. At my center the User QA should be run when the scanner is first started up, prior to shut down, and whenever there is a reason to suspect the scanner might not perform as intended. It may also be used proactively by a user who wishes to demonstrate to the next user (or the facility manager!) that the scanner was left in a usable state.

Monday, June 2, 2014

QA for fMRI, Part 1: An outline of the goals


For such a short abbreviation QA sure is a huge, lumbering beast of a topic. Even the definition is complicated! It turns out that many people, myself included, invoke one term when they may mean another. Specifically, quality assurance (QA) is different from quality control (QC). This website has a side-by-side comparison if you want to try to understand the distinction. I read the definitions and I'm still lost. Anyway, I think it means that you, as an fMRIer, are primarily interested in QA whereas I, as a facility manager, am primarily interested in QC. Whatever. Let's just lump it all into the "QA" bucket and get down to practical matters. And as a practical matter you want to know that all is well when you scan, whereas I want to know what is breaking/broken and then I can get it fixed before your next scan.


The disparate aims of QA procedures

The first critical step is to know what you're doing and why you're doing it. This implies being aware of what you don't want to do. QA is always a compromise. You simply cannot measure everything at every point during the day, every day. Your bespoke solution(s) will depend on such issues as: the types of studies being conducted on your scanner, the sophistication of your scanner operators, how long your scanner has been installed, and your scanner's maintenance history. If you think of your scanner like a car then you can make some simple analogies. Aggressive or cautious drivers? Long or short journeys? Fast or slow traffic? Good or bad roads? New car with routine preventative maintenance by the vendor or used car taken to a mechanic only when it starts smoking or making a new noise?

Saturday, April 26, 2014

Sharing data: a better way to go?


On Tuesday I became involved in a discussion about data sharing with JB Poline and Matthew Brett. Two days later the issue came up again, this time on Twitter. In both discussions I heard a lot of frustration with the status quo, but I also heard aspirations for a data nirvana where everything is shared willingly and any data set is never more than a couple of clicks away. What was absent from the conversations, it seemed to me, were reasonable, practical ways to improve our lot.*  It got me thinking about the present ways we do business, and in particular where the incentives and the impediments can be found.

Now, it is undoubtedly the case that some scientists are more amenable to sharing than others. (Turns out scientists are humans first! Scary, but true.) Some scientists can be downright obdurate when faced with a request to make their data public. In response, a few folks in the pro-sharing camp have suggested that we lean on those who drag their feet, especially where individuals have previously agreed to share data as a condition of publishing in a particular journal; name and shame. It could work, but I'm not keen on this approach for a couple of reasons. Firstly, it makes the task personal which means it could mutate into outright war that extends far beyond the issue at hand and could have wide-ranging consequences for the combatants. Secondly, the number of targets is large, meaning that the process would be time-consuming.


Where might pressure be applied most productively?

Tuesday, April 1, 2014

i-fMRI: A virtual whiteboard discussion on multi-echo, simultaneous multi-slice EPI

Disclaimer: This isn't an April Fool!

I'd like to use the collective wisdom of the Internet to discuss the pros and cons of a general approach to simultaneous multislice (SMS) EPI that I've been thinking about recently, before anyone wastes time doing any actual programming or data acquisition.


Multi-echo EPI for de-noising fMRI data


These methods rest on one critical aspect: they use in-plane parallel imaging (GRAPPA or SENSE, usually depending on the scanner vendor) to render the per slice acquisition time reasonable. For example, with R=2 acceleration it's possible to get three echo planar images per slice at TEs of around 15, 40 and 60 ms. The multiple echoes can then be used to characterize BOLD from non-BOLD signal variations, etc.
The immediate problem with this scheme is that the per slice acquisition time is still a lot longer than for normal EPI, meaning less brain coverage. The suggestion has been to use MB/SMS to regain speed in the slice dimension. This results in the combination of MB/SMS in the slice dimension and GRAPPA/SENSE in-plane, thereby complicating the reconstruction, possibly (probably) amplifying artifacts, enhancing motion sensitivity, etc. If we could eliminate the in-plane parallel imaging and do all the acceleration through MB/SMS then that would possibly reduce some of the artifact amplification, might simplify (slightly) the necessary reference data, etc.
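For reference, the BOLD vs non-BOLD characterization rests on mono-exponential decay, S(TE) = S0 * exp(-TE * R2*). With three echoes a log-linear fit recovers S0 and R2* per voxel. A quick sketch using the example TEs above; the fitting approach is generic, not any particular package's method:

```python
import numpy as np

def fit_monoexponential(signals, tes_ms):
    """Log-linear fit of S(TE) = S0 * exp(-TE * R2star).

    signals: echo amplitudes, one per TE.
    tes_ms:  echo times in ms.
    Returns (S0, R2star in 1/ms)."""
    slope, log_s0 = np.polyfit(np.asarray(tes_ms), np.log(signals), 1)
    return np.exp(log_s0), -slope

# Simulated voxel: S0 = 1000, T2* = 45 ms, echoes at 15, 40 and 60 ms.
tes = [15.0, 40.0, 60.0]
s = 1000.0 * np.exp(-np.array(tes) / 45.0)
s0, r2s = fit_monoexponential(s, tes)
```

Signal changes that scale with TE behave like R2* (BOLD-like) changes, whereas changes in S0 alone point to non-BOLD sources such as motion or inflow; that distinction is what the de-noising schemes exploit.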


A different approach? 

Thursday, March 13, 2014

WARNING! Stimulation threshold exceeded!


When running fMRI experiments it's not uncommon for the scanner to prohibit what you'd like to do because of a gradient stimulation limit. You may even hit the limit "out of the blue," e.g. when attempting an oblique slice prescription for a scan protocol that has run just fine for you in the past. I'd covered the anisotropy of the gradient stimulation limit as a footnote in an old post on coronal and sagittal fMRI, but it's an issue that causes untold stress and confusion when it happens so I decided to make a dedicated post.

Some of the following is taken from Siemens manuals, but the principles apply to all scanners. There may be vendor-specific differences in the way the safety checking is computed, however. Check your scanner manuals for details on the particular implementation of stimulus monitoring on your scanner.

According to Siemens, then:



The scanner monitors the physiological effects of the gradients and prohibits initiating scans that exceed some predefined thresholds. On a Siemens scanner the limits are established according to two models, used simultaneously:



The scanner computes the expected stimulation that will arise from the gradient waveforms in the sequence you are attempting to run. If one or both models suggests that a limit will be exceeded, you get an error message. I'll note here that the scanner also monitors in real time the actual gradients being played out in case some sort of fault occurs with the gradient control.

Thursday, February 27, 2014

Using someone else's data


There was quite a lot of activity yesterday in response to PLOS ONE's announcement regarding its data policy. Most of the discussion I saw concerned rights of use and credit, completeness of data (e.g. the need for stimulus scripts for task-based fMRI) and ethics (e.g. the need to get subjects' consent to permit further distribution of their fMRI data beyond the original purpose). I am leaving all of these very important issues to others. Instead, I want to pose a couple of questions to the fMRI community specifically, because they concern data quality and data quality is what I spend almost all of my time dealing with, directly or indirectly. Here goes.


1.  Under what circumstances would you agree to use someone else's data to test a hypothesis of your own?

Possible concerns: scanner field strength and manufacturer, scan parameters, operator experience, reputation of acquiring lab.

2. What form of quality control would you insist on before relying on someone else's data?

Possible QA measures: independent verification of a simple task such as a button press response encoded in the same data, realignment "motion parameters" below/within some prior limit, temporal SNR above some prior value.


If anyone has other questions related to data quality that I haven't covered with these two, please let me know and I'll update the post. Until then I'll leave you with a couple of loaded comments. I wouldn't trust anyone's data if I didn't know the scanner operator personally and I knew first-hand that they had excellent standard operating procedures, a.k.a. excellent experimental technique. Furthermore, I wouldn't trust realignment algorithm reports (so-called motion parameters) as a reliable proxy for data quality in the same way that chemicals have purity values, for instance. Reducing motion to a single summary value - "My motion is less than 0.5 mm over the entire run!" - is especially nonsensical in my opinion, considering that the typical voxel resolution exceeds 2 mm on a side. Okay, discuss.
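If you do want a motion metric, one defensible option is framewise displacement: the volume-to-volume movement derived from all six realignment parameters, reported as a full distribution rather than one number. A sketch, assuming the common 50 mm head radius convention for converting rotations to millimeters of arc (the function name and interface are mine, not any package's API):

```python
import numpy as np

def framewise_displacement(params, head_radius_mm=50.0):
    """Framewise displacement from realignment parameters.

    params: (n_vols, 6) array - three translations (mm) followed by
    three rotations (radians). Rotations are converted to arc length
    on a sphere of head_radius_mm, then the absolute volume-to-volume
    differences of all six traces are summed per volume."""
    p = np.asarray(params, dtype=float).copy()
    p[:, 3:] *= head_radius_mm              # radians -> mm of arc
    return np.abs(np.diff(p, axis=0)).sum(axis=1)

# Three volumes: a 0.5 mm x-translation at volume 1, then a 0.01 rad
# rotation (= 0.5 mm of arc) at volume 2.
demo = np.zeros((3, 6))
demo[1, 0] = 0.5
demo[2, 3] = 0.01
fd = framewise_displacement(demo)
```

Reporting, say, the median and peak of fd - or simply sharing the full trace - tells a reviewer far more than one aggregate number ever could.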


UPDATE 13:35 PST

Someone just alerted me to the issue of data format. Raw? Filtered? And what about custom file types? One might expect to get image domain data, perhaps limited to the magnitude images that 99.9% of folks use. So, a third question is this: What data format(s) would you consider (un)acceptable for sharing, and why?

Tuesday, January 28, 2014

Partial Fourier versus GRAPPA for increasing EPI slice coverage


This is the final post in a short series concerning partial Fourier EPI for fMRI. The previous post showed how partial Fourier phase encoding can accelerate the slice acquisition rate for EPI. It is possible, in principle, to omit as much as half the phase encode data, but for practical reasons the omission is generally limited to around 25% before image artifacts - mainly enhanced regional dropout - make the speed gain too costly for fMRI use. Omitting 25% of the phase encode sampling allows a slice rate acceleration of up to about 20%, depending on whether the early or the late echoes are omitted and whether other timing parameters, most notably the TE, are changed in concert.
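The arithmetic behind that roughly 20% figure can be sketched with a crude timing model: per-slice time = fixed overhead (excitation, fat saturation, spoilers) + echo train duration. The 10 ms overhead below is a guess for illustration only, and real gains also depend on whether early or late echoes are omitted and how TE is handled:

```python
def slices_per_tr(n_lines, esp_ms=0.5, overhead_ms=10.0, tr_ms=2000.0):
    """Crude slice-count model for a fixed TR.

    Per-slice time = fixed overhead + echo train duration, where the
    echo train duration is (phase-encode lines acquired) * (echo
    spacing). Illustrative only; ignores TE placement."""
    return int(tr_ms // (overhead_ms + n_lines * esp_ms))

full_64 = slices_per_tr(64)   # full Fourier: 64 phase-encode lines
pf_48 = slices_per_tr(48)     # 6/8 partial Fourier: 48 lines acquired
gain = pf_48 / full_64 - 1.0  # fractional increase in slices per TR
```

With these (assumed) numbers the model lands in the 15-25% range quoted above; plug in your own echo spacing and overhead to see where your protocol sits.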

But what other options do you have for gaining approximately 20% more slices in a fixed TR? A common tactic for reducing the amount of phase-encoded data is to use an in-plane parallel imaging method such as SENSE or GRAPPA. Now, I've written previously about the motion sensitivity of parallel imaging methods for EPI, in particular the motion sensitivity of GRAPPA-EPI, which is the preferred parallel imaging method on a Siemens scanner. (See posts here, here and here.) In short, the requirement to obtain a basis set of spatial information - that is, a map of the receive coil sensitivities for SENSE and a set of so-called auto-calibration scan (ACS) data for GRAPPA - means that any motion that occurs between the basis set and the current volume of (accelerated) EPI data is likely to cause some degree of mismatch that will result in artifacts. Precisely how and where the artifacts will appear, their intensity, etc. will depend on the type of motion that occurs, whether the subject's head returns to the initial location, and so on. Still, it behooves us to check whether parallel imaging might be a better option for accelerating slice coverage than partial Fourier.


Deciding what to compare

Disclaimer: As always with these throwaway comparisons, use what you see here as a starting point for thinking about your options and perhaps determining your own set of pilot experiments. It is not the final word on either partial Fourier or GRAPPA! It is just one worked example.

Okay, so what should we look at? In selecting 6/8ths partial Fourier it appears that we can get about 15-20% more slices for a fixed TR. It turns out that this gain is comparable to using GRAPPA with R=2 acceleration with the same TE. To keep things manageable - a five-way comparison is a sod to illustrate - I am going to drop the low-resolution 64x48 full Fourier EPI that featured in the last post in favor of the R=2 GRAPPA-EPI that we're now interested in. For the sake of this comparison I'm assuming that we have decided to go with either pF-EPI or GRAPPA, but you should note that the 64x48 full Fourier EPI remains an option for you in practice. (Download all the data here to perform your own comparisons!)

I will retain the original 64x64 full Fourier EPI as our "gold standard" for image quality as well as the two pF-EPI variants, yielding a new four-way comparison: 64x64 full Fourier EPI, 6/8pF(early), 6/8pF(late), and GRAPPA with R=2. Partial Fourier nomenclature is as used previously. All parameters except the specific phase encode sampling schemes were held constant. Data was collected on a Siemens TIM/Trio with 12-channel head coil, TR = 2000 ms, TE = 22 ms, FOV = 224 mm x 224 mm, slice thickness = 3 mm, inter-slice gap = 0.3 mm, echo spacing = 0.5 ms, bandwidth = 2232 Hz/pixel, flip angle = 70 deg. Each EPI was reconstructed as a 64x64 matrix regardless of how much actual k-space was acquired. Partial Fourier schemes used zero filling prior to 2D FT. GRAPPA reconstruction was performed on the scanner with the default vendor reconstruction program. (Siemens users, see Note 1.)
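From the stated parameters (64 phase-encode lines, 0.5 ms echo spacing) we can tabulate the echo-train durations and a distortion proxy - the phase-encode bandwidth per pixel - for the schemes being compared. The point of the sketch: partial Fourier shortens the echo train without changing distortion, while GRAPPA shortens the train and reduces distortion. Acquired-line counts are inferred from the scheme names:

```python
def epi_readout(pf_fraction=1.0, grappa_r=1, n_pe=64, esp_ms=0.5):
    """Echo-train duration and per-pixel phase-encode bandwidth for EPI.

    Partial Fourier (pf_fraction < 1) shortens the echo train but
    leaves the effective echo spacing - hence the distortion - as-is.
    GRAPPA (grappa_r > 1) skips every R-th line, shortening the train
    AND raising the phase-encode bandwidth per pixel."""
    acquired = n_pe * pf_fraction / grappa_r        # lines actually sampled
    train_ms = acquired * esp_ms                    # echo train duration
    pe_bw = 1000.0 / (n_pe * esp_ms / grappa_r)     # Hz per pixel (PE axis)
    return train_ms, pe_bw

schemes = {
    "64x64 full Fourier": epi_readout(),
    "6/8 partial Fourier": epi_readout(pf_fraction=0.75),
    "GRAPPA R=2": epi_readout(grappa_r=2),
}
```

So on this rough model the GRAPPA scan has the shortest train and half the distortion of the other two, which is precisely why its different artifact behavior (motion sensitivity from the ACS data) is the fair trade to evaluate.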