In this article we'll examine one of the fundamental image processing tasks in neuroimaging: aligning two images.  (We won't describe coregistering more than two images simultaneously via template building or other procedures here.)

We'll use the ANTs suite of tools since it's comprehensive, fairly easy to use (if not extremely user-friendly), and produces good results (at the cost of some computation time).  We'll also give pointers to relevant MINC concepts and programs.

Background

Images

What is an image? More than just a 3D grid of voxels, a medical image should be thought of as varying continuously over its domain of definition.  Thus, an image I is a domain D (typically some region of three-dimensional space R3) together with a function f mapping points of D to the real numbers R (of course, vector- or tensor-valued images are also possible). In short, an image is a pair (D, f : D → R).  (We may also consider segmentation images or other discrete-valued images to be of the form (D, f : D → Z), with Z the integers, or similar.)

We need to acquire, store, and manipulate images, which is typically achieved using an array of voxels (and computer arithmetic rather than real numbers).  We consider the voxel values to record the value of the image at the exact centre of each voxel (at least in the ITK/ANTs convention).  To compute the value of an image at a point which is not a voxel centre, some numerical interpolation amongst the surrounding voxel values must be used.  For instance, in trilinear interpolation one finds the surrounding eight voxel centres and takes a distance-weighted average of their intensities; interpolating using splines or other families of functions is also possible.  In nearest-neighbour interpolation, the value at the point is simply the value at the nearest voxel centre.  (Nearest-neighbour interpolation is most often used on image segmentations, discrete-valued images whose values are labels.)  The type of interpolation used (at least continuous vs nearest-neighbour) is often considered a property of a resampling procedure, but you might consider it a property of the image itself: interpolating a segmentation using trilinear interpolation just doesn't make sense.
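
For example, in one dimension: if a point x lies between voxel centres x0 and x1 with values v0 and v1, linear interpolation gives the value v(x) = (1 - t)·v0 + t·v1, where t = (x - x0)/(x1 - x0).  Trilinear interpolation applies this construction along each of the three axes in turn, combining the values at the eight surrounding voxel centres.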

We can use interpolation to resample an image to a new grid by interpolating the value at each new voxel centre.  We might do this in order to change the spatial resolution, or to bring the voxel coordinates of one image into agreement with those of a second image for statistical purposes.  Resampling is an essential part of transforming an image; see below.

Transformations

Given our above definition of images, the problem of aligning two images can then be formulated as the problem of finding a suitable mapping between their domains.  We won't define 'suitable' here, noting only that the problem is well-posed only after making certain choices such as the cost, or loss, function; see e.g. the Elastix manual.  Intuitively, we want the mapping to 'warp' one image into the domain or 'space' of the second, where it should align well with the 'features' or 'geometry' of the latter.  We call the image being warped the moving image M and the image it is warped to the fixed image F (with domains DM and DF respectively).  The outcome of this process (called registration) is a warped version M' of the moving image, now on domain DF, such that M' and F look geometrically similar (as determined by cross-correlation, mutual information, or a similar evaluation), together with an object TF,M called a transformation or transform which can be applied to M to produce M'.  The reader only interested in performing registrations can in principle skip to the examples; however, the underlying mathematics are described next, and understanding them can prove useful.

We will leave aside the question of how the registration process itself is implemented.  The ANTs algorithm is mathematically rather sophisticated.  Simpler algorithms include affine registration and spline-based nonlinear algorithms such as those used in Elastix (see the manual), both of which optimize some image metric (potentially penalized in some way) with respect to a finite set of parameters using gradient-based optimization methods, as well as discrete methods such as the ART algorithm described in Ardekani et al. (2011), which uses local grid search.  We simply note that one common feature of image registration methods is the use of a spatial pyramid, i.e. repeatedly performing the same registration at increasingly fine spatial scales (using blurring and/or downsampling in the earlier stages).  Instead, we will concern ourselves with two questions: what is a transform, and how is it applied?

The reader might guess that a transform of M to F is simply a mapping DM → DF.  That's not quite correct: this would let us transform a point in DM to a point in DF, but an image is not a collection of points; rather, it is a function on some domain.  So, we need a transformation to be an object that can transform a function I : DM → R into a function I' : DF → R.  To use a function DM → DF for this we'd need to assume we can invert it, in which case we are actually using a function DF → DM.  Since we don't want to assume invertibility (even if an inverse exists, it might be hard to compute quickly and accurately), this will be our definition: a transformation warping image M to image F is a mapping DF → DM; this is what the registration algorithm must output.  To warp or 'resample' the image, one looks up the value of the original image at each new grid point using this mapping, interpolating between voxels as necessary.  Our definition of image has guided us to the correct definition of transform and the correct way of applying it.

In symbols, apply(TF,M, M) is a new image M' defined by M'(q) = M(TF,M(q)) for q in DF (on a computer, evaluating M at the point TF,M(q) requires an interpolation step).  We can also write apply(TF,M, M) = M o TF,M for short, where 'o' is function composition.
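
As a toy example, suppose TF,M(q) = q/2, i.e. each coordinate is halved.  Then M'(q) = M(q/2), so the warped image M' shows the contents of M magnified by a factor of two: to paint M' at the point q, we look up M at the 'shrunken' point q/2.  A mapping which shrinks points thus enlarges the image, a first hint of why transforms must point 'backward'.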

In short: a transform warping the moving image to the fixed image is a function mapping between their domains in the opposite direction (from the domain of the fixed image to the domain of the moving image).  One also says that images transform 'backward' along functions (while, by definition, points transform 'forward').

Basic registration example

We'll use the antsRegistrationSyNQuick.sh script to illustrate the above concepts due to its simplicity.  Assume $fixed and $moving are image files in approximately the same orientation but not necessarily the same spatial location (if not, consider first doing a rigid alignment, either manually via register or via antsAI, rotational_minctracc.py, or similar); for instance, on MICe systems you might set

$ export fixed=/hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.10_dc_N_I_lsq6.mnc
$ export moving=/hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.11_dc_N_I_lsq6.mnc

Then we can register them via

$ antsRegistrationSyNQuick.sh -d 3 -t s -f ${fixed} -m ${moving} -o fixed_to_moving

Here -d 3 specifies the image dimensionality and -t s specifies the transformation type (in this case consecutive rigid, affine, and nonlinear SyN registrations).  -f and -m specify (what else?) the fixed and moving images, and -o specifies a prefix to be added to the output file names.

The following output files are produced:

  • fixed_to_moving0GenericAffine.mat (the rigid + affine transformation)
  • fixed_to_moving1Warp.nii.gz (the nonlinear transformation)
  • fixed_to_moving1InverseWarp.nii.gz (the inverse nonlinear transformation)
  • fixed_to_movingWarped.nii.gz (the moving image warped to the fixed image)
  • fixed_to_movingInverseWarped.nii.gz (the fixed image warped to the moving image)

Since SyN is a 'symmetric' algorithm, it produces both forward and inverse nonlinear transforms, and hence warped images in both directions.

If we didn't have the fixed_to_movingWarped.nii.gz file, we could create it by applying the transforms produced above as follows:

$ antsApplyTransforms -d 3  \
   -i ${moving} -r ${fixed} -o fixed_to_movingWarped.nii.gz  \
   -t fixed_to_moving1Warp.nii.gz -t fixed_to_moving0GenericAffine.mat

Here -r specifies the reference image used to determine the voxel grid of the new image, and -t is used repeatedly to specify all the transforms to apply, listed in reverse order: the last transform listed is the first applied to the input image (this order is simply what the ANTs implementers chose, perhaps to mimic the usual right-to-left order of function composition).

Both of these programs take many other options; e.g. antsRegistrationSyNQuick.sh can accept masks or initial transformations of the fixed or moving image, while antsApplyTransforms can take options to change the interpolator (the default is trilinear, while you need to use a nearest-neighbour or generic label interpolator for segmentation volumes) or to apply inverse transforms via -t [transform_to_invert,1].  Note that only transforms known to be invertible, such as affine transforms, can be inverted this way; ANTs won't numerically invert an arbitrary transform for you.  Also, inverting a composite is achieved by composing the inverses in reverse order, so to apply the inverse transform we'd use -t [fixed_to_moving0GenericAffine.mat,1] -t fixed_to_moving1InverseWarp.nii.gz.
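
Putting this together, we could have regenerated fixed_to_movingInverseWarped.nii.gz (the fixed image warped to the moving image) ourselves; a sketch using the files from above:

$ antsApplyTransforms -d 3  \
   -i ${fixed} -r ${moving} -o fixed_to_movingInverseWarped.nii.gz  \
   -t [fixed_to_moving0GenericAffine.mat,1] -t fixed_to_moving1InverseWarp.nii.gz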

Note that antsRegistrationSyNQuick.sh won't generate MINC-format images (though Display and register can read NIfTI files) or MINC-compatible transforms.  That is easily addressed using the techniques of the next section.

Using antsRegistration directly

The above command is a good starting point, but let's look at the generated command it printed when it began running (note that $fixed and $moving have been expanded to full paths):

/axiom2/projects/software/arch/linux-xenial-xerus/ANTs/20201016/bin/antsRegistration   \
  --verbose 1 --dimensionality 3 --float 0 --collapse-output-transforms 1    \
  --output [ fixed_to_moving,fixed_to_movingWarped.nii.gz,fixed_to_movingInverseWarped.nii.gz ]   \
  --interpolation Linear --use-histogram-matching 0 --winsorize-image-intensities [ 0.005,0.995 ]   \
  --initial-moving-transform [ /hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.10_dc_N_I_lsq6.mnc,/hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.11_dc_N_I_lsq6.mnc,1 ]   \
  --transform Rigid[ 0.1 ] --metric MI[ /hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.10_dc_N_I_lsq6.mnc,/hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.11_dc_N_I_lsq6.mnc,1,32,Regular,0.25 ]   \
  --convergence [ 1000x500x250x0,1e-6,10 ] --shrink-factors 12x8x4x2 --smoothing-sigmas 4x3x2x1vox   \
  --transform Affine[ 0.1 ]   \
  --metric MI[ /hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.10_dc_N_I_lsq6.mnc,/hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.11_dc_N_I_lsq6.mnc,1,32,Regular,0.25 ]   \
  --convergence [ 1000x500x250x0,1e-6,10 ] --shrink-factors 12x8x4x2 --smoothing-sigmas 4x3x2x1vox   \
  --transform SyN[ 0.1,3,0 ]  \
  --metric MI[ /hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.10_dc_N_I_lsq6.mnc,/hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.11_dc_N_I_lsq6.mnc,1,32]   \
  --convergence [ 100x100x70x50x0,1e-6,10 ] --shrink-factors 10x6x4x2x1 --smoothing-sigmas 5x3x2x1x0vox

This command is daunting, but we can simply run it as-is or with slight modifications.  For instance, we could change the output files to .mnc instead of .nii.gz in the --output section, or add the --minc flag to generate MNI .xfm files instead of ITK-style transform files (.mat and .nii.gz).  I also recommend removing the random sampling from the above metrics (e.g. MI[$fixed,$moving,1,32] instead of MI[$fixed,$moving,1,32,Regular,0.25]), since it can cause your registrations to fail stochastically at the linear/affine steps, presumably at the earlier (coarser) stages of the spatial pyramid.  We won't go into full details here, but see the page Anatomy of an antsRegistration Call.

Note that a mutual information ("MI") metric was used.  This metric is suitable for inter-modality registration (e.g. MRI to CT, or structural MRI to dMRI).  For intra-modality registration, the cross-correlation ("CC") metric could be used instead; in fact, the slower antsRegistrationSyN.sh script generates a CC-based antsRegistration command.  Note that the MI parameters don't include a spatial scale, but one of the CC parameters is a neighbourhood radius (in voxels), which should be set to some much smaller value (say 1, e.g. --metric CC[$fixed,$moving,1,1]) than the antsRegistrationSyN.sh default (currently 4) when registering high-resolution rodent images, otherwise immense amounts of time and memory will be needed.
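
For concreteness, here is a sketch of the changed lines for an intra-modality registration producing MINC output (only the modified lines are shown; everything else is as printed above, and the CC radius of 1 is illustrative).  The MI metrics of the rigid and affine stages have their random sampling removed, and the SyN-stage metric becomes a small-radius CC:

  --minc   \
  --output [ fixed_to_moving,fixed_to_movingWarped.mnc,fixed_to_movingInverseWarped.mnc ]   \
  ...
  --metric MI[ ${fixed},${moving},1,32 ]   \
  ...
  --metric CC[ ${fixed},${moving},1,1 ]   \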

Transformation example

Suppose we want to warp a DTI image to an anatomical template, and we have the following data:

  • a DTI image (dti_image.mnc);
  • an sMRI image of the same subject;
  • a nonlinear sMRI template (template.mnc);
  • rigid transforms approximately aligning the DTI and sMRI images to the template (e.g. rigid_dti_alignment.mat below for the DTI; remember that, as always, this means the template was the fixed image);
  • a single MNI .xfm file (smri_lsq6_to_template.xfm) nonlinearly warping the rigidly-aligned sMRI image to the template (this would be the inverse of the corresponding file produced by currently-existing Pydpiper, 2.0.14 and below, but may change in the next release; see the MINC convention section below);
  • separate affine (dti_to_smri_lsq60GenericAffine.mat) and nonlinear (dti_to_smri_lsq61Warp.nii.gz) transforms in ITK format, produced by antsRegistration and together warping the rigidly-aligned DTI to the rigidly-aligned sMRI image.

In this case we would run

$ antsApplyTransforms -d 3                \
  --input dti_image.mnc                   \
  --output dti_image_warped.mnc           \
  --reference-image template.mnc          \
  -t smri_lsq6_to_template.xfm            \
  -t dti_to_smri_lsq61Warp.nii.gz         \
  -t dti_to_smri_lsq60GenericAffine.mat   \
  -t rigid_dti_alignment.mat

We leave aside here the question of whether this is the optimal strategy, or whether e.g. directly registering the DTI image to the template might also work well.

Hints and caveats

You can often increase fine accuracy by adding some nonzero number of iterations at the native resolution (e.g. --convergence [ 100x100x100x100x50 ] or similar in the --transform SyN part of the registration) at the cost of greatly increased time and memory usage.  To save yourself lots of time, make sure the registration is working at a coarse level before doing this!

You can combine multiple cost functions with weights of your choice, e.g. to use both cross-correlation and mutual information, to register multimodal acquisitions channel-by-channel, to use existing masks or segmentations in a registration, to use manually or automatically determined point sets, or to apply extra filters such as image gradients (as used in Pydpiper pipelines).
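
Within a single stage, this is done by repeating the --metric option; the metrics are combined according to their weights.  A sketch (the equal weights here are purely illustrative):

  --transform SyN[ 0.1,3,0 ]   \
  --metric CC[ ${fixed},${moving},0.5,1 ]   \
  --metric MI[ ${fixed},${moving},0.5,32 ]   \
  ...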

Masking is quite useful: the ANTs developers currently recommend both applying a hard mask to the image file itself and supplying a soft mask delineating some extraneous (blank, because of the hard mask) space around the ROI.  Of course, obtaining such masks may not always be possible prior to completing an initial registration.
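
The (soft) registration masks are supplied to antsRegistration via its -x option; a sketch (the mask file names are hypothetical):

  -x [ fixed_mask.mnc,moving_mask.mnc ]   \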

Cropping the image via mincbbox or similar can increase computation speed, though care should be taken to leave enough space around the ROI for reasonable deformations to be created.
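
For a concrete starting point, mincbbox can emit ready-made arguments for mincreshape; a sketch (assuming mincbbox's -mincreshape output mode; img.mnc is a hypothetical input):

$ mincreshape $(mincbbox -mincreshape img.mnc) img.mnc img_cropped.mnc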

Connection to MINC transformation conventions

The MINC suite has various registration programs (minctracc) and utilities for warping .mnc images (mincresample) and manipulating transforms stored as MNI transform (.xfm) files (xfminvert, xfmconcat).  You can also use the utilities itk_convert and itk_convert_xfm to convert images or transforms between formats (but I suggest using recent versions of mnc2nii and nii2mnc when possible due to more correct handling of coordinate systems).

MINC registration programs such as minctracc typically refer to 'source' and 'target' rather than 'fixed' and 'moving' images, e.g., minctracc ${options} ${source} ${target} source_to_target.xfm followed by mincresample -like ${target} -transform source_to_target.xfm ${source} source_to_target_resampled.mnc.  From this you might guess the correspondence source ~ moving, target ~ fixed, mincresample ~ antsApplyTransforms.  Nope!  mincresample and antsApplyTransforms, which should both implement the mathematical procedure of resampling described above up to numerical error, give geometrically different results.  To explain this discrepancy, one can inspect mincresample's source code and note that without the -invert flag, mincresample calls the libminc function invert_general_transform, while mincresample -invert calls general_transform.  Thus, to use mincresample to apply a transform as done (mathematically correctly) by antsApplyTransforms, one must use mincresample -invert, and for this to make sense, we must have source ~ fixed, target ~ moving.  (Also, we should use -invert when possible, since it avoids numerically inverting the transform!)
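
Concretely, under the dictionary source ~ fixed and target ~ moving, the following pair of commands (a sketch, with ${options} elided as above) should mimic the ANTs workflow, resampling the moving image onto the fixed image's grid just as antsApplyTransforms -i ${moving} -r ${fixed} would:

$ minctracc ${options} ${fixed} ${moving} fixed_to_moving.xfm
$ mincresample -like ${fixed} -transform fixed_to_moving.xfm -invert ${moving} fixed_to_movingWarped.mnc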

The resulting oddity to keep in mind is that in the MINC world, individual subjects are typically assigned to 'source' and templates to 'target', while in the ANTs world, subjects are typically 'moving' and templates 'fixed'.  Hence the 'direction' in which a registration algorithm is run will likely differ between the two worlds (assuming one is interested in warping a subject to a template in both cases), resulting in different transformations, since most registration algorithms are not symmetric or invertible, and even ANTs's SyN transformations are only so up to various numerical approximations.  It's the above-mentioned inversion hidden in mincresample's code which reconciles everything, of course.

Another issue to keep in mind is that one often wants to initialize the registration of the moving image with some existing warping, say from a previous stage of registration.  (This is not equivalent to resampling the moving image first, since the cost function won't consider this pre-applied part of the transform, and resampling also introduces some extra numerical error.)  ANTs tools allow --initial-moving-transform as well as --initial-fixed-transform, but minctracc allows only -transform, which is applied to the source, i.e. fixed, image.  One way to get around this might be to use minctracc's -invert option to recover a transform going in the opposite direction, effectively identifying source with moving and target with fixed, but it should be checked whether this actually inverts the transform or just swaps the images in the registration process.

Jacobian determinants

Recall that given a differentiable function f : D → Rn defined on some n-dimensional region D, we can compute the Jacobian determinant function detJ(f) : D → R.  (Mathematically, this is the composition of the derivative operator, which at each point produces a linear approximation of the function representable as a matrix of partial derivatives, with the usual determinant, which gives the ratio by which a linear operator changes volumes.)  The Jacobian determinant is therefore itself an image with domain D.  The Jacobian determinant of f at a point p is interpreted as a "local volume change" at p: intuitively, it's the ratio by which f changes the volume of an infinitesimal cube near p; a negative value means the orientation (aka handedness or chirality) of the image has been reversed (a 'left-right flip').  In practice, detJ(f) is computed numerically by finite difference methods.
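
As a toy example, consider f(x, y, z) = (2x, y, z), which stretches space by a factor of two along the x-axis.  Its matrix of partial derivatives is diagonal with entries 2, 1, 1, so detJ(f) = 2 everywhere: f doubles the volume of every small cube.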

In neuroimaging the Jacobian determinant is often referred to as simply the Jacobian or the determinant, although this might confuse nearby mathematicians.  Furthermore, we often compute (pointwise, of course) the logarithm of the Jacobian for biological and statistical reasons and then also refer to this as the Jacobian!

The Jacobian is exactly the tool we need (along with ROI-based statistics) to understand the volumetric implications of a transformation.  If, as usual, our moving image is a subject and our fixed image is a template (or second subject), we obtain a transform TF,M which is a function DF → DM.  Therefore, a large Jacobian determinant of this transform at a point p of the fixed image means the moving image (subject) near TF,M(p) is relatively larger than the fixed image (template) near p (as usual, assuming that we believe the registration process has produced a transform corresponding to some notion of homology).

The ANTs suite contains the CreateJacobianDeterminantImage command for this purpose.  In the MINC world, one typically obtains a transform between source and target, inverts it, and computes the resulting Jacobian (using a tedious sequence of commands: xfminvert; minc_displacement; smooth_vector; mincblob -determinant; mincmath -add -const 1, due to a quirk of mincblob; and finally mincmath -log).  Using our dictionary from the previous section, such a transform is in the usual parlance a transform from fixed to moving; inverting it gives a transform from moving to fixed, of which one then computes the determinant, roughly approximating the numerically more accurate ANTs procedure.
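
Written out, the two approaches might look as follows (a sketch: the file names are hypothetical, and the argument conventions of the MINC-side helpers should be double-checked against their usage messages).  On the ANTs side, CreateJacobianDeterminantImage takes the dimensionality, a warp field, an output name, and optional flags for log-transformation and for the 'geometric' computation:

$ CreateJacobianDeterminantImage 3 fixed_to_moving1Warp.nii.gz log_jacobian.nii.gz 1 0

On the MINC side:

$ xfminvert source_to_target.xfm target_to_source.xfm
$ minc_displacement ${source} target_to_source.xfm displacement.mnc
$ smooth_vector --clobber --filter displacement.mnc displacement_smoothed.mnc
$ mincblob -determinant displacement_smoothed.mnc det.mnc
$ mincmath -add -const 1 det.mnc det_plus_one.mnc
$ mincmath -log det_plus_one.mnc log_det.mnc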

For further reading, see the ANTs documentation (the manual and the 'Anatomy of an antsRegistration Call' page), the Elastix manual, and the MINC documentation.