
In this article we'll examine one of the fundamental image processing tasks in neuroimaging: mutually aligning two image files. (We won't describe mutually coregistering multiple images simultaneously via template-building or other procedures here.)

We'll use the ANTs suite of tools since it's comprehensive, fairly easy (if not extremely user-friendly) to use, and performs well. We'll also give pointers to relevant MINC concepts and programs.

## Background

### Images

What is an image? More than just a 3d grid of voxels, a medical image should be thought of as continuously varying over its *domain* of definition. Thus, an **image** I is a domain D (typically some region of three-dimensional space R^{3}) together with a function f mapping values in D to the real numbers R (of course, vector or tensor-valued images are also possible). In short, an image is a pair (D, f : D → R). (We may also consider segmentation images or other discrete-valued images to be of form (D, f : D → Int) or similar.)

We need to acquire, store, and manipulate images, which is typically achieved through the use of an array of voxels (and the use of computer arithmetic rather than real numbers). We consider the voxel values to record the value of the image at the exact centre of each voxel (at least in the ITK/ANTs convention). To compute the value of an image at some point not at a voxel centre, some numerical **interpolation** amongst the surrounding voxel values must be used. For instance, in **trilinear interpolation** one finds the surrounding eight voxel centres and takes a distance-weighted average of their intensities. Interpolating using splines or other families of functions is also possible. In **nearest neighbour interpolation**, the values are assumed to be discrete, and the value of the nearest voxel centre is used. (Nearest neighbour interpolation is most often used on image *segmentations*, discrete-valued images whose values are labels.)

By using interpolation at each voxel centre of a new grid, we can thus **resample** an image to that grid, say to produce an image with a different spatial resolution or to bring the voxel coordinates into agreement with those of a second image for statistical purposes. (The interpolation is often considered a property of a resampling procedure - see below - but is really a property of the image itself: resampling an anatomical image using nearest-neighbour interpolation, or a segmentation using trilinear interpolation, just doesn't make sense.) Resampling is also an essential part of transforming an image; see below.
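To make the interpolation story concrete, here is a minimal NumPy sketch of both schemes, assuming unit voxel spacing and voxel centres at integer coordinates; the function names are ours, not part of ITK or ANTs:

```python
import numpy as np

def trilinear(img, p):
    """Evaluate `img` (a 3-D array of voxel-centre samples, unit spacing)
    at a continuous point p = (x, y, z): a distance-weighted average of
    the eight surrounding voxel values."""
    p = np.asarray(p, dtype=float)
    i0 = np.floor(p).astype(int)   # "lower" corner of the enclosing cell
    f = p - i0                     # fractional position inside the cell
    val = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                val += w * img[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return val

def nearest(img, p):
    """Nearest-neighbour interpolation: take the closest voxel centre.
    Appropriate for label images, where averaging labels is meaningless."""
    i = np.round(np.asarray(p)).astype(int)
    return img[tuple(i)]
```

Resampling an image to a new grid is then just evaluating one of these functions at each voxel centre of that grid.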

### Transformations

Given our above definition of images, the problem of aligning two images can then be formulated as the problem of finding a suitable mapping between their domains. We won't define 'suitable' here, noting only that the problem is well-posed only after making certain choices such as the cost, or loss, function; see e.g. the Elastix manual. Intuitively, we want the mapping to be able to 'warp' one image into the domain or 'space' of the second, where it should align well with the 'features' or 'geometry' of the latter. We call the warped image the **moving image** and the image it is warped to the **fixed image**. The outcome of this process (called registration) is a warped version M' of the moving image, now on domain D_{F}, such that M' and F look similar, and an object T_{F,M} called a **transformation** or **transform** which can be *applied* to M to produce M'. The reader only interested in performing registrations can in principle skip to the examples. However, the underlying mathematics are described next and understanding them can prove useful.

We will leave aside the question of how the registration process itself is implemented. The ANTs algorithm is mathematically rather sophisticated. Simpler algorithms to understand include affine registration, or, for nonlinear algorithms, the ART algorithm described in Ardekani et al. (2011), or the Elastix algorithm (see the manual). Instead, we will concern ourselves with the question of what is a transform, and how is it applied?
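Registration-as-optimization can be illustrated with a deliberately crude 1-D toy: exhaustive search over integer shifts minimizing a sum-of-squared-differences cost. (All names and data here are hypothetical; real algorithms use richer transform families, interpolation, and gradient-based optimizers.)

```python
import numpy as np

# Toy 1-D "registration": find the shift T(q) = q + t minimizing a
# sum-of-squared-differences cost between the fixed image and the
# warped moving image.
fixed = np.zeros(30); fixed[10:15] = 1.0    # a "feature" at positions 10..14
moving = np.zeros(30); moving[18:23] = 1.0  # the same feature, shifted by 8

def ssd_cost(t):
    # Warp the moving image onto the fixed grid by look-up at q + t
    # (out-of-range points contribute 0).
    idx = np.arange(30) + t
    warped = np.where((idx >= 0) & (idx < 30), moving[np.clip(idx, 0, 29)], 0.0)
    return np.sum((fixed - warped) ** 2)

best_t = min(range(-15, 16), key=ssd_cost)  # exhaustive search over shifts
```

Note that even in this toy, the "transform" found is used to look up moving-image values at fixed-grid points, which is the convention formalized next.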

The reader might guess that a transform of M to F is simply a mapping D_{M} → D_{F}. That's not quite correct: this would let us transform a point in D_{M} to a point in D_{F}, but an image is not a collection of points, but rather a function on some domain. So, we need a transformation to be an object that can transform a function I : D_{M} → R into a function I' : D_{F} → R. There's no way of using a function D_{M} → D_{F} to do this without assuming we can invert the function, in which case we are actually using a function D_{F} → D_{M}. Since we don't want to assume invertibility (even if an inverse exists, it might be hard to compute quickly and accurately), this will be our definition: *a transformation warping image M to image F is a mapping D_{F} → D_{M}* - this is what the registration algorithm must output. To warp or 'resample' the image, one looks up the value of the original at each new grid point, interpolating between voxels as necessary. Our definition of image has guided us to the correct definition of transform and the correct way of applying it.

In symbols: applying T_{F,M} to M yields a new image M', where M'(q) = M(T_{F,M}(q)) for each q in D_{F} (or M' = M ∘ T_{F,M} for short).

In short: a *transform (warping the moving image to the fixed image) is a function mapping between the domains in the opposite direction (from the domain of the fixed image to the domain of the moving image)*. One also says that images transform "backward" across functions (by pullback), while by definition functions transform points "forward".
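As a toy illustration of this definition (1-D, unit spacing, all names ours): a transform T_{F,M} : D_F → D_M is applied by looking up the moving image at T(q) for each grid point q of the fixed domain.

```python
import numpy as np

# A 1-D "moving image": samples of M on its domain D_M = [0, 8].
moving = np.array([0., 1., 2., 3., 4., 5., 6., 7., 8.])

def M(x):
    """The moving image as a function D_M -> R (linear interpolation)."""
    x = float(np.clip(x, 0, len(moving) - 1))
    i = int(np.floor(x)); f = x - i
    return moving[i] if f == 0 else (1 - f) * moving[i] + f * moving[i + 1]

def T(q):
    """A (hypothetical) transform D_F -> D_M: fixed coords shifted by 2."""
    return q + 2.0

# Applying the transform: for each grid point q of the FIXED domain,
# look up M at T(q).  The warped image M' = M o T lives on D_F.
fixed_grid = np.arange(5.0)            # D_F = {0, 1, 2, 3, 4}
warped = np.array([M(T(q)) for q in fixed_grid])
```

Real transforms are of course 3-D and land between voxel centres, so the look-up uses the interpolation described earlier.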

## Basic example

We'll use the antsRegistrationSyNQuick.sh script to illustrate the above concepts since its arguments correspond quite directly to them. Assume $fixed and $moving are image files in approximately the same orientation but not necessarily the same spatial location (if not, consider doing a rigid alignment first, either manually via register or via antsAI, rotational_minctracc.py, or similar); for instance, on MICe systems you may use

$ export fixed=/hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.10_dc_N_I_lsq6.mnc
$ export moving=/hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.11_dc_N_I_lsq6.mnc

Then we can register them via

$ antsRegistrationSyNQuick.sh -d 3 -t s -f ${fixed} -m ${moving} -o $(basename ${fixed})_to_$(basename ${moving})

Here `-d 3` specifies the image dimensionality, `-t s` specifies the transformation type (in this case consecutive rigid, affine, and nonlinear SyN registrations), `-f` and `-m` of course specify the fixed and moving images, and `-o` some arbitrary prefix for the output files.

The following output files are produced:

- fixed_to_moving0GenericAffine.mat (the rigid + affine transformation)
- fixed_to_moving1Warp.nii.gz (the nonlinear transformation)
- fixed_to_moving1InverseWarp.nii.gz (the inverse nonlinear transformation)
- fixed_to_movingWarped.nii.gz (the moving image warped to the fixed image)
- fixed_to_movingInverseWarped.nii.gz (the fixed image warped to the moving image)

Since SyN is a 'symmetric' algorithm, it produces inverse transforms and warped images.

If we didn't have the fixed_to_movingWarped.nii.gz file, we could create it by applying the transforms produced above as follows:

$ antsApplyTransforms -d 3 \
    -i ${moving} -r ${fixed} -o fixed_to_movingWarped.nii.gz \
    -t fixed_to_moving1Warp.nii.gz -t fixed_to_moving0GenericAffine.mat

Here `-r` specifies the reference image used to determine the voxel grid of the new image, and `-t` is used repeatedly to specify all transforms to apply, listed in reverse order of application (this order is simply what the ANTs implementers chose, perhaps to mimic the usual right-to-left order of function composition).

Both of these programs take many other options; e.g., antsRegistrationSyNQuick.sh can accept masks or initial transformations of the fixed or moving image, while antsApplyTransforms can take options to change the interpolator (the default is trilinear, while you need to use a nearest neighbour/generic label interpolator for segmentation volumes) or to apply inverse transforms via `-t [ transform_to_invert,1 ]`. Regarding inverse transforms, note that only transforms invertible in closed form, such as affine transforms, will work; ANTs won't numerically invert an arbitrary transform for you. Also, inverting a composite transform is achieved by composing the inverses in reverse order, so to apply the inverse transform we'd use `-t [ fixed_to_moving0GenericAffine.mat,1 ] -t fixed_to_moving1InverseWarp.nii.gz`.
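The "inverses in reverse order" rule is ordinary linear algebra in the affine case; here is a quick NumPy sanity check (illustrative only, with random invertible affine maps in homogeneous coordinates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two invertible 2-D "affine" transforms in homogeneous coordinates.
# (I + R with R in [0,1)^{2x2} has positive determinant, so A, B invert.)
A = np.eye(3); A[:2, :2] = np.eye(2) + rng.random((2, 2)); A[:2, 2] = rng.random(2)
B = np.eye(3); B[:2, :2] = np.eye(2) + rng.random((2, 2)); B[:2, 2] = rng.random(2)

# The composite "apply B, then A" is the matrix product A @ B ...
C = A @ B

# ... and its inverse composes the inverses in reverse: B^{-1}, then A^{-1}.
C_inv = np.linalg.inv(B) @ np.linalg.inv(A)
```

The same bookkeeping underlies the ordering of the `-t` arguments above.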

Note that antsRegistrationSyNQuick.sh won't generate MINC-format images (though `Display` and `register` can read NIfTI files) or MINC-compatible transforms. That is easily addressed using the techniques of the next section.

## Using antsRegistration directly

The above command is a good starting point, but let's look at the generated antsRegistration command it printed when it began running (here $fixed and $moving have been expanded):

/axiom2/projects/software/arch/linux-xenial-xerus/ANTs/20201016/bin/antsRegistration \
    --verbose 1 --dimensionality 3 --float 0 --collapse-output-transforms 1 \
    --output [ fixed_to_moving,fixed_to_movingWarped.nii.gz,fixed_to_movingInverseWarped.nii.gz ] \
    --interpolation Linear --use-histogram-matching 0 --winsorize-image-intensities [ 0.005,0.995 ] \
    --initial-moving-transform [ /hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.10_dc_N_I_lsq6.mnc,/hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.11_dc_N_I_lsq6.mnc,1 ] \
    --transform Rigid[ 0.1 ] \
    --metric MI[ /hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.10_dc_N_I_lsq6.mnc,/hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.11_dc_N_I_lsq6.mnc,1,32,Regular,0.25 ] \
    --convergence [ 1000x500x250x0,1e-6,10 ] --shrink-factors 12x8x4x2 --smoothing-sigmas 4x3x2x1vox \
    --transform Affine[ 0.1 ] \
    --metric MI[ /hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.10_dc_N_I_lsq6.mnc,/hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.11_dc_N_I_lsq6.mnc,1,32,Regular,0.25 ] \
    --convergence [ 1000x500x250x0,1e-6,10 ] --shrink-factors 12x8x4x2 --smoothing-sigmas 4x3x2x1vox \
    --transform SyN[ 0.1,3,0 ] \
    --metric MI[ /hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.10_dc_N_I_lsq6.mnc,/hpf/largeprojects/MICe//tools//pydpiper-test-images/lsq6-aligned/img_27jul16.11_dc_N_I_lsq6.mnc,1,32 ] \
    --convergence [ 100x100x70x50x0,1e-6,10 ] --shrink-factors 10x6x4x2x1 --smoothing-sigmas 5x3x2x1x0vox

This command is daunting, but we can simply run it as-is or with slight modifications. For instance, we could change the output files to .mnc instead of .nii.gz in the --output section, or add the --minc flag to generate MNI .xfm files instead of ITK-style transform files (.mat and .nii.gz). We won't go into full details here, but see the page Anatomy of an antsRegistration Call.

Note that a mutual information ("MI") metric was used. This metric is suitable for inter-modality registration (e.g. MRI to CT, or structural MRI to dMRI). For intra-modality registration, the cross-correlation ("CC") metric could be used instead; in fact, the slower antsRegistrationSyN.sh script generates a CC-based antsRegistration command. Note that the MI parameters don't have a spatial scale, but one of the CC parameters is a radius (in mm), which should be adjusted to a much smaller value than the antsRegistrationSyN.sh defaults when registering rodent images; otherwise immense amounts of time and memory will be needed.
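For intuition about what the MI metric measures, here is a minimal histogram-based mutual information estimator. This is a simplification for illustration only (the function name is ours; ANTs' actual MI implementation differs in sampling and smoothing details): identical images share maximal information, while unrelated images share almost none, and the metric never assumes a linear intensity relationship, which is why it works across modalities.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of the mutual information (in nats) between
    the intensities of two equally-shaped images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0) terms
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
```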

## Hints and caveats

You can often increase fine accuracy by adding some nonzero number of iterations at the native resolution (e.g. `--convergence [ 100x100x100x100x50 ]` or similar in the `--transform SyN` part of the registration) at the cost of greatly increased time and memory usage. To save yourself lots of time, make sure the registration is working at a coarse level before doing this!

You can add multiple cost functions (e.g. to use both cross-correlation and mutual information, or for multimodal acquisitions, or to use existing masks or segmentations in a registration, or to use some manually or automatically determined point-sets, or to apply extra filters such as the gradients, which are used in Pydpiper pipelines) with weights of your choice.

Masking is quite useful: the ANTs developers currently recommend both applying a hard mask to the image file itself and supplying a soft mask delineating some extraneous (blank, because of the hard mask) space around the ROI. Of course, obtaining such masks may not be possible prior to completing an initial registration.

Cropping the image via mincbbox or similar can increase computation speed, though care should be taken to leave enough space around the ROI for reasonable deformations to be created.

## Connection to MINC transformation conventions

The MINC suite has various registration programs (minctracc) and utilities for warping .mnc images (mincresample) and manipulating transforms stored as MNI transform (.xfm) files (xfminvert, xfmconcat). You can also use the utilities itk_convert and itk_convert_xfm to convert images or transforms between formats (but I suggest using recent versions of mnc2nii and nii2mnc when possible due to more correct handling of coordinate systems).

MINC registration programs such as minctracc typically refer to 'source' and 'target' rather than 'fixed' and 'moving' images, e.g., `minctracc ${options} $source $target source_to_target.xfm` followed by `mincresample -like $target -transformation source_to_target.xfm $source source_to_target_resampled.mnc`. From this you might guess the (incorrect) correspondence source ~ moving, target ~ fixed, mincresample ~ antsApplyTransforms. However, mincresample and antsApplyTransforms give different results, which is resolved by inspecting the mincresample source code: without the -invert flag the libminc function invert_general_transform is used, while mincresample -invert calls general_transform. Thus, applying a transform in the sense of antsApplyTransforms is actually achieved by mincresample -invert, and for this to make sense we must have source ~ fixed, target ~ moving. (Also, we should prefer to use `-invert` when possible to avoid numerically inverting a transform!)

The resulting oddity to keep in mind is that in the MINC world, individual subjects are typically 'source' and templates 'target', while in the ANTs world, subjects are typically 'moving' and templates 'fixed'. So the 'direction' in which a registration algorithm has been applied will likely differ between the two worlds (assuming one is interested in warping a subject to a template in both cases), resulting in different transformations, since most registration algorithms are not symmetric or invertible, and even ANTs's SyN transformations are only so up to various numerical approximations. It's the extra inversion hidden in `mincresample` which reconciles everything, of course.

Another issue to keep in mind is that one often wants to initialize the registration with some existing transform of the moving image, say from a previous stage of registration. (This is not equivalent to resampling the moving image first, since any cost function applied to the transform won't consider this pre-applied part, and resampling first also introduces some extra numerical error.) ANTs tools allow --initial-moving-transform as well as --initial-fixed-transform, but `minctracc` allows only `-transformation`, which is applied to the source, i.e. fixed, image. One way to get around this might be to use minctracc's -invert option to recover a transform going in the opposite direction, effectively identifying source with moving and target with fixed, but it should be checked whether this actually inverts the transform or just swaps the images in the registration process.

## Jacobian determinants

Recall that given a differentiable function f : D → R^{n} defined on some n-dimensional region D, we can compute the **Jacobian determinant** function detJ(f) : D → R. (Mathematically, this is the composition of the derivative operator - which produces, at each point, a linear approximation of the function, representable as a matrix of partial derivatives - with the usual determinant, which gives the ratio by which a linear operator changes volumes.) The Jacobian determinant is therefore itself an image with domain D. The Jacobian determinant of f at a point *p* is interpreted as a "local volume change" at *p*: intuitively, it's the ratio by which f changes the volume of an infinitesimal cube near *p*, where a negative value means the orientation (aka handedness or chirality) of the image has been reversed (a 'left-right flip'). In practice, this is computed numerically by finite difference methods.
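The finite-difference computation can be sketched as follows (a synthetic example, not the ANTs implementation): for a hypothetical transform that is a uniform 10% expansion, the Jacobian determinant should be 1.1³ ≈ 1.331 at every voxel.

```python
import numpy as np

# A synthetic transform sampled on a 3-D grid: uniform scaling x -> 1.1 x.
n = 16
zz, yy, xx = np.meshgrid(np.arange(n), np.arange(n), np.arange(n), indexing="ij")
coords = np.stack([1.1 * zz, 1.1 * yy, 1.1 * xx], axis=0)

# Finite-difference Jacobian: the partial derivative of each output
# component with respect to each input axis, then a 3x3 determinant
# at every voxel.
J = np.empty((n, n, n, 3, 3))
for i in range(3):                 # output component
    grads = np.gradient(coords[i])  # d coords[i] / d axis, for each axis
    for j in range(3):             # input axis
        J[..., i, j] = grads[j]
detJ = np.linalg.det(J)            # the Jacobian-determinant "image"
```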

In neuroimaging the Jacobian determinant is often referred to as simply *the Jacobian* or *the determinant*, although this might confuse nearby mathematicians. Furthermore, we often compute the logarithm of the Jacobian for biological and statistical reasons and then also refer to *this* as the Jacobian!

The Jacobian is exactly what we need to understand the volumetric implications of a transformation (along with ROI-based statistics). If as usual our moving image is a subject and our fixed image is a template (or second subject), we obtain a transform T_{F,M} which is a function D_{F} → D_{M}. Therefore, a large Jacobian determinant of this transform at a point *p* of the fixed image means the moving image (subject) at *T_{F,M}(p)* is relatively larger than the fixed image (template) at *p* (assuming we believe the registration process has produced a transform corresponding to some notion of homology).

The ANTs suite contains the `CreateJacobianDeterminantImage` command for this purpose. In the MINC world, one typically obtains a transform between source and target, inverts it, and computes the Jacobian of the result (using a tedious sequence of commands: smooth_vector; minc_displacement; mincblob -det; mincmath -add -const 1, due to a quirk of mincblob; and finally mincmath -log). Using our dictionary from the previous section, such a transform is in the usual parlance a transform from fixed to moving; inverting it gives a transform from moving to fixed, of which one then computes the determinant, roughly approximating the numerically more accurate ANTs procedure.

## Related articles

See the ANTs documentation (manual, 'anatomy of an antsRegistration call'); Elastix documentation; MINC documentation.