The code was written for the following use case: all subjects in your data set are scanned multiple times, but in contrast to the type of longitudinal data used for the longitudinal registration tools, all timepoints for a given subject can be registered together. This is done using iterative group-wise registration to create a subject-specific average. All of these averages are then registered together, again using an iterative group-wise procedure, to create a population average.

Typical run

The input files to the pipeline are read from a comma-separated file containing the (string) columns 'file' and 'group' (and possibly others); for instance:
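A minimal input file might look like the following (the filenames and group labels are made up for illustration; only the 'file' and 'group' columns are required). The last command simply counts scans per group, to show that all rows sharing a 'group' value belong to the same subject:

```shell
# hypothetical example input file -- filenames are illustrative only
cat > input.csv <<'EOF'
file,group
img_s1_t1.mnc,subject1
img_s1_t2.mnc,subject1
img_s2_t1.mnc,subject2
img_s2_t2.mnc,subject2
EOF

# count scans per group: all files sharing a 'group' value are
# registered together at the first level
tail -n +2 input.csv | cut -d, -f2 | sort | uniq -c
```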


Code Block

# a typical invocation; the command name and the trailing CSV argument are
# assumed here for illustration -- check your Pydpiper installation
twolevel_model_building.py \
--num-executors=600 \
--init-model=/hpf/largeprojects/MICe/tools/initial-models/Pydpiper-MEMRI-90micron-saddle-july-2015/p65_MEMRI_mouse_brain.mnc \
--pipeline-name=test \
--lsq12-protocol=/hpf/largeprojects/MICe/tools/protocols/linear/Pydpiper_testing_default_lsq12.csv \
--nlin-protocol=/hpf/largeprojects/MICe/tools/protocols/nonlinear/ \
--registration-method=ANTS \
--output-dir=test \
--no-run-maget \
--default-job-mem=8 \
--maget-no-mask \
input.csv

What the output looks like

The first level registrations will be stored in the directory:


Code Block

# (inferred from the QC commands below)
# <pipeline-name>_first_level    e.g. test_first_level for the run above
# which will contain:
#   per-subject *_processed directories (per-timepoint registration output)
#   per-subject *_nlin directories (the subject-specific averages)

Manual verification of the pipeline

Generating the following images will give you a sense of how well the registration worked.

Code Block
# the first level registrations:
# set scale to 1 for human data, and to 20 for mouse data
scale=20
for nlindir in *_first_level/*_processed; do
  topleveldir=`dirname $nlindir`
  subjectbase=`basename $nlindir _nlin`
  for subj in ${topleveldir}/${subjectbase}/*/resampled/*N_I_lsq6_lsq12_and_nlin-resampled.mnc; do
    subjbase=`basename $subj .mnc`
    mincpik -clobber -scale $scale --triplanar $subj /tmp/${subjectbase}_sample_${subjbase}.png
  done
  mincpik -clobber -scale $scale --triplanar ${nlindir}/*/*nlin-3.mnc /tmp/${subjectbase}_avg.png
  montage -geometry +2+2 /tmp/${subjectbase}_avg.png /tmp/${subjectbase}_sample_*png ${topleveldir}/${subjectbase}_QC.png
done

# view the images from the command line:
eog *_first_level/*_QC.png

# the second level registrations:
for secondlevelnlin in *_second_level/*_nlin; do
  secondleveltopdir=`dirname $secondlevelnlin`
  for subjavg in ${secondleveltopdir}/second_level_processed/*/resampled/*final-nlin.mnc; do
    subjavgbase=`basename $subjavg .mnc`
    mincpik -clobber -scale $scale --triplanar $subjavg /tmp/second_level_sample_${subjavgbase}.png
  done
  mincpik -clobber -scale $scale --triplanar ${secondlevelnlin}/*nlin-3.mnc /tmp/second_level_avg.png
  montage -geometry +2+2 /tmp/second_level_avg.png /tmp/second_level_sample_*.png ${secondleveltopdir}/secondlevel_QC.png
done

eog *_second_level/*_QC.png

Stats volumes

The original intended purpose of the twolevel_model_building code can be explained by the following example: say you have a drug and you want to determine the effects the drug has on brain growth. Brain growth can be determined from the deformations (the Jacobian determinants of the deformation fields) produced by the first level registrations. In order to compare brain growth between your subjects, these Jacobian determinants need to live in a common space; this is what the second level registrations are for. The Jacobian determinants from the first level are resampled into the space of the final second level average, and this is what you will do your statistics on:
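The resampling step described above can be sketched with the standard MINC tools (the pipeline performs this automatically; the file and transform names below are hypothetical):

```shell
# hedged sketch -- the pipeline generates these resampled determinant
# volumes itself; file names here are hypothetical
mincresample -2 -sinc \
  -like second_level_average.mnc \
  -transformation subject_avg_to_second_level.xfm \
  subject_jacobians.mnc subject_jacobians_common_space.mnc
```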



There is another way to link all the input files together: concatenate each subject's transformation to its first level average with the transformation from that first level average to the second level average, and determine the Jacobian determinants from the concatenated transformations. This is currently not done by the pipeline.
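As a sketch, that alternative could be done with the standard MINC utilities for concatenating transforms and computing determinants (again, the file names are hypothetical, and this is not what the pipeline currently does):

```shell
# hedged sketch -- not the pipeline's current behaviour; names hypothetical
# concatenate subject -> first level average -> second level average
xfmconcat subject_to_first_level.xfm first_level_to_second_level.xfm \
  subject_to_second_level.xfm
# Jacobian determinant of the combined deformation grid
mincblob -determinant subject_to_second_level_grid_0.mnc \
  subject_full_jacobians.mnc
```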

Generating stats originating in the second level average

If you are using Pydpiper version 1.17 or earlier, you can generate the statistics files originating in the second level average as follows. The code below contains comments indicating what is happening; in the second code block, all commands are submitted to SGE so that the processing is done on the farms.