
  1. Before logging in to the cluster, organize your files so that HPF has access to them: move them to /hpf if they are not already there.
  2. Log in to the cluster

    Code Block
    ssh -AX

  3. Start a larger-than-usual qlogin session

    Code Block
    qlogin -X -l vmem=20G,walltime=40:00:00

  4. Then you are ready to load up the required modules

    Code Block
    module use /hpf/largeprojects/MICe/tools/modulefiles && \
    module load mice-env

  5. Then you can start an R session as usual and load up RMINC

    Code Block
    R
    library(RMINC)
  6. To run an example mincLmer from Darren and Lily's work, run the following

    Code Block
     # Table of input files and covariates (files, sex, age, and ID columns)
     df <- read.csv("/hpf/largeprojects/MICe/dfernandes/lqiu_long/all_relative_jacobians/files.csv")
     # Brain mask restricting the voxelwise analysis
     mask <- "/hpf/largeprojects/MICe/dfernandes/lqiu_long/test_p65dc2/mask/mask_dimorder_eroded.mnc"
     # Voxelwise linear mixed-effects model, run as 100 jobs via the PBS scheduler
     vslmer <- mincLmer(files ~ sex * age + (1 | ID), parallel = c("pbs", 100), data = df, mask = mask)

    This example should run in 10-20 minutes. Note the parallel argument: this is where you set the number of jobs to run, in this case 100. The first element ("pbs") is now checked only to see whether it is "local"; "local" tells RMINC to run the job on multiple cores of a single machine, while anything else defers to our underlying job-scheduling package, BatchJobs. See the RMINC documentation for more details, as well as a complete list of functions that can be parallelized in this way. Darren has also written a page on how to parallelize voxelwise statistics.
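
    The two parallelization modes described above can be contrasted in a short sketch. The formula, data frame, and mask are from the example in step 6; the core and job counts here are illustrative, not recommendations:

    Code Block
    # "local": run on multiple cores of the current machine
    # (e.g. inside the qlogin session) -- here, 4 cores
    vslmer_local <- mincLmer(files ~ sex * age + (1 | ID),
                             parallel = c("local", 4),
                             data = df, mask = mask)

    # anything other than "local": submit jobs through the cluster
    # scheduler via BatchJobs -- here, 100 PBS jobs
    vslmer_pbs <- mincLmer(files ~ sex * age + (1 | ID),
                           parallel = c("pbs", 100),
                           data = df, mask = mask)

    The "local" form is convenient for quick tests within a single qlogin session; the scheduler form is the one to use for full-scale runs like the 100-job example above.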