- Before logging in to the cluster, make sure HPF has access to your files: move them to /hpf if they are not already there.
Log in to the cluster:
ssh -AX hpf.ccm.sickkids.ca
Start a larger-than-usual qlogin session:
qlogin -X -l vmem=20G,walltime=40:00:00
Then you are ready to load the required modules:
module use /hpf/largeprojects/MICe/tools/modulefiles
module load pandoc/2.0.4 libtiff gcc/4.9.1 python/3.5.2 R/3.3.2 octave minc-toolkit/1.9.1114 pyminc/0.4751 minc-stuffs/0.1.1721 RMINC/1.5.01.0
Then you can start an R session as usual. HPF (unlike our local system) does not assume that you want RMINC, so you need to load it explicitly with library(RMINC).
To run an example mincLmer call from Darren and Lily's work, run the following:
df <- read.csv("/hpf/largeprojects/MICe/dfernandes/lqiu_long/all_relative_jacobians/files.csv")
mask <- "/hpf/largeprojects/MICe/dfernandes/lqiu_long/test_p65dc2/mask/mask_dimorder_eroded.mnc"
vslmer <- mincLmer(files ~ sex*age + (1|ID), parallel = c("pbs", 100), data = df, mask = mask)
This example should run in 10-20 minutes. Note the parallel argument: its second element sets the number of jobs to run, in this case 100. The first element is checked only to see whether it is "local"; passing c("local", n) tells RMINC to run the job on n cores of a single machine, while anything else (such as "pbs") defers to our underlying job-scheduling package, BatchJobs. See https://rawgit.com/Mouse-Imaging-Centre/RMINC/master/inst/documentation/RMINC_Parallelism.html for more details, as well as a complete list of functions that can be parallelized in this way.