- Choose short file names where possible. Although the exact nomenclature may change as Pydpiper matures, output file names and transforms are based on the input file names. An input file name like img_04apr12.2_jan2011_distortion_corrected.mnc will produce very unwieldy output names, particularly if you are running the registration chain code.
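A quick illustration of how this compounds: Pydpiper derives output names by appending stage-specific suffixes to the input name. The suffix below is made up for illustration, not an exact Pydpiper naming convention.

```shell
# Hypothetical sketch: a long input name propagates into every derived file.
input=img_04apr12.2_jan2011_distortion_corrected.mnc
base=${input%.mnc}                               # strip the extension
echo "${base}-resampled-to-common-space.mnc"     # an example derived name
```

Compare the result with what you would get from an input simply named img.mnc.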
- If your pipeline is taking a long time to run and you have additional computational resources available, you can start another pipeline_executor.py from the command line. This increases the number of pipeline stages that can run at once, speeding up your job.
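A rough sketch of what launching an extra executor might look like. The flag names here are assumptions, not confirmed options; check pipeline_executor.py --help for your Pydpiper version. The command is only echoed, not executed.

```shell
# Hypothetical invocation: attach one additional executor to the server
# that the running pipeline started (flag names are assumptions).
cmd="pipeline_executor.py --num-executors=1 --uri-file=uri"
echo "$cmd"
```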
- When running jobs through a batch queueing system (specifically, the --queue=sge option here at MICe), be sure to specify a reasonable amount of memory with --mem. Until we have made some updates to the executor, ask for the maximum amount of memory any stage will need, even if most stages of the pipeline do not need it. For standard 56 micron files, --mem=6 (6 GB) should be sufficient; this is the current default. If you need a large amount of memory, also specify --sge-queue-opts=bigmem.q.
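Putting those options together, a command line might look like the sketch below. Only the queue and memory flags come from the notes above; the application name (MAGeT.py, mentioned later in this document) and the elided file arguments are placeholders, and the command is echoed rather than run.

```shell
# Hypothetical sketch combining the SGE options discussed above.
cmd='MAGeT.py --queue=sge --mem=6 --sge-queue-opts=bigmem.q ...'
echo "$cmd"
```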
- Pay attention to your pipelines. Due to a known bug in the code (github issues #30, #36), even after the pipeline is complete, the clients (executors) and server (the pipeline as constructed by, e.g., MAGeT.py) do not disconnect properly. If you suspect that your pipeline should be complete but it has not yet finished (particularly when running on a cluster), check your output files. If the pipeline has in fact completed, you will need to kill your jobs manually (either via Ctrl+C or qdel).
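On a cluster, the manual cleanup might look like the sketch below; the job ID placeholder is deliberate and should be replaced with the IDs qstat reports. The commands are echoed here rather than executed.

```shell
# Sketch: inspect and remove lingering pipeline jobs on an SGE cluster.
echo 'qstat -u "$USER"   # list your remaining jobs'
echo 'qdel <job_id>      # delete each lingering executor/server job by ID'
```

On a local workstation, Ctrl+C in the terminal running the pipeline serves the same purpose.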