Biowulf at the NIH
ProbABEL on Biowulf

ProbABEL is a package for genome-wide association analysis of imputed data, developed by Yurii Aulchenko and colleagues. ProbABEL currently implements linear regression, logistic regression, and Cox proportional hazards models. The corresponding analysis programs are called palinear, palogist, and pacoxph.

The environment variables need to be set properly first. The easiest way to do this is with the module commands, as in the example below.

$ module avail probabel
-------------------- /usr/local/Modules/3.2.9/modulefiles ----------------------

$ module load probabel

$ module list
Currently Loaded Modulefiles:
  1) probabel/0.2.0

$ module unload probabel

$ module load probabel/0.2.0

$ module show probabel
-------------------------------------------------------------------
/usr/local/Modules/3.2.9/modulefiles/probabel/0.2.0:

module-whatis    Sets up probabel 0.2.0
prepend-path     PATH /usr/local/apps/probabel/0.2.0/bin
-------------------------------------------------------------------


Submitting a single batch job

1. Create a script file along the following lines:

# This file is YourOwnFileName
#PBS -N yourownfilename
#PBS -m be
#PBS -k oe
module load probabel
cd /data/$USER/Directory
palogist \
-p logist_data.txt \
-d test.prob.fvi \
-i test.mlinfo \
-m test.map \
--ngpreds=2 \
-c 19 \
-o logist_prob_fv
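Before submitting, it can help to confirm that the input files the script references are actually readable, so the batch job does not die on a missing file. The sketch below is a hypothetical pre-flight check, not part of ProbABEL; the file names are taken from the example script above, and the demo directory stands in for your real data directory.

```shell
#!/bin/sh
# Hypothetical pre-flight check before qsub: verify the inputs palogist reads.
# demo_run is a placeholder directory; the touch lines stand in for real data.
dir=demo_run
mkdir -p "$dir"
touch "$dir/logist_data.txt" "$dir/test.prob.fvi" "$dir/test.mlinfo"

missing=0
for f in logist_data.txt test.prob.fvi test.mlinfo; do
    if [ ! -r "$dir/$f" ]; then
        echo "missing input: $dir/$f" >&2
        missing=1
    fi
done
[ "$missing" -eq 0 ] && echo "all inputs present in $dir"
```

Point `dir` at your own /data area and drop the touch lines when using this against real data.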

2. Submit the script.

$ qsub -l nodes=1 /data/username/theScriptFileAbove



Submitting a swarm of jobs

Using the 'swarm' utility, one can submit many jobs to the cluster to run concurrently.

Set up a swarm command file (e.g. /data/username/cmdfile). Here is a sample file:

module load probabel; cd /data/$USER/Dir1; palogist -p logist_data.txt -d test.prob.fvi .....
module load probabel; cd /data/$USER/Dir2; palogist -p logist_data.txt -d test.prob.fvi .....
module load probabel; cd /data/$USER/Dir3; palogist -p logist_data.txt -d test.prob.fvi .....
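For many directories, the command file can be generated with a short loop instead of being written by hand. This is a sketch: the Dir1..Dir3 names follow the sample file above, and the palogist options are those from the batch example; adjust both to your data.

```shell
#!/bin/sh
# Generate a swarm command file with one palogist run per data directory.
# Dir1..Dir3 are placeholders from the sample file above.
: > cmdfile
for i in 1 2 3; do
    # \$USER is escaped so the literal string $USER lands in cmdfile
    # and is expanded later, when swarm executes each line.
    echo "module load probabel; cd /data/\$USER/Dir$i; palogist -p logist_data.txt -d test.prob.fvi -i test.mlinfo -c 19 -o logist_prob_fv" >> cmdfile
done
```

The resulting cmdfile is then submitted with swarm as described below.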

By default, each line of the command file is executed on one processor core of a node and may use up to 1 GB of memory. If that is not what you want, specify the '-g' flag when you submit the job on Biowulf.

For example, if each command needs 10 GB of memory instead of the default 1 GB, let swarm know by including the '-g 10' flag:

$ swarm -g 10 -f cmdfile

For more information regarding running swarm, see swarm.html


Running an interactive job

Users may sometimes need to run jobs interactively. Such jobs should not be run on the Biowulf login node; instead, allocate an interactive node as described below and run the job there.

$ qsub -I -l nodes=1
qsub: waiting for job 2236960.biobos to start
      qsub: job 2236960.biobos ready 
$ cd /data/$USER/Dir1
$ module load probabel
$ palogist -p logist_data.txt -d test.prob.fvi .....
$ exit
qsub: job 2236960.biobos completed

You can add node properties to the qsub command to request a specific type of interactive node. For example, if you need a node with 24 GB of memory to run a job interactively, do this:

$ qsub -I -l nodes=1:g24:c16