Biowulf at the NIH
optiCall on Biowulf

optiCall is designed to make accurate genotype calls across the minor allele frequency spectrum. By using intensity information from multiple individuals and multiple SNPs when calling genotypes, it can call both rare and common variants accurately.

When citing optiCall, please use this paper.

The environment variable(s) need to be set properly first. The easiest way to do this is by using the module commands, as in the example below.

$ module avail opticall
-------------------- /usr/local/Modules/3.2.9/modulefiles ----------------------

$ module load opticall

$ module list
Currently Loaded Modulefiles:
  1) opticall/0.6.4

$ module unload opticall

$ module load opticall/0.6.4
$ module show opticall

module-whatis    Sets up optiCall 0.6.4
prepend-path     PATH /usr/local/apps/opticall/0.6.4 


Submitting a single batch job

1. Create a script file along the following lines.

#!/bin/bash
# This file is YourOwnFileName
#PBS -N yourownfilename
#PBS -m be
#PBS -k oe

module load opticall
cd /data/$USER/mydir
opticall -in yourinputfile -out example.out

2. Submit the script using the 'qsub' command on Biowulf.

$ qsub -l nodes=1 /data/$USER/theScriptFileAbove

Submitting a swarm of jobs

Using the 'swarm' utility, one can submit many jobs to the cluster to run concurrently.

Set up a swarm command file (eg /data/username/cmdfile). Here is a sample file which runs optiCall on several input files.

opticall -in example1.txt -out example1.out
opticall -in example2.txt -out example2.out
opticall -in example3.txt -out example3.out
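For larger runs, the command file itself can be generated with a short shell loop instead of being written by hand. The sketch below assumes numbered input files; the filenames example1.txt through example3.txt are hypothetical placeholders.

```shell
#!/bin/bash
# Write one opticall command per input file into the swarm command file.
# example1.txt ... example3.txt are hypothetical input filenames.
rm -f cmdfile
for i in 1 2 3; do
    echo "opticall -in example${i}.txt -out example${i}.out" >> cmdfile
done
```

The resulting cmdfile can then be submitted with the swarm command shown below.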

Submit this job with

$ swarm -f cmdfile --module opticall

By default, each of the commands above will run on a single core of a node and use 1 GB of memory. If each command (a single line in the file above) requires more than 1 GB of memory, you must specify the required memory using the swarm -g # flag. For example:

$ swarm -g 4 -f cmdfile --module opticall

will tell swarm that each command above requires 4 GB of memory and will set up the optiCall environment variables for each swarm job.

For more information regarding running swarm, see swarm.html


Running an interactive job

Users may sometimes need to run jobs interactively. Such jobs should not be run on the Biowulf login node. Instead, allocate an interactive node as described below and run the interactive job there.

$ qsub -I -l nodes=1
qsub: waiting for job 2236960.biobos to start
qsub: job 2236960.biobos ready

$ module load opticall

$ cd /data/user/myruns
$ opticall -in yourinputfile -out example.out
$ exit
qsub: job 2236960.biobos completed

If you need a node with more memory, you can specify this on the qsub command line. For example, if you need a node with 8 GB of memory to run a job interactively, do this:

$ qsub -I -l nodes=1:g8