Biowulf at the NIH
Picard on Biowulf

Picard comprises Java-based command-line utilities that manipulate SAM files, and a Java API (SAM-JDK) for creating new programs that read and write SAM files. Both SAM text format and SAM binary (BAM) format are supported.

The Picard documentation notes that most commands are designed to run within 2 GB of JVM heap, so the JVM argument -Xmx2g is recommended. However, we have observed that some jobs require more than 2 GB; otherwise the job finishes with incorrect output files and no error message, so users won't know the output is incomplete.
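Because an out-of-memory Picard run can finish with a truncated output file and no error, a coarse post-run check is worthwhile. Below is a minimal sketch; the function name and file names are our own placeholders, not part of Picard or Biowulf:

```shell
# Hedged sketch: verify that a Picard output file exists and is non-empty,
# so a silent out-of-memory truncation is at least partially caught.
check_picard_output() {
    if [ ! -s "$1" ]; then
        echo "ERROR: $1 is missing or empty; retry with a larger -Xmx" >&2
        return 1
    fi
}

# Typical use after a run such as:
#   java -Xmx8g -jar $PICARDJARPATH/SortSam.jar INPUT=in.bam OUTPUT=out.bam
#   check_picard_output out.bam
```

Note this only catches empty or missing files; a file can still be incomplete, so comparing record counts between input and output is a stronger check.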

Program Location

There are multiple versions of Picard available. An easy way of selecting the version is to use modules. To see the modules available, type

module avail picard

To select a module, type

module load picard/[ver]

where [ver] is the version of choice. This will set an environment variable $PICARDJARPATH, which points to a directory holding the version's jar files.
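Since every example below relies on $PICARDJARPATH, a job script can fail fast if the module was not loaded. A minimal sketch (the function name is our own, not a Biowulf utility):

```shell
# Hedged sketch: abort early if the picard module has not been loaded,
# i.e. $PICARDJARPATH is unset or empty.
require_picard() {
    if [ -z "$PICARDJARPATH" ]; then
        echo "ERROR: run 'module load picard' first" >&2
        return 1
    fi
}
```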

Submitting a single batch job

1. Create a script file containing lines similar to those below. Adjust the paths before running.

# This file is YourOwnFileName
#PBS -N yourownfilename
#PBS -m be
#PBS -k oe
module load picard
cd /data/user/somewhereWithInputFile
java -Xmx8g -jar $PICARDJARPATH/AddOrReplaceReadGroups.jar INPUT=XXX OUTPUT=XXX

2. Submit the script using the 'qsub' command on Biowulf, e.g. below. Note: users are recommended to run benchmarks to determine what kind of node is suitable for their jobs.

[user@biowulf]$ qsub -l nodes=1 /data/username/theScriptFileAbove
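The two steps above can be sketched end to end: write the batch script with a here-document, then submit it. The tool, paths, and file names below follow the placeholders used in this page, and the qsub line is commented out since it only works on Biowulf.

```shell
# Hedged sketch of steps 1 and 2. All paths and file names are placeholders.
cat > picard_job.sh <<'EOF'
#PBS -N picardjob
#PBS -m be
#PBS -k oe
module load picard
cd /data/user/somewhereWithInputFile
java -Xmx8g -jar $PICARDJARPATH/AddOrReplaceReadGroups.jar INPUT=XXX OUTPUT=XXX
EOF

# Submit from the Biowulf login node:
#   qsub -l nodes=1 picard_job.sh
```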

Useful commands:

freen: see which nodes are currently free

qstat: check the status of submitted jobs; search for 'qstat' in the Biowulf user guide for its usage.

jobload: check the load on your job's nodes; search for 'jobload' in the Biowulf user guide for its usage.

Submitting a swarm of jobs

Using the 'swarm' utility, one can submit many jobs to the cluster to run concurrently.

Set up a swarm command file (eg /data/username/cmdfile). Here is a sample file:

cd /data/user/run1/; java -Xmx8g -jar $PICARDJARPATH/AddOrReplaceReadGroups.jar INPUT=XXX OUTPUT=XXX OPTION1=XXX OPTION2=XXX

cd /data/user/run2/; java -Xmx8g -jar $PICARDJARPATH/AddOrReplaceReadGroups.jar INPUT=XXX OUTPUT=XXX OPTION1=XXX OPTION2=XXX

..........

cd /data/user/run10/; java -Xmx8g -jar $PICARDJARPATH/AddOrReplaceReadGroups.jar INPUT=XXX OUTPUT=XXX OPTION1=XXX OPTION2=XXX

The -f flag is required. The -t and -g flags may be needed to specify the number of threads and the amount of memory for each Picard process.

By default, each command line in the above swarm file will be executed on one processor core of a node and may use 1 GB of memory. If this is not what you want, specify the -t and -g flags when you submit the job on Biowulf.

For example, if each command above requires 10 GB of memory instead of the default 1 GB, you must include -g 10 in the swarm command:

[user@biowulf]$ swarm -g 10 -f cmdfile --module picard
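For many runs, the swarm command file can be generated with a loop rather than written by hand. A minimal sketch, assuming the run1 through run10 directory layout shown above:

```shell
# Hedged sketch: generate the ten swarm command lines shown above.
# Directories, tool, and file names are placeholders.
for i in $(seq 1 10); do
    echo "cd /data/user/run$i/; java -Xmx8g -jar \$PICARDJARPATH/AddOrReplaceReadGroups.jar INPUT=XXX OUTPUT=XXX"
done > cmdfile

# Then submit on Biowulf:
#   swarm -g 10 -f cmdfile --module picard
```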

For more information regarding running swarm, see the swarm documentation (swarm.html).


Running an interactive job

Users may sometimes need to run jobs interactively. Such jobs should not be run on the Biowulf login node. Instead, allocate an interactive node as described below and run the interactive job there.

[user@biowulf] $ qsub -I -l nodes=1
qsub: waiting for job 2236960.biobos to start
      qsub: job 2236960.biobos ready
[user@p4]$ cd /data/userID/picard/run1
[user@p4]$ module load picard
[user@p4]$ java -Xmx8g -jar $PICARDJARPATH/AddOrReplaceReadGroups.jar INPUT=XXX OUTPUT=XXX OPTION1=XXX OPTION2=XXX
[user@p4]$ java -Xmx8g -jar $PICARDJARPATH/AddOrReplaceReadGroups.jar INPUT=XXX OUTPUT=XXX OPTION1=XXX OPTION2=XXX
[user@p4]$ ...........
[user@p4]$ exit
qsub: job 2236960.biobos completed

Users may add node properties to the qsub command to request a specific type of interactive node. For example, if you need a node with 24 GB of memory to run jobs interactively, do this:

[user@biowulf]$ qsub -I -l nodes=1:g24:c16