Biowulf at the NIH
RSS Feed
Pindel on Biowulf

Pindel can detect breakpoints of large deletions, medium-sized insertions, inversions, tandem duplications and other structural variants at single-base resolution from next-gen sequence data. It uses a pattern growth approach to identify the breakpoints of these variants from paired-end short reads.
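As a sketch of a typical run (the BAM path, insert size, sample label, and reference fasta location below are hypothetical placeholders; see the Documentation link at the bottom for the full option list), pindel reads a small configuration file listing one BAM file per line, then is invoked with a reference fasta and an output prefix:

```shell
# Write a pindel BAM configuration file: one line per sample, giving
# the BAM path, the expected insert size, and a sample label.
# All names here are hypothetical placeholders.
cat > pindel_config.txt <<'EOF'
/data/$USER/sample1.bam 250 sample1
EOF

# Sketch of an invocation: -f reference fasta, -i config file,
# -c chromosome to search (ALL = whole genome), -o output prefix.
# The reference path is an assumption; substitute your own.
if command -v pindel >/dev/null 2>&1; then
    pindel -f /data/$USER/ref/genome.fa -i pindel_config.txt -c ALL -o sample1_out
else
    echo "pindel not on PATH; run this on a Biowulf node"
fi
```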

 

Programs Location

/usr/local/pindel

 

Important Note

If you use the program frequently, it may be convenient to add the program directory to your path.

For a single session:

% setenv PATH /usr/local/pindel:$PATH (csh or tcsh)
$ PATH=/usr/local/pindel:$PATH; export PATH (bash)

To add this directory to your path at login time automatically, add the appropriate line above to the end of your ~/.cshrc or ~/.bashrc file.
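For bash users, a quick way to confirm the change took effect is to inspect the first PATH entry (the 'command -v' check will only find the pindel binary on a node where it is actually installed):

```shell
# Prepend the pindel directory to PATH (bash syntax).
export PATH=/usr/local/pindel:$PATH

# The pindel directory should now be the first PATH entry:
echo "$PATH" | cut -d: -f1

# 'command -v' reports where the pindel binary will be found,
# once you are on a node where it is installed.
command -v pindel || true
```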

 

Submitting a single batch job

1. Create a script file containing lines similar to those below. Modify the program path as needed before running.

#!/bin/bash
# This file is YourOwnFileName
#
#PBS -N yourownfilename
#PBS -m be
#PBS -k oe
export PATH=/usr/local/pindel:$PATH
cd /data/user/somewhereWithInputFile
pindel command 1
pindel command 2
....
....

2. Submit the script using the 'qsub' command on Biowulf. Note: users are advised to run benchmarks to determine what kind of node is suitable for their jobs.

[user@biowulf]$ qsub -l nodes=1 /data/username/theScriptFileAbove

Useful commands:

freen: see http://biowulf.nih.gov/user_guide.html#freen

qstat: search for 'qstat' on http://biowulf.nih.gov/user_guide.html for its usage.

jobload: search for 'jobload' on http://biowulf.nih.gov/user_guide.html for its usage.

 

Submitting a swarm of jobs

Using the 'swarm' utility, one can submit many jobs to the cluster to run concurrently.

Set up a swarm command file (eg /data/username/cmdfile). Here is a sample file:

export PATH=/usr/local/pindel:$PATH ; cd /data/user/run1/; pindel command 1; pindel command 2

export PATH=/usr/local/pindel:$PATH ; cd /data/user/run2/; pindel command 1; pindel command 2

........

........

export PATH=/usr/local/pindel:$PATH ; cd /data/user/run10/; pindel command 1; pindel command 2
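When the run directories follow a regular naming scheme, the command file can be generated with a short bash loop rather than written by hand (the /data/user/runN layout and the 'pindel command 1; pindel command 2' placeholders below mirror the sample file above and are not real commands):

```shell
# Generate one swarm command line per run directory.
# Start from an empty command file.
> cmdfile
for i in $(seq 1 10); do
    # Each line sets PATH, changes into its run directory, and runs
    # the (placeholder) pindel commands, exactly as in the sample file.
    echo "export PATH=/usr/local/pindel:\$PATH ; cd /data/user/run$i/ ; pindel command 1 ; pindel command 2" >> cmdfile
done

# One line per run directory:
wc -l cmdfile
```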

 

Swarm has one required flag, '-f', and two other flags that users will most likely need to specify when submitting a swarm job: '-t' and '-g'.

-f: the swarm command file name above (required)
-t: number of processors per node to use for each line of commands in the swarm file above (optional)
-g: GB of memory needed for each line of commands in the swarm file above (optional)

By default, each command line above will be executed on one processor core of a node and will use 1 GB of memory. If this is not what you want, specify the '-t' and '-g' flags when you submit the job on Biowulf.

For example, if each command line above needs 10 GB of memory instead of the default 1 GB, make sure swarm knows this by including the '-g 10' flag:

[user@biowulf]$ swarm -g 10 -f cmdfile

For more information regarding running swarm, see swarm.html

 

Running an interactive job

Users may sometimes need to run jobs interactively. Such jobs should not be run on the Biowulf login node. Instead, allocate an interactive node as described below and run the interactive job there.

[user@biowulf] $ qsub -I -l nodes=1
qsub: waiting for job 2236960.biobos to start
qsub: job 2236960.biobos ready

[user@p4]$ cd /data/user/myruns
[user@p4]$ export PATH=/usr/local/pindel:$PATH
[user@p4]$ cd /data/userID/pindel/run1
[user@p4]$ pindel command 1
[user@p4]$ pindel command 2
[user@p4]$ ...........
[user@p4]$ exit
qsub: job 2236960.biobos completed
[user@biowulf]$

Users may add a node property to the qsub command to request a specific type of interactive node. For example, if you need a node with 24 GB of memory to run a job interactively, do this:

[user@biowulf]$ qsub -I -l nodes=1:g24

 

Documentation

https://trac.nbic.nl/pindel/wiki