Biowulf at the NIH
NGS QC Toolkit on Biowulf

NGS QC Toolkit: a toolkit for quality control (QC) of next-generation sequencing (NGS) data. The toolkit comprises user-friendly stand-alone tools for quality control of sequence data generated on the Illumina and Roche 454 platforms, with detailed results in the form of tables and graphs, and for filtering of high-quality sequence data. It also includes a few other tools that are helpful in NGS data quality control and analysis.


The environment variable(s) need to be set properly first. The easiest way to do this is with the modules commands, as in the example below.

$ module avail ngsqctoolkit
------------------- /usr/local/Modules/3.2.9/modulefiles --------------------------------
ngsqctoolkit/2.3.2

$ module load ngsqctoolkit
$ module list
Currently Loaded Modulefiles:
  1) ngsqctoolkit/2.3.2

$ module unload ngsqctoolkit
$ module load ngsqctoolkit/2.3.2
$ module list
Currently Loaded Modulefiles:
  1) ngsqctoolkit/2.3.2

$ module show ngsqctoolkit
-------------------------------------------------------------------
/usr/local/Modules/3.2.9/modulefiles/ngsqctoolkit/2.3.2:

module-whatis    Sets up ngsqctoolkit 2.3.2
prepend-path     PATH /usr/local/apps/ngsqctoolkit/current/Format-converter
prepend-path     PATH /usr/local/apps/ngsqctoolkit/current/QC
prepend-path     PATH /usr/local/apps/ngsqctoolkit/current/Trimming
prepend-path     PATH /usr/local/apps/ngsqctoolkit/current/Statistics
-------------------------------------------------------------------


Submitting a single batch job

1. Create a script file along the following lines.

# This file is YourOwnFileName
#PBS -N yourownfilename
#PBS -m be
#PBS -k oe

module load ngsqctoolkit
<ngsqctoolkit command> <options>

2. Submit the script using the 'qsub' command on Biowulf.

$ qsub -l nodes=1 /data/$USER/theScriptFileAbove

This will send the job to the first available node. If you want to request a specific type of node, for example one with 8 gb of memory, do this instead:

$ qsub -l nodes=1:g8 /data/$USER/theScriptFileAbove
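The steps above can be sketched from the shell with a heredoc. This is a minimal sketch, not part of the toolkit: the script name qc_job.sh and the placeholder command line are illustrative, and a temporary directory stands in for /data/$USER so the snippet runs anywhere.

```shell
# Stand-in for /data/$USER; on Biowulf write the script there instead.
workdir=$(mktemp -d)

# Generate the batch script shown above (illustrative name: qc_job.sh).
cat > "$workdir/qc_job.sh" <<'EOF'
# This file is qc_job.sh
#PBS -N qc_job
#PBS -m be
#PBS -k oe

module load ngsqctoolkit
<ngsqctoolkit command> <options>
EOF

# On Biowulf, submit it with:
#   qsub -l nodes=1 /data/$USER/qc_job.sh
cat "$workdir/qc_job.sh"
```

Keeping the heredoc delimiter quoted ('EOF') prevents the shell from expanding anything inside the script body, so it is written verbatim.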


Useful commands:

freen: see which nodes and node types are currently free.

qstat: show the status of submitted jobs.

jobload: show the CPU and memory load of your running jobs.


Submitting a swarm of jobs

Using the 'swarm' utility, one can submit many jobs to the cluster to run concurrently.

1. Set up a swarm command file (eg /data/$USER/cmdfile). Here is a sample file:

cd dir1; <ngsqctoolkit command> <options> > output1
cd dir2; <ngsqctoolkit command> <options> > output2
cd dir3; <ngsqctoolkit command> <options> > output3
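For many input directories, the command file can be generated with a shell loop rather than written by hand. A minimal sketch, assuming directories dir1..dir3 each hold one input set; the placeholder command line is illustrative, and a temporary directory stands in for /data/$USER:

```shell
# Stand-in for /data/$USER; on Biowulf use your data directory instead.
workdir=$(mktemp -d)
cmdfile="$workdir/cmdfile"

# One command line per directory, each redirecting to its own output file.
for i in 1 2 3; do
    mkdir -p "$workdir/dir$i"
    echo "cd $workdir/dir$i; <ngsqctoolkit command> <options> > output$i" >> "$cmdfile"
done

cat "$cmdfile"
```

Each line of the resulting file becomes one independent swarm job, so the directories are processed concurrently.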

2. Submit the swarm file.

-f: the swarm command file name above (required)
-g: GB of memory needed by each command line in the swarm file above (optional)
--module: set up the environment variables needed by ngsqctoolkit

By default, each command line above is executed on 1 processor core of a node and may use up to 1 GB of memory. If this is not what you want, use the '-g' flag when you submit the job on Biowulf.

For example, if each command line needs 10 gb of memory instead of the default 1 gb, tell swarm by including the '-g 10' flag:

$ swarm -g 10 --module ngsqctoolkit -f cmdfile

For more information on running swarm, see swarm.html.


Running an interactive job

Users may sometimes need to run jobs interactively. Such jobs should not be run on the Biowulf login node. Instead, allocate an interactive node as described below and run the interactive job there.

[user@biowulf] $ qsub -I -l nodes=1
qsub: waiting for job 2236960.biobos to start
      qsub: job 2236960.biobos ready 
[user@p4]$ cd /data/$USER/myruns
[user@p4]$ module load ngsqctoolkit
[user@p4]$ <ngsqctoolkit command> <options>
[user@p4]$ exit
qsub: job 2236960.biobos completed

You may add node properties to the qsub command to request a specific type of interactive node. For example, if you need a node with 24 gb of memory and 16 cores to run the job interactively, do this:

[user@biowulf]$ qsub -I -l nodes=1:g24:c16