miRanda on Biowulf

miRanda is an algorithm for the detection of potential microRNA target sites in genomic sequences. It was developed at the Computational Biology Center of Memorial Sloan-Kettering Cancer Center.

 

The easiest way to set up your environment for miRanda is to use the module commands, as in the example below:

biowulf% module avail miranda

--------------- /usr/local/Modules/3.2.9/modulefiles --------------------
miranda/3.3a

biowulf% module load miranda

biowulf% module list
Currently Loaded Modulefiles:
  1) miranda/3.3a
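
Once the module is loaded, the miranda executable is on your path. miRanda takes two FASTA files, the first containing mature miRNA sequences and the second the target sequences (e.g. 3' UTRs) to be scanned. A minimal sketch of the command line, with hypothetical filenames, to be placed in a batch script or run on an interactive node as described below:

miranda my_mirnas.fasta my_utrs.fasta -strict -out my_results.txt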

Submitting a single batch job

1. Create a script file similar to the one below:

#!/bin/bash
# This file is YourOwnFileName
#
# job name
#PBS -N yourownfilename
# send mail when the job begins and ends
#PBS -m be
# keep the job's stdout and stderr files
#PBS -k oe

module load miranda

cd /data/user/somewhereWithInputFile
miranda file1 file2 -strict 

2. Submit the script using the 'qsub' command on Biowulf.

[user@biowulf]$ qsub -l nodes=1 /data/username/theScriptFileAbove

This command will submit your job to a node with at least 1 GB of memory and 2 cores. If your miranda run requires more than 1 GB of memory, you can request a node with more memory by adding a node property. For example,

[user@biowulf]$ qsub -l nodes=1:g8 /data/username/theScriptFileAbove

will submit the job to a node with 8 GB of memory.

Submitting a swarm of jobs

Using the 'swarm' utility, one can submit many jobs to the cluster to run concurrently.

Set up a swarm command file (eg /data/username/cmdfile). Here is a sample file:

miranda file1a file1b -out 1.out
miranda file2a file2b -out 2.out
miranda file3a file3b -out 3.out
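
If you have many input pairs, the command file can be generated with a short shell loop instead of being written by hand. The sketch below assumes hypothetical paired inputs named sample1_mirna.fasta/sample1_utr.fasta, sample2_mirna.fasta/sample2_utr.fasta, and so on, in a run directory:

cd /data/username/myMirandaRuns
# write one miranda command per input pair into the swarm command file
for m in *_mirna.fasta; do
    prefix=${m%_mirna.fasta}
    echo "miranda $m ${prefix}_utr.fasta -out ${prefix}.out"
done > /data/username/cmdfile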

Submit this swarm with:

swarm -f cmdfile --module miranda

By default, each command line in the file above is executed on a single processor core of a node and is allotted 1 GB of memory. If each miranda command needs more than 1 GB of memory, specify the required amount with the '-g #' flag to swarm, where # is the number of gigabytes of memory required for a single miranda command.

For example, if each command needs 10 GB of memory instead of the default 1 GB, include the '-g 10' flag:

[user@biowulf]$ swarm -g 10 -f cmdfile --module miranda

For more information regarding running swarm, see swarm.html

 

Running an interactive job

Users may occasionally need to run jobs interactively. Such jobs should not be run on the Biowulf login node. Instead, allocate an interactive node as described below and run the interactive job there.

[user@biowulf] $ qsub -I -l nodes=1
qsub: waiting for job 2236960.biobos to start
qsub: job 2236960.biobos ready

[user@p4]$ cd /data/user/myruns
[user@p4]$ module load miranda
[user@p4]$ cd /data/userID/miranda/run1
[user@p4]$ miranda file1 file2 -strict -out fileout
[user@p4]$ exit
qsub: job 2236960.biobos completed
[user@biowulf]$

Users may add a node property to the qsub command to request a specific type of interactive node. For example, if you need a node with 24 GB of memory to run a job interactively, use:

[user@biowulf]$ qsub -I -l nodes=1:g24

 

Documentation

http://cbio.mskcc.org/microrna_data/manual.html