Changes to swarm on Biowulf

Effective Tuesday, September 13, 2011, a number of significant changes will be made to the swarm command:

  • The syntax for using the swarm command will change
  • swarm will report memory usage for the job
  • By default, the swarm scripts generated for the batch system will be in bash shell syntax instead of csh

Given the heterogeneity of the Biowulf cluster, these changes are being made to (1) let users focus on the requirements of their programs instead of the hardware characteristics of the nodes, and (2) reduce the burden on users of having to calculate the correct number of processes and threads for each type of node.

    The previous version of swarm will continue to be available as oswarm for 60 days. After this time, only the new version will be available, so please start using it as soon as possible.

    Syntax Changes

    The "-l nodes=1" syntax will no longer be supported. This includes the specification of node properties such as "o2800", "e2666", "g24", and "g72". The "-n" switch for specifying the number of processes per node will also no longer be supported.

    In the simplest case, the swarm command will be:

    swarm --file <swarm control file>

    This command will create batch jobs which place one process (i.e., one line in the swarm command file) on each processor core of a node. As long as your programs are single-threaded and do not require more than 1 GB memory per program instance you can use the simplest case syntax.
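For reference, a swarm control file is simply a list of commands, one per line, each of which becomes one process. A minimal example (the program name and file names here are hypothetical placeholders):

```shell
# each line becomes one process, placed on one core
myprog -i input1.dat > output1.log
myprog -i input2.dat > output2.log
myprog -i input3.dat > output3.log
```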

    swarm will now accept three new switches:

    switch                  short form   description                  default value   special values
    --gb-per-process        -g           GB memory per process        1
                                         (single line in the swarm
                                         control file)
    --threads-per-process   -t           number of threads in a       1               auto
                                         multi-threaded program
    --resources             -R           application resources

    swarm will continue to support qsub switches.


    Single-threaded programs that require < 1GB memory
    If a single instance of your program (i.e. one line in the swarm command file) uses less than 1 GB of memory, and is single-threaded, you can use the simplest form of the new swarm command:

    swarm --file <swarm control file>
    You can also use the short form of the --file switch:
    swarm -f <swarm control file>

    This command will create batch jobs which place one process (i.e., one line in the swarm command file) on each processor core of a node.

    Single-threaded programs that require > 1 GB memory
    If a single instance of your program (i.e. one line in the swarm command file) is single-threaded and requires more than 1 GB of memory, you will need to use the '--gb-per-process' (-g) switch.

    e.g. if one instance of your program requires up to 3 GB of memory:

    swarm -g 3 -f <swarm control file> 

    This command will create batch jobs such that each process is guaranteed 3 GB of memory. Swarm will ensure that the batch jobs are submitted to appropriate nodes. You do not need to be concerned about the number of cores or the actual memory of each node.

    Multithreaded programs where you specify the number of threads
    An example of such a program is cufflinks. In this case, you should choose the number of threads for each instance, define it appropriately in the swarm command file (e.g. the "-p" option for cufflinks), and then use the '-t #' option when submitting the swarm job.

    In the example below, each instance of the program will consist of 4 threads and requires 4 GB per instance.

    swarm -t 4 -g 4 -f <swarm control file>
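A swarm control file for such a job might look like the following (file and directory names are hypothetical); note that the '-p 4' passed to cufflinks on each line matches the '-t 4' passed to swarm:

```shell
# each line is one process; -p 4 tells cufflinks to use 4 threads
cufflinks -p 4 -o out_sample1 sample1.bam
cufflinks -p 4 -o out_sample2 sample2.bam
```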

    Multi-threaded programs that 'auto-thread'
    Such programs check how many cores are on the node and will create as many threads as there are cores. An example is novoalign. You should use the '-t auto' option to swarm.

    In the example below, the programs auto-thread and require less than 1 GB of memory per instance.

    swarm -t auto -f <swarm control file> 

    Swarm jobs that require specific resources
    An example is a swarm of Matlab jobs that each requires a Matlab license resource. You should use the swarm '-R' option to specify the resource required. e.g.
    swarm -R matlab=1 -f <swarm control file> 

    Memory Usage Reporting

    The swarm .o output file will now report the largest amount of memory used by the job. This should make it easier for users to calculate the memory usage of their jobs. A typical swarm .o output file will now look like:
    ------------------------- PBS time and memory report ------------------------
    3875594.biobos elapsed time: 236 seconds
    3875594.biobos maximum memory: 2.15 GB

    In the example above, the swarm job used 2.15 GB of memory. When submitting future swarms like this one, you should specify '-g 3', and swarm will ensure that at least 3 GB of memory is available to the job.
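A rule of thumb is to round the reported maximum up to the next whole gigabyte. A minimal bash sketch of that calculation (the 2.15 GB figure is the one from the example report above):

```shell
# Round the reported maximum memory up to the next whole GB for the -g switch.
maxmem=2.15                 # value from the "maximum memory" line of the .o file
g=$(( ${maxmem%.*} + 1 ))   # drop the fractional part, then add 1
echo "use: swarm -g $g -f <swarm control file>"
```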

    Bash Shell Syntax

    swarm now uses bash syntax by default. This only affects users who have shell commands in their swarm command files, for example, setting an environment variable. All shell commands should be specified in bash syntax. e.g.
    export PATH=/data/user/myprogs:$PATH

    Users who have old swarm command files that contain csh syntax should add '--usecsh' to their swarm command.
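For comparison, here is the same environment-variable setting in the old csh form and the new bash form (the MYLIB variable and its path are hypothetical):

```shell
# Old csh syntax; command files written this way now need '--usecsh':
#   setenv MYLIB /data/user/libs
# Equivalent bash syntax, the new default:
export MYLIB=/data/user/libs
echo "$MYLIB"
```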

    Additional Documentation

    The complete swarm documentation is available on the Biowulf website. A swarm man page is also available.