Biowulf at the NIH
Gromacs on Biowulf

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions. Since GROMACS is also extremely fast at calculating the nonbonded interactions (which usually dominate simulations), many groups use it for research on non-biological systems as well, e.g. polymers.

GROMACS manual, downloadable in several formats.


The following versions of Gromacs are available on Biowulf. All Gromacs builds are in /usr/local/apps/gromacs.
Gromacs Version        Interconnect                          Module
4.6.5                  Ethernet
                       Infiniband (DDR or QDR)
4.6.1                  Infiniband (DDR)
                       Infiniband (QDR) - NIDDK/LCP only*
4.5.5                  Infiniband (DDR or QDR)
4.5.5 + Plumed 2.0.2   Infiniband (QDR)                      gromacs/4.5.5/plumed2.0.2-mpi-qdrib
4.5.5 + Plumed 1.3     Infiniband
4.5.3                  Infiniband
4.5.1                  Infiniband
* The QDR Infiniband nodes were funded by NIDDK/LCP, and therefore Gromacs can only be run on those nodes by NIDDK/LCP users.

Submitting a GROMACS 4.x job

For basic information about setting up GROMACS jobs, read the GROMACS documentation. A set of tutorials is also available on the GROMACS website.

Biowulf is a heterogeneous cluster, and different node types have different numbers of cores per node. See the chart in the user guide.

Sample script for a GROMACS 4.6.1 run on Infiniband:

# this file is Run_Gromacs
#PBS -N Gromacs
#PBS -k oe
#PBS -m be

# use the module for the Gromacs version and network that you want
#   see table at the top of this page
module load gromacs/4.6.1/ib

cd /data/user/my_gromacs_dir

grompp > outfile 2>&1
`which mpirun` -machinefile $PBS_NODEFILE -n $np `which mdrun_mpi` >> outfile 2>&1

The script can be submitted with the qsub command. The number of processes (np) should be chosen to match the total number of cores across the allocated nodes. The number of cores on each type of node is listed in the second column of the 'freen' command output.
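As a sketch, the process count can be derived from the node count and the cores per node before submitting (the values below are illustrative; take the real cores-per-node figure for your node type from 'freen'):

```shell
# Illustrative: compute np = nodes x cores-per-node for the qsub command.
nodes=4
cores_per_node=8            # second column of 'freen' for your node type
np=$((nodes * cores_per_node))
echo "qsub -v np=$np -l nodes=${nodes}:ib Run_Gromacs"
```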

Submitting to QDR IB nodes (16 cores, 32 hyperthreaded cores per node). Note that these nodes were funded by NIDDK/LCP, and thus Gromacs can only be run by NIDDK/LCP users on these nodes.
NIDDK/LCP users: qsub -v np=32 -l nodes=2 -q lcp Run_Gromacs

Submitting to DDR IB nodes (8 cores per node):
qsub -v np=32 -l nodes=4:ib Run_Gromacs

Submitting to Ipath nodes (2 cores per node):
qsub -v np=16 -l nodes=8:ipath Run_Gromacs 

Submitting to x2800 nodes (12 cores, 24 hyperthreaded cores per node):
qsub -v np=24 -l nodes=2 Run_Gromacs

Submitting to dual-core gige nodes (4 cores per node):
qsub -v np=8 -l nodes=2:dc Run_Gromacs

Gromacs 4.5.5 + Plumed

Gromacs 4.5.5 has also been built with Plumed, a plugin for free-energy calculations in molecular systems (both Plumed 1.3 and Plumed 2.0.2 builds are available; see the table at the top of this page). See the Plumed website for details. Free-energy calculations can be performed as a function of many order parameters, with a particular focus on biological problems, using state-of-the-art methods such as metadynamics, umbrella sampling, and Jarzynski-equation-based steered MD.
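For orientation, a PLUMED 2 input file for a simple metadynamics run looks roughly like the sketch below. The atom indices and bias parameters are placeholders, not part of this page; the Plumed 1.3 build uses an older, different input format.

```
# plumed.dat -- illustrative PLUMED 2 metadynamics input (placeholder values)
d: DISTANCE ATOMS=1,10                         # collective variable: distance between two atoms
METAD ARG=d SIGMA=0.05 HEIGHT=1.2 PACE=500     # deposit Gaussian hills along d
PRINT ARG=d STRIDE=100 FILE=COLVAR             # log the collective variable
```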

Sample batch script:


#PBS -N myjob
#PBS -m be

module load gromacs/4.5.5/plumed2.0.2-mpi-ib
`which mpirun` -machinefile $PBS_NODEFILE -n $np `which mdrun` >> outfile 2>&1

Submit with:
qsub -v np=32 -l nodes=4:ib myscript

To run on Gige or Ipath, use the appropriate module as in the chart on the top of this page. Check the number of cores per node for the node type you plan to use, and set np in the qsub command appropriately.

Gromacs on GPUs

See the Gromacs on GPU page.

Walltime limits and Chaining Jobs

In Dec 2014, the Biowulf cluster will implement walltime limits for IB jobs. (Type 'batchlim' to see the current walltime limit). Since there are a limited number of IB nodes and heavy demand, this change is intended to increase turnover of jobs on those nodes. Thus, jobs should be designed to run for a week or so, save a checkpoint file, and submit a new job starting from that checkpoint.

A reasonable strategy would be to set up a job to run for a week or less by setting the number of steps appropriately, and then, at the end of the job, have it resubmit itself to continue the simulation. Below is a sample batch script:

#PBS -j oe
# this file is called Run.ib

module load use.own
module load gromacs/4.6.1/ib

cd /path/to/my/dir
# generate topol.tpr on the first run only; resubmitted jobs reuse the
# extended topol.tpr created by tpbconv below
if [ ! -f topol.tpr ]; then
    grompp -f grompp.mdp -c conf.gro -p -o topol.tpr
fi
`which mpirun` -machinefile $PBS_NODEFILE -n $np `which mdrun_mpi` -s topol.tpr -cpi state.cpt -append

# use tpbconv to create a new topol.tpr file with an increased number of steps
tpbconv -s topol.tpr -extend 500 -o topol2.tpr

#move the newly created topol.tpr into place
mv topol.tpr topol.tpr.prev; mv topol2.tpr topol.tpr

#resubmit this script
qsub -v np=$np,nodes=$nodes -l nodes=$nodes:ib Run.ib

This script would be submitted with:

biowulf% qsub -v np=16,nodes=2 -l nodes=2:ib Run.ib
to run a series of jobs on 2 IB nodes.

More information at Extending Simulations on the Gromacs site.

If a Gromacs job is terminated unexpectedly (for example, the walltime limit was hit before the mdrun completed), it is simple to restart. The state.cpt file contains all the information necessary to continue the simulation. Use the '-cpi' and '-append' options to mdrun, which will append to existing energy, trajectory and log files. For example:

`which mpirun` -machinefile $PBS_NODEFILE -n $np `which mdrun_mpi` -s topol.tpr -cpi state.cpt -append
More information at Doing Restarts on the Gromacs site.

Replica Exchange with Gromacs 4.0

Details about running replica exchange with Gromacs are on the Gromacs website. Multiple tpr files need to be generated from multiple *.mdp files with different temperatures. Below is a sample script for generating the tpr files. (courtesy Jeetain Mittal, NIDDK)

#!/bin/csh -f 

set ff = $argv[1]
set s = $argv[2]
set proot = 2f4k
set i = 0

while ( $i < 40 ) 

set fileroot = "${proot}_${ff}"
set this = "trexr"

if ( $s == 1 ) then 
    set mdp = "mdp/trex_ini${i}.mdp"
    set gro = "unfolded.gro" 
else
    set sprev = $s
    @ sprev--
    set mdp = "mdp/trex_cont${i}.mdp"
    set gro = "data/gro/${fileroot}_${this}_s${sprev}_nd${i}.gro"
endif

# 40 replicas
grompp -v -f $mdp -c $gro \
    -o data/tpr/${fileroot}_${this}_nd${i}.tpr \
    -p ${fileroot}

@ i++
end
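The per-replica *.mdp files used above must each specify a different temperature. A minimal sketch of generating them from a template is shown below; the template contents, file names, and the geometric 300 K temperature ladder are assumptions for illustration, not part of this page.

```shell
# Illustrative: write 40 .mdp files with a geometric temperature ladder.
mkdir -p mdp
printf 'ref_t = 300\nnsteps = 500000\n' > template.mdp   # stand-in template
for i in $(seq 0 39); do
    # temperature for replica i: 300 K * 1.01^i
    t=$(awk -v i=$i 'BEGIN { printf "%.2f", 300 * 1.01^i }')
    sed "s/^ref_t.*/ref_t = $t/" template.mdp > mdp/trex_ini${i}.mdp
done
```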

Gromacs 4.0 can run with each replica on multiple processors. It is most efficient to run each replica on a dual-core node using all the processors on that node. This requires creating a specialized list of processors with the command make-gromacs-nodefile-dc (which is in /usr/local/bin) as in the sample script below.

# this file is Run_Gromacs_RE
#PBS -N Gromacs_RE
#PBS -k oe
#PBS -m be

# set up PATH for gige or ib nodes
export PATH=/usr/local/openmpi/bin:/usr/local/gromacs/bin:$PATH

cd /data/user/my_gromacs_dir

#create the specialized list of processors for RE
make-gromacs-nodefile-dc

/usr/local/openmpi/bin/mpirun -machinefile ~/gromacs_nodefile.$PBS_JOBID \
      -np $np /usr/local/gromacs/bin/mdrun \
      -multi $n -replex 2000 >> outfile 2>&1

Submit this script to the dual-core nodes with:

qsub -v np=128,n=32 -l nodes=32:dc Run_Gromacs_RE

The above command will submit the job to 32 dual-core (either o2800 or o2600) nodes. Each of the 32 replicas will run on all 4 processors of one node. The number of processors (np=128) and the number of replicas (n=32) are passed to the program via the -v flag in qsub.

Optimizing your Gromacs job

It is critical to determine the appropriate number of nodes on which to run your job. As shown in the benchmarks below, different jobs scale differently. Thus, one job which scales very well could be submitted on up to 10 nodes, while another job may scale only up to 2 nodes. For some jobs, if you submit to more nodes than is optimal, your job will actually run slower.
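One way to find the sweet spot is to submit the same short benchmark at several node counts and compare the ns/day each run reports in its log. The loop below only prints the qsub commands it would issue; 8 cores per node is an assumption for DDR IB nodes.

```shell
# Illustrative scaling test: print a qsub command for each node count.
# (IB jobs require a minimum of 2 nodes.)
for nodes in 2 4 8; do
    np=$((nodes * 8))        # assuming 8 cores per DDR IB node
    echo "qsub -v np=$np -l nodes=${nodes}:ib -N bench${nodes} Run_Gromacs"
done
```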

To determine the optimal number of nodes, run short test jobs of your own system at several node counts and compare the reported performance (ns/day) before committing to long production runs.

Monitoring your jobs



[Figure: the DPPC membrane system from the Gromacs benchmark suite; detailed results are available.]

gromacs-4.6.1 benchmarks

[Benchmarks for other versions]


Gromacs Online Manual

Gromacs FAQs

All Gromacs Documentation

Getting Started

This section is for users who may or may not be familiar with Gromacs, but would find it helpful to go through the process of running a simple Gromacs job on Biowulf. The example below runs a Gromacs MPI job on Infiniband.

There is a test set of data (part of the Gromacs benchmark suite) in /usr/local/apps/gromacs/d.dppc. The screen trace below shows how an interactive node is allocated, the test data copied, and 'grompp' and 'mdrun' run. The job is later rerun using a batch script.

[susanc@biowulf ~]$ qsub -I -l nodes=1:ib
qsub: waiting for job 3810080.biobos to start
qsub: job 3810080.biobos ready
[susanc@p2133 ~]$ mkdir gromacs_example
[susanc@p2133 ~]$ cd gromacs_example
[susanc@p2133 gromacs_example]$ cp /usr/local/apps/gromacs/d.dppc/* .
[susanc@p2133 gromacs_example]$ ls
conf.gro  grompp.mdp
[susanc@p2133 gromacs_example]$ module load gromacs/4.6.1/ib
[susanc@p2133 gromacs_example]$ grompp
                         :-)  G  R  O  M  A  C  S  (-:

                 Good ROcking Metal Altar for Chronical Sinners

                            :-)  VERSION 4.6.1  (-:

        Contributions from Mark Abraham, Emile Apol, Rossen Apostolov, 
[ . . . etc . . .]
Largest charge group radii for Van der Waals: 0.190, 0.190 nm
Largest charge group radii for Coulomb:       0.190, 0.190 nm
This run will generate roughly 9 Mb of data

There were 2 notes

gcq#262: "Why Do *You* Use Constraints ?" (H.J.C. Berendsen)

[susanc@p2133 gromacs_example]$ `which mpirun` -machinefile $PBS_NODEFILE -n 8 `which mdrun_mpi`
                         :-)  G  R  O  M  A  C  S  (-:

                Guyana Rwanda Oman Macau Angola Cameroon Senegal

                            :-)  VERSION 4.6.1  (-:
[ . . . etc . . ]
No GPUs detected on host p2133

starting mdrun 'DPPC in Water'
5000 steps,     10.0 ps.
[ . . . etc. . .]
Writing final coordinates.

 Average load imbalance: 1.6 %
 Part of the total run time spent waiting due to load imbalance: 0.6 %

               Core t (s)   Wall t (s)        (%)
       Time:     1479.470      185.015      799.6
                 (ns/day)    (hour/ns)
Performance:        4.671        5.138

gcq#292: "Youth is wasted on the young" (The Smashing Pumpkins)

[susanc@p2133 gromacs_example]$ exit
qsub: job 3810080.biobos completed
[susanc@biowulf gromacs_example]$ 
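As a sanity check on the performance figures above: the run simulated 10.0 ps (0.010 ns) in 185.015 s of wall time, which converts to ns/day as sketched below.

```shell
# (0.010 ns / 185.015 s) * 86400 s/day -- matches the ~4.67 ns/day reported.
awk 'BEGIN { printf "%.2f ns/day\n", (0.010 / 185.015) * 86400 }'
```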

The exact same thing can be accomplished via the following batch script:

#PBS -N gromacs
# this file is called gromacs_example.bat

mkdir gromacs_example
cd gromacs_example
cp -r /usr/local/apps/gromacs/d.dppc/* .

module load gromacs/4.6.1/ib
grompp > outfile 2>&1
`which mpirun` -machinefile $PBS_NODEFILE -n $np `which mdrun_mpi` >> outfile 2>&1

which would be submitted with:

qsub -v np=16  -l nodes=2:ib gromacs_example.bat
(note that there is a minimum of 2 nodes for an IB job, so this job uses 16 cores on 2 IB nodes).