Biowulf at the NIH
Gromacs on Biowulf

GROMACS (www.gromacs.org) is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles. It is primarily designed for biochemical molecules like proteins and lipids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers.

GROMACS manual, downloadable in several formats.

Versions

The following versions of Gromacs are available on Biowulf. All Gromacs builds are in /usr/local/apps/gromacs.
Gromacs Version        Interconnect                          Module
4.6.5                  Ethernet                              gromacs/4.6.1/eth
                       Infiniband (DDR or QDR)               gromacs/4.6.5/ib
4.6.1                  Infiniband (DDR)                      gromacs/4.6.1/ib
                       Infinipath                            gromacs/4.6.1/ipath
                       Ethernet                              gromacs/4.6.1/eth
                       Infiniband (QDR) - NIDDK/LCP only*    gromacs/4.6.1/qdr-ib
4.5.5                  Infiniband (DDR or QDR)               gromacs/4.5.5-ib
                       Infinipath                            gromacs-4.5.5-ipath
                       Ethernet                              gromacs-4.5.5-eth
4.5.5 + Plumed 2.0.2   QDR Infiniband                        gromacs/4.5.5/plumed2.0.2-mpi-qdrib
4.5.5 + Plumed 1.3     Infiniband                            gromacs/4.5.5+plumed-ib
                       Infinipath                            gromacs/4.5.5+plumed-ipath
                       Ethernet                              gromacs-4.5.5+plumed-eth
4.5.3                  Infiniband                            gromacs/4.5.3-ib
                       Infinipath                            gromacs/4.5.3-ipath
                       Ethernet                              gromacs/4.5.3-eth
4.5.1                  Infiniband                            gromacs/4.5.1-ib
                       Infinipath                            gromacs/4.5.1-ipath
                       Ethernet                              gromacs/4.5.1-eth
* The QDR Infiniband nodes were funded by NIDDK/LCP, and therefore Gromacs can only be run on those nodes by NIDDK/LCP users.
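
To see which Gromacs builds are installed and load one, use the standard environment-modules commands. A minimal sketch (pick the module that matches your interconnect from the table above):

module avail gromacs           # list the available Gromacs builds
module load gromacs/4.6.1/ib   # for example, the DDR Infiniband build
module list                    # confirm what is loaded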

Submitting a GROMACS 4.x job

For basic information about setting up GROMACS jobs, read the GROMACS documentation. There is a set of tutorials at http://www.gromacs.org/Documentation/Tutorials.

Biowulf is a heterogeneous cluster, and different node types have different numbers of cores per node. See the chart in the user guide.

Sample script for a GROMACS 4.6.1 run on Infiniband:

#!/bin/bash
# this file is Run_Gromacs
#PBS -N Gromacs
#PBS -k oe
#PBS -m be

# use the module for the Gromacs version and network that you want
#   see table at the top of this page
module load gromacs/4.6.1/ib

cd /data/user/my_gromacs_dir

# preprocess the inputs, then run mdrun_mpi; $np is passed in via 'qsub -v np=...'
grompp > outfile 2>&1
`which mpirun` -machinefile $PBS_NODEFILE -n $np `which mdrun_mpi` >> outfile 2>&1

The script can be submitted with the qsub command. The number of processes (np) should be chosen to match the total number of cores on the allocated nodes, i.e. the number of nodes multiplied by the cores per node. The number of cores on each type of node is listed in the second column of the 'freen' output; the sketch after the examples below shows the arithmetic.

Submitting to QDR IB nodes (16 cores, 32 hyperthreaded cores per node). Note that these nodes were funded by NIDDK/LCP, and thus Gromacs can only be run on them by NIDDK/LCP users:
NIDDK/LCP users: qsub -v np=32 -l nodes=2 -q lcp Run_Gromacs

Submitting to DDR IB nodes (8 cores per node):
qsub -v np=32 -l nodes=4:ib Run_Gromacs

Submitting to Ipath nodes (2 cores per node):
qsub -v np=16 -l nodes=8:ipath Run_Gromacs 

Submitting to x2800 nodes (12 cores, 24 hyperthreaded cores per node):
qsub -v np=24 -l nodes=2 Run_Gromacs

Submitting to dual-core gige nodes (4 cores per node):
qsub -v np=8 -l nodes=2:dc Run_Gromacs
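
In general np should equal the number of nodes multiplied by the cores per node for that node type. A minimal bash sketch of this arithmetic (the node type and core count below are examples; check 'freen' for the real values):

#!/bin/bash
# example only: 4 DDR IB nodes with 8 cores each (verify the core count with 'freen')
nodes=4
cores_per_node=8
np=$(( nodes * cores_per_node ))
qsub -v np=$np -l nodes=${nodes}:ib Run_Gromacs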

Gromacs 4.5.5 + Plumed

Gromacs 4.5.5 has also been built with Plumed (versions 1.3 and 2.0.2), a plugin for free-energy calculations in molecular systems; see the Plumed website for details. Free-energy calculations can be performed as a function of many order parameters, with a particular focus on biological problems, using state-of-the-art methods such as metadynamics, umbrella sampling, and Jarzynski-equation-based steered MD.

Sample batch script:

#!/bin/bash

#PBS -N myjob
#PBS -m be

module load gromacs/4.5.5/plumed2.0.2-mpi-ib

cd /data/user/my_gromacs_dir

# $np is passed in via 'qsub -v np=...'
`which mpirun` -machinefile $PBS_NODEFILE -n $np `which mdrun` >> outfile 2>&1

Submit with:
qsub -v np=32 -l nodes=4:ib myscript

To run on Gige or Ipath nodes, use the appropriate module from the chart at the top of this page. Check the number of cores per node for the node type you plan to use, and set np in the qsub command accordingly.
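
For example, a submission to the Infinipath nodes (2 cores per node, as noted above) might look like the following; the module name comes from the version table and everything else mirrors the Infiniband example:

# inside the batch script, replace the module line with:
module load gromacs/4.5.5+plumed-ipath

# then submit with 8 nodes x 2 cores = 16 processes:
qsub -v np=16 -l nodes=8:ipath myscript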

Gromacs on GPUs

See the Gromacs on GPU page.

Replica Exchange with Gromacs 4.0.x

Details about running replica exchange with Gromacs are on the Gromacs website. Multiple tpr files need to be generated from multiple *.mdp files with different temperatures. Below is a sample script for generating the tpr files (courtesy of Jeetain Mittal, NIDDK); a sketch for generating the per-replica mdp files themselves follows the script.

#!/bin/csh -f
# Generate one tpr file per replica (40 replicas) for replica exchange.
# Arguments: force field name (e.g. amber03d) and segment number.

set ff = $argv[1]
set s  = $argv[2]
set proot = 2f4k
set i = 0

while ( $i < 40 )
    set fileroot = "${proot}_${ff}"
    set this = "trexr"

    if ( $s == 1 ) then
        # first segment: every replica starts from the unfolded structure
        set mdp = "mdp/trex_ini${i}.mdp"
        set gro = "unfolded.gro"
    else
        # continuation: restart from the final coordinates of the previous segment
        set sprev = $s
        @ sprev--
        set mdp = "mdp/trex_cont${i}.mdp"
        set gro = "data/gro/${fileroot}_${this}_s${sprev}_nd${i}.gro"
    endif

    # one tpr per replica/temperature
    grompp -v -f $mdp -c $gro \
        -o data/tpr/${fileroot}_${this}_nd${i}.tpr \
        -p ${fileroot}_ions.top

    @ i++
end
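
The per-replica mdp files referenced above (mdp/trex_ini0.mdp through mdp/trex_ini39.mdp) must already exist. A hypothetical bash sketch for creating them from a template, assuming a file trex_template.mdp that contains a REF_TEMP placeholder for the ref_t value; the file names match the script above, but the template name, temperature range, and geometric spacing are only illustrative:

#!/bin/bash
# hypothetical helper: write 40 mdp files with geometrically spaced temperatures
tmin=300     # lowest replica temperature (K), example value
tmax=450     # highest replica temperature (K), example value
nrep=40

for i in $(seq 0 $(( nrep - 1 ))); do
    # geometric spacing: T_i = tmin * (tmax/tmin)^(i/(nrep-1))
    t=$(awk -v a=$tmin -v b=$tmax -v i=$i -v n=$nrep \
        'BEGIN { printf "%.2f", a * (b/a)^(i/(n-1)) }')
    sed "s/REF_TEMP/$t/" trex_template.mdp > mdp/trex_ini${i}.mdp
done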

Gromacs 4.0 can run each replica on multiple processors. It is most efficient to run each replica on a dual-core node, using all the processors on that node. This requires creating a specialized list of processors with the make-gromacs-nodefile-dc command (in /usr/local/bin), as in the sample script below.

#!/bin/bash
# this file is Run_Gromacs_RE
#PBS -N Gromacs_RE
#PBS -k oe
#PBS -m be

# set up PATH for gige or ib nodes
export PATH=/usr/local/openmpi/bin:/usr/local/gromacs/bin:$PATH

cd /data/user/my_gromacs_dir

#create the specialized list of processors for RE
make-gromacs-nodefile-dc

/usr/local/openmpi/bin/mpirun -machinefile ~/gromacs_nodefile.$PBS_JOBID \
      -np $np /usr/local/gromacs/bin/mdrun \
      -multi $n -replex 2000 >> outfile 2>&1

Submit this script to the dual-core nodes with:

qsub -v np=128,n=32 -l nodes=32:dc Run_Gromacs_RE

The above command will submit the job to 32 dual-core (either o2800 or o2600) nodes. Each of the 32 replicas will run on all 4 processors of one node. The number of processes (np=128) and the number of replicas (n=32, equal here to the number of nodes) are passed to the script via the -v flag of qsub.

Optimizing your Gromacs job

It is critical to determine the appropriate number of nodes on which to run your job. As shown in the benchmarks below, different jobs scale differently: one job may scale well up to 10 nodes, while another may scale only to 2 nodes. For some jobs, submitting to more nodes than is optimal will actually make the job run slower.

To determine the optimal number of nodes:
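
A simple approach (a sketch, not an official procedure) is to submit a short version of your simulation at several node counts and compare the ns/day reported by mdrun, as in the example at the bottom of this page:

# example only: time a short run on 2, 4, and 8 DDR IB nodes (8 cores per node)
qsub -v np=16 -l nodes=2:ib Run_Gromacs
qsub -v np=32 -l nodes=4:ib Run_Gromacs
qsub -v np=64 -l nodes=8:ib Run_Gromacs
# when the jobs finish, compare the 'Performance: ... (ns/day)' lines in the output
# files and pick the smallest node count beyond which the improvement is marginal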

Monitoring your jobs
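
A minimal sketch using standard PBS commands (Biowulf-specific monitoring tools are not covered here):

qstat -u $USER        # list your queued and running jobs
qstat -f <jobid>      # show full details, including the nodes assigned to one job
tail -f /data/user/my_gromacs_dir/outfile    # watch the Gromacs output as it is written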

Benchmarks

Summary:

The DPPC membrane system from the Gromacs benchmark suite. Detailed results

gromacs-4.6.1 benchmarks

[Benchmarks for other versions]

Documentation

Gromacs Online Manual

Gromacs FAQs

All Gromacs Documentation

Getting Started

This section is for users who may or may not be familiar with Gromacs, but would find it helpful to go through the process of running a simple Gromacs job on Biowulf. The example below runs a Gromacs MPI job on Infiniband.

There is a test set of data (part of the Gromacs benchmark suite) in /usr/local/apps/gromacs/d.dppc. The screen trace below shows how an interactive node is allocated, the test data is copied, and 'grompp' and 'mdrun' are run; user input follows the shell prompts. The job is then rerun using a batch script.

[susanc@biowulf ~]$ qsub -I -l nodes=1:ib
qsub: waiting for job 3810080.biobos to start
qsub: job 3810080.biobos ready
[susanc@p2133 ~]$ mkdir gromacs_example
[susanc@p2133 ~]$ cd gromacs_example
[susanc@p2133 gromacs_example]$ cp /usr/local/apps/gromacs/d.dppc/* .
[susanc@p2133 gromacs_example]$ ls
conf.gro  grompp.mdp  topol.top
[susanc@p2133 gromacs_example]$ module load gromacs/4.6.1/ib
[susanc@p2133 gromacs_example]$ grompp
                         :-)  G  R  O  M  A  C  S  (-:

                 Good ROcking Metal Altar for Chronical Sinners

                            :-)  VERSION 4.6.1  (-:

        Contributions from Mark Abraham, Emile Apol, Rossen Apostolov, 
[ . . . etc . . .]
Largest charge group radii for Van der Waals: 0.190, 0.190 nm
Largest charge group radii for Coulomb:       0.190, 0.190 nm
This run will generate roughly 9 Mb of data

There were 2 notes

gcq#262: "Why Do *You* Use Constraints ?" (H.J.C. Berendsen)

[susanc@p2133 gromacs_example]$ `which mpirun` -machinefile $PBS_NODEFILE -n 8 `which mdrun_mpi`
                         :-)  G  R  O  M  A  C  S  (-:

                Guyana Rwanda Oman Macau Angola Cameroon Senegal

                            :-)  VERSION 4.6.1  (-:
[ . . . etc . . ]
No GPUs detected on host p2133

starting mdrun 'DPPC in Water'
5000 steps,     10.0 ps.
[ . . . etc. . .]
Writing final coordinates.

 Average load imbalance: 1.6 %
 Part of the total run time spent waiting due to load imbalance: 0.6 %


               Core t (s)   Wall t (s)        (%)
       Time:     1479.470      185.015      799.6
                 (ns/day)    (hour/ns)
Performance:        4.671        5.138

gcq#292: "Youth is wasted on the young" (The Smashing Pumpkins)

[susanc@p2133 gromacs_example]$ exit
qsub: job 3810080.biobos completed
[susanc@biowulf gromacs_example]$ 

The exact same thing can be accomplished via the following batch script:

#!/bin/bash
#PBS -N gromacs
#
# this file is called gromacs_example.bat

mkdir gromacs_example
cd gromacs_example
cp -r /usr/local/apps/gromacs/d.dppc/* .

module load gromacs/4.6.1/ib
grompp > outfile 2>&1
 `which mpirun` -machinefile $PBS_NODEFILE -n $np `which mdrun_mpi` >> outfile 2>&1

which would be submitted with:

qsub -v np=16  -l nodes=2:ib gromacs_example.bat
(note that there is a minimum of 2 nodes for an IB job, so this job uses 16 cores on 2 IB nodes).
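
After the batch job finishes, the working directory will typically contain the standard GROMACS 4.x output files (default names; the actual names depend on any flags passed to grompp and mdrun):

ls ~/gromacs_example
# conf.gro grompp.mdp topol.top          - the original inputs
# mdout.mdp topol.tpr                    - written by grompp
# confout.gro ener.edr md.log traj.trr   - coordinates, energies, log, and trajectory from mdrun
# outfile                                - stdout/stderr captured by the script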