GROMACS

GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.

It is primarily designed for biochemical molecules like proteins, lipids, and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (which usually dominate simulations), many groups are also using it for research on non-biological systems, e.g. polymers.

Usage

Add module: module load gromacs/5.1.2

Usage: srun $(which gmx_mpi) <GROMACS program> <options>

Benchmarks on the im1080 partition indicate that mdrun performs best with 4 OpenMP threads per MPI process. Modify the mdrun command as follows:

mdrun usage:
export OMP_NUM_THREADS=4
srun --ntasks-per-node=6 $(which gmx_mpi) mdrun <mdrun options> -ntomp 4

See http://manual.gromacs.org/programs/byname.html for a list of GROMACS programs and how to run them.

Absolute path to gromacs (gmx_mpi) executable: /share/Apps/gromacs/5.1.2/intel-16.0.3-mvapich2-2.1/bin/gmx_mpi
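
For reference, a minimal Slurm submit script for an mdrun job might look like the following. The partition, node and task counts, wall time, and the -deffnm file prefix are placeholders to adapt to your own job; the 6 tasks x 4 threads layout assumes 24-core nodes and follows the benchmark recommendation above.

#!/bin/bash
#SBATCH --partition=im1080          # example partition; adjust to your allocation
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=6         # 6 MPI ranks per node
#SBATCH --cpus-per-task=4           # 4 OpenMP threads per rank, per the benchmarks above
#SBATCH --time=01:00:00             # example wall time

module load gromacs/5.1.2

export OMP_NUM_THREADS=4
# -deffnm md_run is a placeholder; substitute your own mdrun options
srun --ntasks-per-node=6 $(which gmx_mpi) mdrun -deffnm md_run -ntomp 4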

Benchmarks

Which version of GROMACS should I use?

LAMMPS

LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state. It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions.

Usage

Add module: module load lammps/14may16
Dependent modules loaded: fftw, hdf5, mvapich2 and intel

Usage: srun $(which lammps) -in <input file> -log <output file> -sc none -sf gpu -pk gpu <# of gpus>

Absolute path to lammps executable: /share/Apps/lammps/14may16/bin_gpu/lmp_mv2_gpu
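
As a sketch, a GPU submit script built from the usage line above might look like this. The partition, rank count, GPU count, and the input/log file names (in.lj, lammps.out) are placeholders, and the --gres=gpu syntax is an assumption about how GPUs are requested on this cluster.

#!/bin/bash
#SBATCH --partition=im1080          # example partition; adjust as needed
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2         # example MPI rank count
#SBATCH --gres=gpu:2                # request 2 GPUs (assumed gres syntax)
#SBATCH --time=01:00:00             # example wall time

module load lammps/14may16          # also loads fftw, hdf5, mvapich2 and intel

# in.lj and lammps.out are placeholder file names; -pk gpu 2 matches the 2 GPUs requested
srun $(which lammps) -in in.lj -log lammps.out -sc none -sf gpu -pk gpu 2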

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of cores for typical simulations and beyond 500,000 cores for the largest simulations.

Usage

Add module: module load namd/2.11
Dependent modules loaded: fftw, mvapich2 and intel

Usage: srun $(which namd2) <input file> > <output file>
Alternate Usage: mpiexec -n $SLURM_NTASKS -f $SLURM_NODEFILE $(which namd2) <input file> > <output file>
Alternate Usage: mpirun_rsh -export -n $SLURM_NTASKS -hostfile $SLURM_NODEFILE $(which namd2) <input file> > <output file>
Alternate Usage: $(which charmrun) +p$SLURM_NTASKS $(which namd2) <input file> > <output file>

Absolute path to namd executable: /share/Apps/namd/2.11/bin/namd2
Absolute path to charmrun executable: /share/Apps/namd/2.11/bin/charmrun

To define the SLURM_NODEFILE variable used in the alternate usages above, add the following to your submit script:
export SLURM_NODEFILE=$(get_slurm_nodelist)
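
Putting the pieces together, a minimal NAMD submit script using the srun form could look like the following. The partition, node and task counts, wall time, and the apoa1.namd / apoa1.log file names are placeholders to replace with your own.

#!/bin/bash
#SBATCH --partition=im1080          # example partition; adjust as needed
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=12        # example rank count
#SBATCH --time=04:00:00             # example wall time

module load namd/2.11               # also loads fftw, mvapich2 and intel

# Needed only for the mpiexec / mpirun_rsh alternate usages above
export SLURM_NODEFILE=$(get_slurm_nodelist)

# apoa1.namd and apoa1.log are placeholder file names
srun $(which namd2) apoa1.namd > apoa1.log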


See Using Local Scratch for MD simulation for information on which file system to use with MD packages.
