Contributed by Yeol Kyo Choi, Seonghoon Kim, Shasha Feng
Date: 11/2018
...
- sbatch run_namd_scratch.sh: you can uncomment this line in the script so that the next job is submitted automatically once the current run finishes (see the sketch after this list).
- For the `run_openmm_scratch` script, see [[using_local_scratch_for_md_simulation&#appendix_iopenmm|Appendix I]].
- For the `run_gromacs_scratch` script, see [[using_local_scratch_for_md_simulation&#appendix_iigromacs|Appendix II]].
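The self-resubmission works by leaving a commented-out sbatch line near the end of run_namd_scratch.sh. The tail below is only a sketch of that pattern, assuming the copy-back/exit structure used by the scripts later on this page; the rest of the script is not reproduced here.
Code Block:
# copy the results of this run back to the submission directory
cp ${LOCAL}/* $SLURM_SUBMIT_DIR
cd $SLURM_SUBMIT_DIR
# uncomment the next line to submit the follow-up job automatically
# sbatch run_namd_scratch.sh
exit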
There are two schemes for using the scratch folder, depending on whether or not all input files are copied to the scratch folder first. OpenMM and NAMD use scheme 1 (the inputs stay in the submission directory and only the outputs are written to scratch); GROMACS uses scheme 2 (the inputs are copied to scratch as well).
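Scheme 1 is illustrated in full by the CHARMM/NAMD and OpenMM scripts below. For scheme 2, the overall pattern is sketched here with placeholder file names; the actual GROMACS commands are in Appendix II.
Code Block:
#!/bin/csh
# Scheme 2 sketch (hypothetical file names): stage the inputs on node-local scratch,
# run there, then copy everything back to the submission directory.
set LOCAL = /scratch/${USER}/${SLURM_JOBID}
cp ${SLURM_SUBMIT_DIR}/topol.tpr ${LOCAL}   # copy the input file(s) to scratch
cd ${LOCAL}
# ... run the MD engine here, reading from and writing to ${LOCAL} ...
cp ${LOCAL}/* $SLURM_SUBMIT_DIR             # copy the results back
exit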
...
Code Block:
#!/bin/csh
#SBATCH --job-name=NameIt
#SBATCH --partition=lts
#SBATCH --qos=nogpu
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --export=ALL
#SBATCH --output=job.out
#SBATCH --error=job.err
#SBATCH --time=1:00:00
set charmm = "/home/shf317/bin/charmm"
# node-local scratch directory for this job
set LOCAL = /scratch/${USER}/${SLURM_JOBID}
# run the CHARMM analysis, writing its output to the scratch directory
${charmm} sdir=${LOCAL} < hydro_thick.inp > /dev/null
# copy the analysis results back to the submission directory
cp ${LOCAL}/* $SLURM_SUBMIT_DIR
sleep 5
exit
The line cp ${LOCAL}/* $SLURM_SUBMIT_DIR copies the analysis results back to the submission directory.
The CHARMM analysis script hydro_thick.inp writes its output only to the scratch folder. The toppar, PSF, and DCD files stay where they are on Sol; we do not copy them to the scratch folder because each of them is read only once.
Code Block:
open write unit 51 card name @scrdir/hydro_thick.plo
The value of scrdir is passed to CHARMM through hy-thick.sh:
Code Block:
${charmm} sdir=${LOCAL} < hydro_thick.inp > /dev/null
...
DCD frequency configuration & manipulation
Upcoming simulations
In the simulation configuration script, reduce the output frequency of the production run, since overly frequent output is rarely useful and takes up a lot of disk space. A NAMD example is listed below; with the 2 fs timestep implied by the original comments (500 steps = 1 ps), 5000 steps corresponds to one frame every 10 ps. In step7.1_production.inp, it should be set:
Code Block:
restartfreq 5000; # 5000 steps = every 10 ps
dcdfreq 5000;
dcdUnitCell yes;
xstFreq 5000;
outputEnergies 5000;
outputTiming 5000;
The original script downloaded from CHARMM-GUI writes output much more frequently:
Code Block:
restartfreq 500; # 500 steps = every 1ps
dcdfreq 1000;
dcdUnitCell yes;
xstFreq 500;
outputEnergies 125;
outputTiming 500;
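If you prefer not to edit these lines by hand, a one-liner such as the following can apply the change. This is only a sketch, assuming GNU sed and the keyword/value layout shown above; note that it leaves trailing comments (such as the "500 steps = every 1ps" remark) untouched.
Code Block:
# hypothetical helper: raise every output frequency in step7.1_production.inp to 5000 steps
sed -i -E 's/^(restartfreq|dcdfreq|xstFreq|outputEnergies|outputTiming)[[:space:]]+[0-9]+;/\1          5000;/' step7.1_production.inp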
Two packages are useful here: CatDCD and DumpDCD.
Code Block:
CatDCD 4.0
catdcd -o outputfile [-otype <filetype>] [-i indexfile]
[-stype <filetype>] [-s structurefile]
[-first firstframe] [-last lastframe] [-stride stride]
[-<filetype>] inputfile1 [-<filetype>] inputfile2 ...
Usage:
Code Block:
catdcd -o step7.1.dcd -first 5 -stride 5 step7.1_production.dcd
The above line reduces a NAMD DCD file to fewer frames: -o step7.1.dcd specifies the output file, -first 5 makes the first frame of the new DCD the 5th frame of the old one, and -stride 5 keeps every 5th frame, so a 500-frame DCD becomes a 100-frame DCD. Finally, step7.1_production.dcd is the input DCD file.
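CatDCD also accepts several input files at once, which is convenient for merging consecutive production chunks while thinning them. The example below is hypothetical (the second file name is a placeholder) and follows the usage synopsis above.
Code Block:
# merge two production chunks into one trajectory, keeping every 5th frame
catdcd -o step7.1-2.dcd -stride 5 step7.1_production.dcd step7.2_production.dcd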
How do we check the result? We can use DumpDCD to inspect the DCD header:
Before dcd frequency manipulation:
Code Block:
[shf317@sol namd]$ dumpdcd step7.1_production.dcd
500 #Number of frames in this file
1000 #Number of previous integration steps
1000 #Frequency (integration steps) for saving of frames
500000 #Number of integration steps in the run that created this file
...
After dcd frequency manipulation:
Code Block:
[shf317@sol namd]$ dumpdcd step7.1.dcd
100
0
1
100
...
...
Anchor: appendix_iopenmm
Appendix I: OpenMM
File /share/Apps/examples/userscripts/run_openmm_scratch.sh is shown below.
Code Block:
#!/bin/csh
#SBATCH --partition=imlab-gpu
#SBATCH --gres=gpu:1
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --export=ALL
#SBATCH -t 48:00:00
module load cuda
module load anaconda/python3
setenv OPENMM_PLUGIN_DIR /share/ceph/woi216group/shared/apps/openmm/lib/plugins
setenv PYTHONPATH /share/ceph/woi216group/shared/apps/openmm/lib/python3.5/site-packages
setenv LD_LIBRARY_PATH /share/ceph/woi216group/shared/apps/openmm/lib:$LD_LIBRARY_PATH
# node-local scratch directory for this job
set LOCAL = /scratch/${USER}/${SLURM_JOBID}
# Production
set init = step5_charmm2omm
set input = step7_production
# cntmin runs from cnt to cntmax; cnt and cntmax (the first and last production
# segment indices) must be defined before this loop (they are not set in the lines above)
set cntmin = ${cnt}
while ( ${cntmin} <= ${cntmax} )
cd $SLURM_SUBMIT_DIR
@ pcnt = ${cntmin} - 1
set istep = step7_${cntmin}
set pstep = step7_${pcnt}
if ( ${cntmin} == 1 ) set pstep = step6.6_equilibration
if ( ! -e ${pstep}.rst ) exit
# run one production segment, writing the restart, trajectory, and log files to scratch
python -u openmm_run.py -i ${input}.inp -t toppar.str -p ${init}.psf -c ${init}.crd -irst ${pstep}.rst -orst ${LOCAL}/${istep}.rst -odcd ${LOCAL}/${istep}.dcd | tee ${LOCAL}/${istep}.out > /dev/null
sleep 2
if ( ! -e ${LOCAL}/${istep}.rst ) exit
# copy this segment's output files back to the submission directory
cp ${LOCAL}/${istep}.* $SLURM_SUBMIT_DIR
sleep 2
@ cntmin = ${cntmin} + 1
end
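Submitting the job is the same as for the NAMD case, assuming cnt and cntmax are defined in the script:
Code Block:
sbatch run_openmm_scratch.sh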
Anchor: appendix_iigromacs
Appendix II: GROMACS
File /share/Apps/examples/userscripts/run_gromacs_scratch.sh is shown below (for single-node jobs only).
...