From 2eecb82cfd68f90a074ceb0e3c1de2d06e03cbe4 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Rui=20Ap=C3=B3stolo?=
Date: Thu, 4 Jul 2024 19:51:08 +0100
Subject: [PATCH 1/2] Added instructions on how to use the gromacs-gpu module

---
 docs/research-software/gromacs.md | 33 ++++++++++++++++++++++++++++---
 1 file changed, 30 insertions(+), 3 deletions(-)

diff --git a/docs/research-software/gromacs.md b/docs/research-software/gromacs.md
index b2241214e..569945ed3 100644
--- a/docs/research-software/gromacs.md
+++ b/docs/research-software/gromacs.md
@@ -17,12 +17,18 @@ non-biological systems, e.g. polymers.
 ## Using GROMACS on ARCHER2
 
 GROMACS is Open Source software and is freely available to all users.
-Three versions are available:
+Three executable versions are available in the standard (CPU-only) modules:
 
 - Parallel MPI/OpenMP, single precision: `gmx_mpi`
 - Parallel MPI/OpenMP, double precision: `gmx_mpi_d`
 - Serial, single precision: `gmx`
 
+We also provide a GPU version of GROMACS that runs on the AMD MI210 GPU nodes. The module is named `gromacs/2022.4-GPU` and can be loaded with:
+
+```bash
+module load gromacs/2022.4-GPU
+```
+
 !!! important
     The `gromacs` modules reset the CPU frequency to the highest possible
     value (2.25 GHz) as this generally achieves the best balance of performance to
@@ -33,8 +39,7 @@
 
 ### Running MPI only jobs
 
-The following script will run a GROMACS MD job using 4 nodes (128x4
-cores) with pure MPI.
+The following script will run a GROMACS MD job using 4 nodes (128x4 cores) with pure MPI.
 
 ```slurm
 #!/bin/bash
@@ -89,6 +94,28 @@ export OMP_NUM_THREADS=8
 srun --distribution=block:block --hint=nomultithread gmx_mpi mdrun -s test_calc.tpr
 ```
 
+### Running GROMACS on the AMD MI210 GPUs
+
+The following script will run a GROMACS MD job using 1 GPU with 1 MPI process and 8 OpenMP threads per MPI process.
+
+```slurm
+#!/bin/bash
+#SBATCH --job-name=mdrun_gpu
+#SBATCH --gpus=1
+#SBATCH --time=00:20:00
+
+# Replace [budget code] below with your project code (e.g. t01)
+#SBATCH --account=[budget code]
+#SBATCH --partition=gpu
+#SBATCH --qos=gpu-shd # or gpu-exc
+
+# Set up the environment
+module load gromacs/2022.4-GPU
+
+export OMP_NUM_THREADS=8
+srun --ntasks=1 --cpus-per-task=8 gmx_mpi mdrun -ntomp 8 -noconfout -s calc.tpr
+```
+
 ## Compiling Gromacs
 
 The latest instructions for building GROMACS on ARCHER2 may be found in

From a21e996cf9ad9ee03cb2a68e43252e2279a44df2 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Rui=20Ap=C3=B3stolo?=
Date: Thu, 4 Jul 2024 20:01:02 +0100
Subject: [PATCH 2/2] Reduced number of modules loaded

---
 docs/research-software/gromacs.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/research-software/gromacs.md b/docs/research-software/gromacs.md
index 569945ed3..0dddeb5d1 100644
--- a/docs/research-software/gromacs.md
+++ b/docs/research-software/gromacs.md
@@ -103,6 +103,8 @@ The following script will run a GROMACS MD job using 1 GPU with 1 MPI process and 8 OpenMP threads per MPI process.
 #SBATCH --job-name=mdrun_gpu
 #SBATCH --gpus=1
 #SBATCH --time=00:20:00
+#SBATCH --hint=nomultithread
+#SBATCH --distribution=block:block
 
 # Replace [budget code] below with your project code (e.g. t01)
 #SBATCH --account=[budget code]
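
As a usage sketch (not part of the patches above), the GPU job script added in PATCH 1/2 would typically be saved to a file and submitted with standard Slurm commands; the filename `mdrun_gpu.slurm` below is an assumption for illustration:

```bash
# Submit the GPU job script introduced by PATCH 1/2
# (the filename mdrun_gpu.slurm is assumed here, not defined by the patch)
sbatch mdrun_gpu.slurm

# Check the job's state in the queue
squeue -u $USER
```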