Merge pull request #617 from RuiApostolo/20240704_gromacs_gpu
Instructions on how to use the gromacs/2022.4-GPU module
juanfrh committed Aug 12, 2024
2 parents cada3ae + a21e996 commit 74cb454
docs/research-software/gromacs.md: 32 additions & 3 deletions

## Using GROMACS on ARCHER2

GROMACS is Open Source software and is freely available to all users.
Three executable versions are available on the normal (CPU-only) modules (see the example after the list):

- Parallel MPI/OpenMP, single precision: `gmx_mpi`
- Parallel MPI/OpenMP, double precision: `gmx_mpi_d`
- Serial, single precision: `gmx`
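
For example, to load the default CPU module and check that the serial binary is available (a minimal sketch, assuming `gromacs` is the default module name):

```bash
module load gromacs   # default CPU build
gmx --version         # serial binary; use gmx_mpi / gmx_mpi_d under srun for parallel runs
```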

We also provide a GPU version of GROMACS that runs on the AMD MI210 GPU nodes. It is named `gromacs/2022.4-GPU` and can be loaded with:

```bash
module load gromacs/2022.4-GPU
```
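
To see which GROMACS modules are currently installed (the exact list of versions may change over time):

```bash
module avail gromacs
```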

!!! important
    The `gromacs` modules reset the CPU frequency to the highest possible value
    (2.25 GHz) as this generally achieves the best balance of performance to
    energy use.
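
If you wish to trade some performance for lower energy use, a lower frequency can be requested for your own jobs; a minimal sketch, assuming the standard Slurm `SLURM_CPU_FREQ_REQ` mechanism (value in kHz) is honoured on the system:

```bash
# In your job script, before the srun line:
# request a 2.0 GHz CPU frequency instead of the default 2.25 GHz
export SLURM_CPU_FREQ_REQ=2000000
```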

### Running MPI only jobs

The following script will run a GROMACS MD job using 4 nodes (128x4 cores) with pure MPI.

```slurm
#!/bin/bash
# Slurm job options (job name, time and input file are illustrative values)
#SBATCH --job-name=mdrun_mpi
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --time=00:20:00
# Replace [budget code] below with your project code (e.g. t01)
#SBATCH --account=[budget code]
#SBATCH --partition=standard
#SBATCH --qos=standard

# Setup the environment
module load gromacs

# Pure MPI: one OpenMP thread per MPI process
export OMP_NUM_THREADS=1
srun --distribution=block:block --hint=nomultithread gmx_mpi mdrun -s test_calc.tpr
```
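
### Running hybrid MPI/OpenMP jobs

If a run benefits from fewer MPI processes per node, a hybrid MPI/OpenMP launch can be used instead. The following is a minimal sketch, assuming 2 nodes with 16 MPI processes per node and 8 OpenMP threads per MPI process (job name, node counts and input file name are illustrative):

```slurm
#!/bin/bash
#SBATCH --job-name=mdrun_hybrid
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=8
#SBATCH --time=00:20:00
# Replace [budget code] below with your project code (e.g. t01)
#SBATCH --account=[budget code]
#SBATCH --partition=standard
#SBATCH --qos=standard

# Setup the environment
module load gromacs

# 8 OpenMP threads per MPI process
export OMP_NUM_THREADS=8
srun --distribution=block:block --hint=nomultithread gmx_mpi mdrun -s test_calc.tpr
```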

### Running GROMACS on the AMD MI210 GPUs

The following script will run a GROMACS MD job using 1 GPU, with 1 MPI process and 8 OpenMP threads per MPI process.

```slurm
#!/bin/bash
#SBATCH --job-name=mdrun_gpu
#SBATCH --gpus=1
#SBATCH --time=00:20:00
#SBATCH --hint=nomultithread
#SBATCH --distribution=block:block
# Replace [budget code] below with your project code (e.g. t01)
#SBATCH --account=[budget code]
#SBATCH --partition=gpu
#SBATCH --qos=gpu-shd  # gpu-shd for shared GPU nodes, or gpu-exc for exclusive node access
# Setup the environment
module load gromacs/2022.4-GPU
export OMP_NUM_THREADS=8
srun --ntasks=1 --cpus-per-task=8 gmx_mpi mdrun -ntomp 8 -noconfout -s calc.tpr
```
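
If the script above is saved as, say, `gromacs_gpu.slurm` (the filename here is just for illustration), submit it from a directory on a work filesystem with:

```bash
sbatch gromacs_gpu.slurm
```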

## Compiling GROMACS

The latest instructions for building GROMACS on ARCHER2 may be found in the
[GitHub repository of build instructions](https://github.com/hpc-uk/build-instructions).
