On Turing nodes, the default gpu partition offers 384 GB of usable memory. The memory allocation is computed on the basis of:
|
|
|
|
|
- 8 GB per reserved CPU core if hyperthreading is deactivated (Slurm option `--hint=nomultithread`).
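As a quick sanity check, this per-core rule can be worked out with shell arithmetic (the core counts below are only illustrative examples):

```shell
# Usable memory = reserved CPU cores x 8 GB (hyperthreading deactivated)
echo "$((12 * 8)) GB"   # 12 reserved cores -> 96 GB
echo "$((10 * 8)) GB"   # 10 reserved cores -> 80 GB
```

With hyperthreading active, each figure is halved.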
|
|
|
|
|
|
|
|
|
The default gpu partition is composed of 4 GPUs and 48 CPU cores: you can reserve, for instance, 1/4 of the node memory per GPU by reserving 12 CPU cores (i.e. 1/4 of the 48 CPU cores) per GPU. However, **it is suggested to choose a slightly smaller value, such as 10**, as the server does not allow full utilisation of the GPUs otherwise.
|
|
|
|
|
|
|
|
|
--cpus-per-task=10 # reserves ~1/4 of the node memory per GPU (default gpu partition)
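A minimal sketch of a submission script combining this option with the others mentioned on this page; the job step, executable name, and the exact `--gres` syntax are assumptions and must be adapted to your site's configuration:

```shell
#!/bin/bash
#SBATCH --partition=gpu        # default gpu partition (4 GPUs, 48 CPU cores per node)
#SBATCH --gres=gpu:1           # reserve 1 GPU (gres syntax may differ per site)
#SBATCH --cpus-per-task=10     # ~1/4 of the node memory per GPU
#SBATCH --hint=nomultithread   # deactivate hyperthreading: 8 GB per reserved core

srun ./my_gpu_program          # hypothetical executable
```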
|
|
|
|
|
|
|
|
|
In this way, you have access to 80 GB of memory per GPU if hyperthreading is deactivated (if not, half of that memory).
|
|
|
|
|
|
|
|
|
|