[[_TOC_]]
## General Information
## Job Submission
When you submit a job with Slurm on Liger, you must specify:
- A partition, which defines the type of compute nodes you wish to reserve.
- A QoS (Quality of Service), which calibrates your resource needs (number of nodes, execution time, etc.)
There is one partition on Liger for Turing's resources, dedicated to GPUs, called `gpus`.
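A job script therefore names both the partition and a QoS. A minimal sketch (the job name, resource values, and executable are illustrative, not site-mandated defaults):

```shell
#!/bin/bash
#SBATCH --job-name=test_gpu      # illustrative job name
#SBATCH --partition=gpus         # the Turing GPU partition on Liger
#SBATCH --qos=qos_gpu-t3         # GPU default QoS (see the QoS policy table)
#SBATCH --nodes=1
#SBATCH --time=01:00:00          # must stay within the QoS time limit

srun ./my_gpu_program            # placeholder executable
```

Submit it with `sbatch myscript.sh`; Slurm rejects the job at submission time if the requested resources exceed the QoS limits.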
## Partition
The `gpus` Slurm partition has been added on `turing01`.
That means on `turing01` we have:
- 12 cores per GPU
- a total of 368 GB of RAM
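These characteristics can be verified directly with standard Slurm inspection commands (no site-specific options assumed):

```shell
# Summarize the gpus partition: node state, time limit, node list
sinfo -p gpus

# Show detailed specs (cores, memory, configured GRES/GPUs) for turing01
scontrol show node turing01
```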
## QoS policy
| Partition | QoS | Time limit | Limit per job | Limit per user | Limit per QoS |
|---|---|---|---|---|---|
| CPU | qos_cpu-t3 (default) | 20 h | 512 nodes | | |
| CPU | qos_cpu-t4 | 100 h | 1 node | 32 nodes | 128 nodes |
| CPU | qos_cpu-dev | 2 h | 128 nodes | 128 nodes | 1000 nodes |
| GPU | qos_gpu-t3 (default) | 20 h | 96 nodes | | |
| GPU | qos_gpu-t4 | 100 h | 1 node | 8 nodes | 32 nodes |
| GPU | qos_gpu-dev | 2 h | 4 nodes | 4 nodes | 64 nodes |
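For example, a short debugging run on the GPU partition can select the development QoS explicitly; the directives below are a sketch, with illustrative resource values:

```shell
#SBATCH --partition=gpus
#SBATCH --qos=qos_gpu-dev    # 2 h time limit, max 4 nodes per job
#SBATCH --nodes=1
#SBATCH --time=00:30:00      # must not exceed the 2 h QoS limit
```

If no `--qos` is given, the default QoS for the partition applies (qos_gpu-t3 for GPU jobs).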
## Requesting GPUs
To request GPU nodes:
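With standard Slurm, GPUs are typically requested through the generic-resource option; this is a hedged sketch assuming the GRES is named `gpu` on Liger, not confirmed site syntax:

```shell
#SBATCH --partition=gpus
#SBATCH --gres=gpu:2         # request 2 GPUs on the node (assumes GRES name "gpu")
```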