IDRIS Jean-Zay cluster JEAN-ZAY/std#

Description#

This plugin handles the environment specifics of the Jean-Zay supercomputer at IDRIS (France). The Jean-Zay documentation is available at http://www.idris.fr/eng/jean-zay/index.html

When submitting jobs on GPU partitions, the generated batch job file header follows this template:

#!/usr/bin/bash

#SBATCH --job-name={job_name}
#SBATCH --output=%x.%j.out
#SBATCH --error=%x.%j.out

#SBATCH --account={project}@{allocation}
#SBATCH -C {constraint}          # Only if 'constraint' is used
#SBATCH --partition={partition}  # Only if 'partition' is used
#SBATCH --qos={qos}

#SBATCH --nodes={nodes}
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task={cores}
#SBATCH --gres=gpu:1
#SBATCH --hint=nomultithread

#SBATCH --time={walltime}
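The two conditional lines in the header are only emitted when the corresponding argument is set. A minimal sketch of that behaviour in Python (this is illustrative, not pyCIF's actual implementation; the function name `render_header` is hypothetical):

```python
def render_header(job_name, project, allocation, qos, nodes, cores,
                  walltime, constraint=None, partition=None):
    """Build a Jean-Zay-style SBATCH header; constraint/partition are optional."""
    lines = [
        "#!/usr/bin/bash",
        f"#SBATCH --job-name={job_name}",
        "#SBATCH --output=%x.%j.out",
        "#SBATCH --error=%x.%j.out",
        f"#SBATCH --account={project}@{allocation}",
    ]
    # The next two directives appear only if the argument is used,
    # mirroring the "Only if ... is used" comments in the template.
    if constraint is not None:
        lines.append(f"#SBATCH -C {constraint}")
    if partition is not None:
        lines.append(f"#SBATCH --partition={partition}")
    lines += [
        f"#SBATCH --qos={qos}",
        f"#SBATCH --nodes={nodes}",
        "#SBATCH --ntasks-per-node=1",
        f"#SBATCH --cpus-per-task={cores}",
        "#SBATCH --gres=gpu:1",
        "#SBATCH --hint=nomultithread",
        f"#SBATCH --time={walltime}",
    ]
    return "\n".join(lines)

print(render_header("inversion", "abc", "v100", "qos_gpu-t3",
                    nodes=1, cores=10, walltime="01:00:00",
                    constraint="v100-32g"))
```

Here the project name "abc" and the other values are placeholders; the accepted values for each field are listed under "YAML arguments" below.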

YAML arguments#

The following arguments are used to configure the plugin. pyCIF raises an exception at initialization if mandatory arguments are not specified, or if any argument does not match the accepted values or types:

Optional arguments#

python : str, optional, default “python”

the Python command used to run sub-instances of pyCIF

python_venv : str, optional

path to the Python virtual environment to use

python_module : str, optional, default “python/3.11.5”

the Python module to load

project : str, optional

project on which to submit sbatch jobs. --account=[project]@[allocation] in sbatch.

allocation : “cpu” or “v100” or “a100”, optional

hour allocation type to use in sbatch jobs. --account=[project]@[allocation] in sbatch.

partition : “cpu_p1” or “prepost” or “visu” or “archive” or “compil” or “compil_h100” or “gpu_p13” or “gpu_p2” or “gpu_p2s” or “gpu_p2l” or “gpu_p5” or “gpu_p6”, optional

partition on which to submit sbatch jobs. --partition=[partition] in sbatch.

constraint : “v100-16g” or “v100-32g” or “a100” or “h100”, optional

constraint to use in sbatch jobs. -C [constraint] in sbatch.

nodes : int, optional, default 1

number of nodes allocated in sbatch jobs. --nodes=[nodes] in sbatch.

cores : int, optional, default 1

number of cores per node allocated in sbatch jobs. --cpus-per-task=[cores] in sbatch.

qos : “qos_cpu-t3” or “qos_cpu-t4” or “qos_cpu-dev” or “qos_gpu-t3” or “qos_gpu-t4” or “qos_gpu-dev” or “qos_gpu_a100-t3” or “qos_gpu_a100-dev” or “qos_gpu_h100-t3” or “qos_gpu_h100-t4” or “qos_gpu_h100-dev”, optional

Quality of Service (QoS) used in sbatch jobs. --qos=[qos] in sbatch.

walltime : str, optional, default “01:00:00”

maximum walltime of sbatch jobs, in the format ‘hh:mm:ss’. --time=[walltime] in sbatch.

submit_sbatch : bool, optional, default True

Submit the job with sbatch. If False, run the job directly within the current instance instead.

Requirements#

This plugin requires the following plugins to run properly:

Requirement name | Requirement type | Explicit definition | Any valid | Default name | Default version
-----------------|------------------|---------------------|-----------|--------------|----------------
model            | Model            | True                | True      | None         | None

YAML template#

Please find below a template for a YAML configuration:

platform:
  plugin:
    name: JEAN-ZAY
    version: std
    type: platform

  # Optional arguments
  python: XXXXX  # str
  python_venv: XXXXX  # str
  python_module: XXXXX  # str
  project: XXXXX  # str
  allocation: XXXXX  # cpu|v100|a100
  partition: XXXXX  # cpu_p1|prepost|visu|archive|compil|compil_h100|gpu_p13|gpu_p2|gpu_p2s|gpu_p2l|gpu_p5|gpu_p6
  constraint: XXXXX  # v100-16g|v100-32g|a100|h100
  nodes: XXXXX  # int
  cores: XXXXX  # int
  qos: XXXXX  # qos_cpu-t3|qos_cpu-t4|qos_cpu-dev|qos_gpu-t3|qos_gpu-t4|qos_gpu-dev|qos_gpu_a100-t3|qos_gpu_a100-dev|qos_gpu_h100-t3|qos_gpu_h100-t4|qos_gpu_h100-dev
  walltime: XXXXX  # str
  submit_sbatch: XXXXX  # bool
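For reference, a filled-in sketch of the template above, requesting a single V100 GPU for four hours (all values are illustrative; in particular the project name "abc" is hypothetical and must be replaced by your own IDRIS project):

```yaml
platform:
  plugin:
    name: JEAN-ZAY
    version: std
    type: platform

  project: abc           # hypothetical project name -> --account=abc@v100
  allocation: v100
  constraint: v100-32g   # 32 GB V100 nodes
  qos: qos_gpu-t3
  nodes: 1
  cores: 10
  walltime: "04:00:00"
  submit_sbatch: true
```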