The sbatch command submits a script to the queue, to be run as soon as the requested resources are available. Usage: sbatch [options] <script>
Options (most are also relevant for srun):
-c n | Allocate n CPUs (per task). |
-t t | Total run time limit (e.g. "2:0:0" for 2 hours, or "2-0" for 2 days and 0 hours). |
--mem-per-cpu m | Allocate m MB per CPU. |
--mem m | Allocate m MB per node (--mem and --mem-per-cpu are mutually exclusive). |
--array=1-k%p | Run the script k times (with indices 1 to k). The index of the current run is available inside the script via the SLURM_ARRAY_TASK_ID environment variable. The optional %p suffix limits the array to at most p simultaneously running jobs (usually nicer to the other users). |
--wrap cmd | Instead of passing a script to sbatch, run the command cmd. |
-M cluster | The cluster to run on. Can be a comma-separated list of clusters, in which case the one with the earliest expected job initiation time is chosen. |
-n n | Allocate resources for n tasks. Default is 1. Only relevant for parallel jobs, e.g. with MPI. |
--gres resource | Specify a generic resource to use. Currently only GPU and vmem are supported, e.g. gpu:2 for two GPUs. On clusters with several types of GPUs, a specific GPU can be requested with, e.g., 'gpu:m60:2' for 2 M60 GPUs; or a minimum of video memory with, e.g., 'gpu:1,vmem:6g'. |
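The options above can also be embedded in the script itself as #SBATCH directives, which sbatch reads before submitting the job. A minimal sketch combining resource requests with a job array (the resource numbers and input_<n>.txt naming scheme are made up for illustration):

```shell
#!/bin/bash
#SBATCH -c 4                  # 4 CPUs per task
#SBATCH -t 1-0                # run for at most 1 day
#SBATCH --mem-per-cpu 2048    # 2048 MB per CPU
#SBATCH --array=1-100%10      # 100 runs, at most 10 at a time

# Each array element handles one input file, selected by its index.
echo "Processing input_${SLURM_ARRAY_TASK_ID}.txt"
```

Submitting the script once (e.g. sbatch myscript.sh, where myscript.sh is a placeholder name) then queues all 100 array elements.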
More info can be found in "man sbatch".
By default both standard output and standard error are directed to a file named slurm-<jobid>.out, where <jobid> is the job allocation number.
You can control this using the following options:
-e, --error=<filename_pattern> | Instructs slurm to redirect the batch script's standard error to the file specified by the filename pattern. |
-o, --output=<filename_pattern> | Instructs slurm to redirect the batch script's standard output to the file specified by the filename pattern. |
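The filename pattern can include replacement symbols such as %j (job ID), %A (array master job ID), and %a (array task index); "man sbatch" lists the full set. A sketch of common usage (myscript.sh is a placeholder name):

```shell
# Send stdout and stderr to separate files named after the job ID.
sbatch -o job-%j.out -e job-%j.err myscript.sh

# For array jobs, give each task its own output file.
sbatch --array=1-10 -o array-%A_%a.out --wrap 'echo "task $SLURM_ARRAY_TASK_ID"'
```

Without -e, specifying only -o sends both streams to the same file, matching the default behavior but with your chosen name.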