Info |
---|
The backend job scheduler is Gridengine, which functions similarly to PBS Pro/OpenPBS. |
Resource Requests
The table below summarises the major resource attributes commonly used in most jobs. Other attributes, which are helpful for fine-tuning how a job should be scheduled, are detailed in the subsections below.
Resources | Attribute | Description | Default Value |
---|---|---|---|
Parallel Environments (-pe) | smp | Allocate X CPUs on the SAME compute node | Optional; if not specified, a job defaults to 1 CPU |
 | mpi | Allocate X CPUs across multiple compute nodes; mainly used by jobs implemented under the Open MPI framework | |
Resource request list (-l) | mem | The memory limit a job can use | 1024M |
 | jobfs | The disk space limit a job can use | 1G |
 | walltime | The run time (elapsed time) limit before a job is killed by the job scheduler | 0:30:0 |
 | ngpus | The number of GPGPUs to allocate | N/A |
Project (-P) | project_name | Request that a job consume the resource quota defined via a Project. See the Projects subsection for details. | Optional; if not specified, the default per-user quota is consumed |
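For instance, these attributes can be combined into a single request. A minimal sketch of a hypothetical job asking for 4 CPUs on one node, 8G memory, 5G local disk and a 2-hour walltime (myscript.sh is a placeholder):
Code Block |
---|
# hypothetical request combining the attributes above
qsub -pe smp 4 -l mem=8G,jobfs=5G,walltime=2:0:0 myscript.sh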
Submit a Batch Job
A batch job can be submitted with the qsub command, in the following pattern:
Code Block |
---|
# submit a job which calls a script (bash, shell, python scripts etc)
qsub -N JOB_NAME -pe smp NUMBER_OF_CPU -l ATTR1=VAL1,ATTR2=VAL2 SCRIPT

# submit a job which calls a BINARY (anything which is not a script, such as sleep, dd etc)
qsub -N JOB_NAME -pe smp NUMBER_OF_CPU -l ATTR1=VAL1,ATTR2=VAL2 -b y BINARY
Examples
Code Block |
---|
# a very big sleep job that needs 16 x CPUs, 2 x GPGPUs, 64GB memory, 10G disk space
qsub -b y -N generic_gpgpu -pe smp 16 -l ngpus=2,mem=65G,jobfs=10G sleep 1m

# a smaller sleep job that requires the specific A2 GPGPU...
qsub -b y -N t1000_gpgpu -pe smp 8 -l ngpus=2,gpu_model=A2,mem=16G,jobfs=10G sleep 1m

# a big job running on multiple H100 nodes inside the same physical rack/cabinet F (rack awareness)
qsub -b y -N h100_gpgpu -pe mpi 256 -l ngpus=2,gpu_model=H100,rack=f,mem=128G,jobfs=100G sleep 1m
Job Status
Job status can be viewed with the qstat command.
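For example, a minimal sketch of the common Gridengine status queries (JOB_ID is a placeholder):
Code Block |
---|
# list your own pending/running jobs
qstat

# show detailed information (resource requests, scheduling messages) for one job
qstat -j JOB_ID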
Submission Script
For larger and more complex analyses, a qsub submission script can be very useful. A submission script contains pre-populated qsub parameters, and can easily be reused, distributed and version controlled. It looks like:
Code Block |
---|
#!/bin/bash
#
# It prints the actual path of the job scratch directory.
#$ -pe smp 8
#$ -j y
#$ -e logs/$JOB_ID_$JOB_NAME.out
#$ -o logs/$JOB_ID_$JOB_NAME.out
#$ -cwd
#$ -N dd_smp
#$ -l mem=1G,jobfs=110G,tmpfree=150G,walltime=00:30:00
#
echo "$HOSTNAME $TMPDIR $jobfs"

# about 107GB
dd if=/dev/zero of=$TMPDIR/dd.test bs=512M count=200
To submit:
Code Block |
---|
z1234567@login01:~$ qsub sge_dd_smp.sh
Your job 31 ("dd_smp") has been submitted
GPGPU
Two attributes, gpu_model and gpu_code, allow jobs to request a particular GPGPU card model or chip. To find out what is available:
Code Block |
---|
# qhost -F | grep gpu_model | sort -u
hf:gpu_model=A2
hf:gpu_model=L4
hf:gpu_model=T1000_8GB
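Presumably gpu_code can be listed the same way (an assumption, following the gpu_model pattern above):
Code Block |
---|
# list the GPGPU chip codes known to the scheduler (assumes gpu_code is
# published as a host complex, like gpu_model above)
qhost -F | grep gpu_code | sort -u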
Here is an example of a submission script requesting that a PyTorch job run specifically on an NVIDIA L4 GPGPU:
Code Block |
---|
#!/bin/bash
#
# For more info, do 'man qsub'
#
#$ -j y
#$ -e $JOB_ID_$JOB_NAME.out
#$ -o $JOB_ID_$JOB_NAME.out
#$ -cwd
#$ -N WP_L4
#$ -l mem=20G,jobfs=10G,tmpfree=12G
#$ -l ngpus=1,gpu_model=L4
#$ -P project_name

echo "$HOSTNAME $TMPDIR $jobfs"

# https://cseunsw.atlassian.net/wiki/x/XgCjBQ for more about conda/mamba
/PATH_TO_CONDA_OR_MAMBA/mamba run -n pytorch_cu121 python3 pytorch_cuda_info.py
Its output looks like:
Code Block |
---|
$ cat logs/12_WP_L4.out
wp-omega-c04.cse.unsw.edu.au /scratch_local/12.1.all.q 10G
Is CUDA supported by this system? True
CUDA version: 12.1
ID of current CUDA device: 0
Name of current CUDA device: NVIDIA L4
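The pytorch_cuda_info.py script itself is not shown on this page; a minimal sketch that would print output like the above (assuming PyTorch is available in the pytorch_cu121 environment) is:
Code Block |
---|
# hypothetical stand-in for pytorch_cuda_info.py, inlined via a heredoc
python3 - <<'EOF'
import torch

# report whether PyTorch can see a CUDA device, and which one
print("Is CUDA supported by this system?", torch.cuda.is_available())
print("CUDA version:", torch.version.cuda)

dev = torch.cuda.current_device()
print("ID of current CUDA device:", dev)
print("Name of current CUDA device:", torch.cuda.get_device_name(dev))
EOF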
Local Scratch
Each compute node is equipped with dedicated local storage which acts as the “Tier 0” scratch storage. When a job starts, it is given a dedicated (but temporary) directory on the scratch storage, and its path is assigned to the variable $TMPDIR. Inside the submission script, $TMPDIR can be utilised in the following pattern:
Code Block |
---|
cp -r $HOME/PROJECT_DATA $TMPDIR/
TOOL_BINARY -input $TMPDIR/PROJECT_DATA -output $TMPDIR/OUTPUT_DATA

# MAKE SURE the output data is copied back to your persistent storage location
cp -r $TMPDIR/OUTPUT_DATA $HOME/
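A minimal sketch of how this pattern fits into a full submission script, assuming the input data is around 15G so jobfs=20G leaves headroom (PROJECT_DATA and TOOL_BINARY remain placeholders):
Code Block |
---|
#!/bin/bash
#$ -N scratch_demo
#$ -cwd
#$ -l mem=1G,jobfs=20G,walltime=01:00:00
#
# stage input into the fast local scratch allocated for this job
cp -r $HOME/PROJECT_DATA $TMPDIR/
# run the tool against the local copy
TOOL_BINARY -input $TMPDIR/PROJECT_DATA -output $TMPDIR/OUTPUT_DATA
# copy results back BEFORE the job ends, because $TMPDIR is wiped on completion
cp -r $TMPDIR/OUTPUT_DATA $HOME/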
Info |
---|
It is generally a good idea to utilise this “Tier 0” local scratch, as it gives the best disk performance compared to network shared storage (such as the home directory). |
Note |
---|
Local scratch is temporary storage!! All data inside will be deleted upon job completion. Make sure any data you need is copied back to persistent storage!! |
Walltime Limit
TBA
Projects
TBA
Rack Awareness
TBA