
Commands

Find below the most important commands for working on Euler. A collection of important Linux bash commands can be found here. The tutorials by Ryan Chadwick on Linux and bash scripting are also very useful.

Job submission

Submit a job

sbatch < submit.tool.slurm.sh
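
For reference, a minimal submission script could look like the sketch below. The job name, resource requests, and the `mytool` command are placeholders for your own setup, not part of the official documentation.

```shell
#!/usr/bin/env bash
#SBATCH --job-name=mytool         # placeholder job name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4         # adjust to your tool
#SBATCH --mem-per-cpu=2G
#SBATCH --time=04:00:00           # wall-clock limit HH:MM:SS

# load the software stack, then run the tool (placeholder names)
source /cluster/project/gdc/shared/stack/GDCstack.sh
module load mytool
mytool --threads "$SLURM_CPUS_PER_TASK" input.fq.gz > output.txt
```

A script like this is submitted with the `sbatch` command shown above.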

Overview of the submitted jobs (running and pending)

jview

The Job-ID and the Array-ID are normally different.

Kill specific job

scancel <Job-ID>/<Array-ID>

Kill all running jobs

scancel --user=$USER

Kill array tasks 15-23 of a job array

scancel <Array-ID>_[15-23]

Monitoring

CPU and memory usage of running jobs (an alias for myjobs)

jeffrun -r
jeffrun -j <Job-ID>/<Array-ID>

CPU and memory usage of a specific finished job

jeff <Job-ID>/<Array-ID>

Or the efficiency of all jobs in the last 24 hours.

jeff24

Get a summary of your resource usage over the last couple of days

jefflow

Get a graphical representation of the usage of your finished jobs.

WebGUI

Connect to a node to check on a running job. This is an advanced command, but it can be useful for checking real-time CPU usage or inspecting data in the local scratch space.

srun --interactive --jobid <job-ID> --pty bash

Software stack

Load GDC software stack

source /cluster/project/gdc/shared/stack/GDCstack.sh

View all available tools

module avail

Search for tool

module avail toolXY
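
In practice these commands are chained: search for the tool, load it, and confirm it is in your environment. `samtools` below is just an example tool name, not necessarily available in the stack.

```shell
source /cluster/project/gdc/shared/stack/GDCstack.sh   # load the GDC stack first

module avail samtools      # search for the tool (example name)
module load samtools       # load it into the environment
module list                # confirm which modules are loaded
```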

Unload all modules

module purge

Data Management

Disk usage of your personal home and your scratch

lquota

Provide the size of the folder "mapping"

du -sh --si mapping

Count the number of files and directories in "raw_data".

find raw_data | wc -l
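
Note that `find` also lists the top directory itself, so the count is one higher than the number of entries inside it. To count only regular files, add `-type f`. A small sketch with hypothetical file names:

```shell
# create a small example tree (hypothetical names)
mkdir -p raw_data/run1
touch raw_data/run1/a.fq.gz raw_data/run1/b.fq.gz

# everything, including raw_data itself
find raw_data | wc -l            # -> 4

# regular files only
find raw_data -type f | wc -l    # -> 2
```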

Archive folder data1

tar cvzf data1.tar.gz data1
#delete the folder
rm -rf data1
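
A safer variant only deletes the original folder after confirming the archive is readable; sketched here with a hypothetical data1 folder created on the spot:

```shell
# hypothetical test data
mkdir -p data1 && echo "example" > data1/sample.txt
tar -czf data1.tar.gz data1

# only delete the original if the archive lists cleanly
tar -tzf data1.tar.gz > /dev/null && rm -rf data1
```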

Extract archive data1.tar.gz

#get overview about the archive
tar -ztvf data1.tar.gz
#extract entire archive
tar xvf data1.tar.gz
#extract specific folder of the archive
tar xvf data1.tar.gz data1/raw

Check file integrity

#Generate list with md5sums
md5sum *fq.gz > md5sums.txt 

#Verify md5sums e.g. on another Server
md5sum --check md5sums.txt    
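
The round trip can be tried on a throwaway file before relying on it for real data; the file names here are placeholders:

```shell
# make a throwaway file standing in for a fastq archive
echo "ACGT" > demo.fq.gz

md5sum demo.fq.gz > md5sums.txt
md5sum --check md5sums.txt      # prints "demo.fq.gz: OK"
```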

Available disk space of the GDC share

/cluster/apps/local/lquota /cluster/work/gdc

Additional documentation

Official SLURM documentation. Keep in mind that Slurm is not Slurm: every cluster has its own configuration and variables.