Bebop and Blues utilize the Slurm Workload Manager (formerly known as Simple Linux Utility for Resource Management or SLURM) for job management. Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters.
As a cluster workload manager, Slurm has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.
The simplest way to become familiar with Slurm and its basic commands is to follow their Quick Start User Guide. In the rest of this page, we’ll cover specific examples and commands. If at any time something becomes unclear, please do contact LCRC support.
Logging Into Bebop
Please be sure to follow our Getting Started documentation to make sure you’ve completed the necessary steps to log in to the LCRC Bebop cluster. Once you’ve done this, you can SSH to Bebop by running the following:
ssh <your_lcrc_username>@bebop.lcrc.anl.gov
The LCRC login nodes should not be used to run jobs. Doing so may impact other users and require the login nodes to be rebooted.
If you need to add a new SSH key because you have not logged in for a while, please read through our documentation here.
As a reminder, Bebop and Blues share the same global GPFS filesystem. All of your home and project directories noted in our storage documentation are available on both clusters.
Projects Used for Job Submission
LCRC resources require a valid project with an allocation to submit jobs. Projects are what keep track of your quarterly allocations. Please see the following page for more information about Projects in LCRC.
To see how much time will be deducted from your project when running jobs on Bebop, please see the following on Core Hour Usage.
When logging into Bebop for the first time, you’ll need to change your default project (as a reference, what LCRC calls projects are referred to as accounts in Slurm).
Bebop and Blues currently use two separate allocation/time databases. Your time and balances on one cluster will not be the same on the other. If you need to check your current account’s (project’s) balance(s), change your default account, etc., please see our documentation below or reference the information here: Project Allocation Queries and Management.
- All Argonne Employee Bebop users will have a default project set to startup-<username> upon first login with a project balance of 20,000 core hours.
- All Non-Argonne Employee Bebop users will have a default project set to external upon first login which has no time allocated and thus you will not be able to submit jobs.
Bebop users of the partitions/queues bdwall, bdw, bdwd, bdws, knlall, knl, knld, knls and knl-preemptable will need to use a project that has a valid allocation.
All Bebop condo node users (that is, all partitions/queues that are NOT publicly available) need to use the project/account name associated with their partition. Please contact your project PI to obtain this information. This project will allow you to submit jobs to your condo nodes free of charge. This project will not work on the shared, publicly available partitions and MUST be used to submit jobs to the condo nodes. You can get a list of all partition names on Bebop that you have access to by running sinfo -o %P. Any partition that is not bdwall, bdw, bdwd, bdws, knlall, knl, knld, knls or knl-preemptable is considered a condo partition.
Setting a Default Project on Bebop
You can set your default project on Bebop with the following command:
lcrc-sbank -s default <project_name>
You can also specify the project name on Bebop in your job submission if you’d like to use something other than your default. With sbatch, this can be done with:
#SBATCH -A <project_name>
Query your Default Project on Bebop
Once you set your default project on Bebop, you can make sure this is set correctly with this command:
lcrc-sbank -q default
Query Project Balances on Bebop
You can query your project balances on Bebop to see how much time you have available and how much you have used.
Query all of your project balances on Bebop:
lcrc-sbank -q balance
Query a specific project balance on Bebop:
lcrc-sbank -q balance <project_name>
Query a Project Transaction History on Bebop
If you’d like to see the transaction history for a project on Bebop, you can run the below.
lcrc-sbank -q trans <project_name>
lcrc-sbank Help Menu
If you need to query the lcrc-sbank help menu at any time, simply run the below.
lcrc-sbank -h
Software Environment Using Lmod
Bebop uses Lmod (Lua Environment Modules) for environment variable management. SoftEnv has been deprecated in LCRC, as most other sites use Environment Modules or Lmod instead. Lmod has several advantages over SoftEnv. For example, it prevents you from loading multiple versions of the same package at the same time, and it prevents you from having multiple compilers and MPI libraries loaded at the same time. See the Lmod User Guide for information on how to use Lmod. If you are used to using SoftEnv and want to know the equivalent commands in Lmod, here is a handy cheat sheet.
By default your Lmod environment will load Intel Compilers, Intel MPI and Intel MKL.
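For reference, a few common Lmod commands are shown below. The gcc and intel module names are only examples; run module avail to see what is actually installed on Bebop.

module list                 # Show the modules currently loaded in your environment
module avail                # List all modules available on the system
module load gcc             # Load a module (example module name)
module swap intel gcc       # Swap one loaded module for another (example module names)
module purge                # Unload all currently loaded modules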
Using Slurm to Submit Jobs
Bebop uses Slurm as the resource manager and job scheduler for the cluster, as described at the top of this page.
The best source of information on using Slurm is their quick start guide here or the man pages.
Below we will outline some general information on the Bebop Slurm partitions and supply some basic submission examples to get you started.
Partition Limits
Bebop currently enforces the following limits on publicly available partitions:
- 32 Running Jobs per user.
- 100 Queued Jobs per user.
- 3 Days (72 Hours) Maximum Walltime on Broadwell Nodes. (bdws is 1 hour)
- 7 Days (168 Hours) Maximum Walltime on KNL Nodes. (knls is 4 hours)
- 1 Hour Default Walltime if not specified.
- bdwall (Broadwell Compute Nodes) is the default partition.
Available Partitions
Bebop has several publicly available partitions (also known as queues) defined. Use the -p option with srun or sbatch to select a partition. The default partition is bdwall. Bebop condo node partitions are not listed below. You can get a list of all partition names on Bebop that you have access to by running sinfo -o %P. Any partition that is not bdwall, bdw, bdwd, bdws, knlall, knl, knld, knls or knl-preemptable is considered a condo partition.
Bebop Partition Name | Description | Number of Nodes | CPU Type | Cores Per Node | Memory Per Node | Local Scratch Disk |
bdwall | All Broadwell Nodes | 664 | Intel Xeon E5-2695v4 | 36 | 128GB DDR4 | 15 GB or 4 TB |
bdw | Broadwell Nodes with 15 GB /scratch | 600 | Intel Xeon E5-2695v4 | 36 | 128GB DDR4 | 15 GB |
bdwd | Broadwell Nodes with 4 TB /scratch | 64 | Intel Xeon E5-2695v4 | 36 | 128GB DDR4 | 4 TB |
bdws | Broadwell Shared Nodes (Oversubscription / Non-Exclusive) | 8 | Intel Xeon E5-2695v4 | 36 | 128GB DDR4 | 15 GB |
knlall | All Knights Landing Nodes | 348 | Intel Xeon Phi 7230 | 64 | 96GB DDR4/16GB MCDRAM | 15 GB or 4 TB |
knl | Knights Landing Nodes with 15GB /scratch | 284 | Intel Xeon Phi 7230 | 64 | 96GB DDR4/16GB MCDRAM | 15 GB |
knld | Knights Landing Nodes with 4TB /scratch | 64 | Intel Xeon Phi 7230 | 64 | 96GB DDR4/16GB MCDRAM | 4 TB |
knls | Knights Landing Shared Nodes (Oversubscription / Non-Exclusive) | 4 | Intel Xeon Phi 7230 | 64 | 96GB DDR4/16GB MCDRAM | 15 GB |
knl-preemptable | All Knights Landing Nodes with restrictions including preemption. Click here for more details. | 348 | Intel Xeon Phi 7230 | 64 | 96GB DDR4/16GB MCDRAM | 15 GB or 4 TB |
Job Submission Commands
The 3 most common tools you will use to submit jobs are sbatch, srun and salloc.
You can reference the table below for a simple, quick cheat sheet on a few examples about jobs in Slurm:
Slurm Command | Description |
sbatch <job_script> | Submit <job_script> to the Scheduler |
srun <options> | Run Parallel Jobs |
salloc <options> | Request an Interactive Job |
squeue | View Job Information |
scancel <job_id> | Delete a Job |
Example Sbatch Job Submission (Simple)
Below you’ll find a couple of very simple submission scripts to get you started, which you can submit with sbatch. For this example, the script can be named myjob.sh:
#!/bin/bash
#SBATCH --job-name=<my_job_name>
#SBATCH --account=<my_lcrc_project_name>
#SBATCH --partition=bdwall
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=36
#SBATCH --output=<my_job_name>.out
#SBATCH --error=<my_job_name>.error
#SBATCH --mail-user=<your email address> # Optional if you require email
#SBATCH --mail-type=ALL # Optional if you require email
#SBATCH --time=01:00:00

# Run My Program
srun /bin/hostname
Example Sbatch Job Submission (MPI)
#!/bin/bash
#SBATCH --job-name=<my_job_name>
#SBATCH --account=<my_lcrc_project_name>
#SBATCH --partition=bdwall
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=36
#SBATCH --output=<my_job_name>.out
#SBATCH --error=<my_job_name>.error
#SBATCH --mail-user=<your email address> # Optional if you require email
#SBATCH --mail-type=ALL # Optional if you require email
#SBATCH --time=01:00:00

# Setup My Environment
module load intel-parallel-studio/cluster.2018.4-ztml34f
export I_MPI_FABRICS=shm:tmi

# Run My Program
srun -n 72 ./helloworld
NOTE: I_MPI_FABRICS=shm:tmi uses shared memory (shm) for communication within a single host, and the tag matching interface (tmi, Omni-Path optimized) for host-to-host communication.
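The ./helloworld binary in the script above is assumed to be an MPI program you compiled yourself. As a minimal sketch, assuming a C source file named helloworld.c, you could build it on a login node with the Intel MPI compiler wrappers after loading the same module:

module load intel-parallel-studio/cluster.2018.4-ztml34f
mpiicc -o helloworld helloworld.c   # Use mpiicpc for C++ or mpiifort for Fortran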
Example Knights Landing (KNL) Sbatch Job Submission (MPI)
Please note that if you wish to use a different set of modes for KNL other than Quadrant & Cache, you’ll need to request a reservation that will require approval. You can request your reservation via this form.
Depending on the number of KNL nodes and the changes to be made, reconfiguring the nodes can take a significant amount of time. Because the nodes are allocated to you while this happens, the reconfiguration time is charged against your core hour usage, in addition to the time it takes the job itself to complete.
#!/bin/bash
#SBATCH --job-name=<my_job_name>
#SBATCH --account=<my_lcrc_project_name>
#SBATCH --partition=knlall
#SBATCH --constraint knl,quad,cache # Other modes require a reservation.
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64
#SBATCH --output=<my_job_name>.out
#SBATCH --error=<my_job_name>.error
#SBATCH --mail-user=<your email address> # Optional if you require email
#SBATCH --mail-type=ALL # Optional if you require email
#SBATCH --time=01:00:00

# Setup My Environment
module load intel-parallel-studio/cluster.2018.4-ztml34f
export I_MPI_FABRICS=shm:tmi

# Run My Program
srun -n 128 ./helloworld
This will run across two KNL nodes, using the Quadrant mode, and Cache MCDRAM mode.
The default setting is quad,cache.
A table of available settings, along with more detailed information about Slurm’s KNL support is available here.
You can then submit this job from a Bebop login node using:
sbatch myjob.sh
Please refer to the sbatch webpage for a full list of options, including environment variables.
Example Interactive Job Submission
There are a couple of ways to run an interactive job on Bebop.
First, you can just get a session on a node by using the srun command in the following way:
srun --pty -p <partition> -t <walltime> /bin/bash
This will drop you onto one node. Once you exit the node, the allocation will be relinquished.
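For example, the following requests an interactive shell on one bdwall node for one hour; the partition and walltime are placeholders to adjust as needed:

srun --pty -p bdwall -N 1 -t 01:00:00 /bin/bash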
If you want more flexibility, you can instead have the system first allocate resources for the job using the salloc command:
salloc -N 2 -p bdwall -t 00:30:00
This will allocate 2 nodes from the bdwall partition for 30 minutes. The job number is printed in the output. This command will not log you into any of your allocated nodes by default.
You can get a list of your allocated nodes and the other Slurm environment variables set by the salloc command by running:
printenv | grep SLURM
After the resources are allocated and the session is granted, use the srun command to run your job:
srun -n 8 ./myprog
This will start 8 tasks on the allocated nodes. If you try to use more resources than you allocated (say, three nodes' worth of resources while you only asked for two), this will create a separate reservation, and the original will continue to run and use hours as well.
When you allocate resources via salloc, you can also now freely SSH to the nodes in your allocation as well if you prefer to run jobs from the nodes themselves.
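As a small sketch, you can list the hostnames in your allocation from within the salloc session and SSH to one of them (the node name below is hypothetical):

scontrol show hostnames $SLURM_JOB_NODELIST   # Expand the allocated node list into individual hostnames
ssh bdw-0010                                  # SSH directly to one of your allocated nodes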
Checking Queues and Jobs
To view job and job step information use squeue.
Here’s a quick example of what the output may look like:
squeue
JOBID PARTITION      NAME  USER ST    TIME NODES NODELIST(REASON)
  999    bdwall test-joba user2  R 2:40:31     2 bdw-[0010-0011]
  998    bdwall test-job2 user1  R   45:20     1 bdwd-0120
  997      knld test-job1 user1  R    3:04     1 knld-0030
Here are also some common options for squeue:
-a | Display information about all jobs in all partitions. This is the default when running squeue with no options. |
-u <user_list> | Request jobs or job steps from a comma separated list of users. The list can consist of user names or user id numbers. |
-j <job_id_list> | Requests a comma separated list of job IDs to display. Defaults to all jobs. |
-l | Report more of the available information for the selected jobs or job steps. |
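These options can be combined. For example, to show detailed information about your own jobs (substitute your LCRC username):

squeue -u <your_lcrc_username> -l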
Deleting a Job
To delete a job, use scancel. This command takes the job ID as its argument. Your job ID is given to you when you submit the job, and you can also retrieve it from the squeue command detailed above.
scancel <job_id>
Other Useful Slurm Commands
scontrol – can be used to report more detailed information about nodes, partitions, jobs, job steps, and configuration.
Common examples:
scontrol show node <node_name> | Shows detailed information about a specific node, or all nodes if no name is given. |
scontrol show partition <partition_name> | Shows detailed information about a specific partition. |
scontrol show job <job_id> | Shows detailed information about a specific job, or all jobs if no job ID is given. |
scontrol update job <job_id> | Change attributes of a submitted job. |
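For example, a hedged sketch of changing a pending job's time limit with scontrol update; the new limit shown is arbitrary:

scontrol update JobId=<job_id> TimeLimit=02:00:00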
For an extensive list of formatting options please consult the scontrol man page.
sinfo – view information about nodes and partitions managed by Slurm.
Common options:
-a, --all | Display information about all partitions. |
-t <states>, --states=<states> | Display nodes in a specific state. Example: idle |
-i <seconds>, --iterate=<seconds> | Print the state on a periodic basis. Sleep for the indicated number of seconds between reports. |
-l, --long | Print more detailed information. |
-n <nodes>, --nodes=<nodes> | Print information only about the specified node(s). Multiple nodes may be comma separated or expressed using a node range expression, for example "bdw-[0001-0007]". |
-o <output_format>, --format=<output_format> | Specify the information to be displayed using an sinfo format string. |
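For example, to list idle bdwall nodes, or to print a custom one-line summary per partition (the format string below is just one possible choice):

sinfo -t idle -p bdwall
sinfo -o "%P %a %l %D %t"   # Partition, availability, time limit, node count, node state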
For an extensive list of formatting options please consult the sinfo man page.
sacct – displays accounting data for all jobs and job steps, and can be used to review information about completed jobs.
Common options:
-S <start_time>, --starttime=<start_time> | Select jobs in any state after the specified time. |
-E <end_time>, --endtime=<end_time> | Select jobs in any state before the specified time. |
Valid time formats are:
HH:MM[:SS] [AM|PM]
MMDD[YY] or MM/DD[/YY] or MM.DD[.YY]
MM/DD[/YY]-HH:MM[:SS]
YYYY-MM-DD[THH:MM[:SS]]
Example:
# sacct -S2014-07-03-11:40 -E2014-07-03-12:00 -X -ojobid,start,end,state
     JobID               Start                 End      State
 --------- ------------------- ------------------- ----------
         2 2014-07-03T11:33:16 2014-07-03T11:59:01  COMPLETED
         3 2014-07-03T11:35:21             Unknown    RUNNING
         4 2014-07-03T11:35:21 2014-07-03T11:45:21  COMPLETED
         5 2014-07-03T11:41:01             Unknown    RUNNING
For an extensive list of formatting options please consult the sacct man page.
sprio – view the factors that comprise a job’s scheduling priority.
sprio is used to view the components of a job’s scheduling priority when the multi-factor priority plugin is installed. sprio is a read-only utility that extracts information from the multi-factor priority plugin. By default, sprio returns information for all pending jobs. Options exist to display specific jobs by job ID and user name.
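For example, to view the priority components of your own pending jobs, or of a single job:

sprio -l -u <your_lcrc_username>
sprio -j <job_id>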
For an extensive list of formatting options please consult the sprio man page.
KNL Preemptable Queue
LCRC has set up a preemptable queue on the KNL partition (named knl-preemptable). This covers all of the same nodes in the knlall partition. The preemptable queue is largely targeted towards users who need to run jobs but do not have sufficient time available in their projects or wish to stretch their allocations further. As the name implies, these jobs can be preempted immediately by normal jobs if the partition (knlall) is full. The main advantage to users is that preemptable jobs are charged at 0.2 (20%) of the normal core-hour rate.
The user has to have an active project with some remaining time to be able to submit jobs to the preemptable queue.
The rules for submitting jobs to the preemptable queue are as follows:
- Maximum job size: 6 nodes
- Maximum time: 24 hours
- Maximum number of running jobs per project: 1
A single user can submit multiple jobs to the preemptable queue if they have more than one project. If a user submits two preemptable jobs in any given project, only one of the jobs will run while the other is queued.
As an example of the core-hour savings, a job run for 24 hours on 6 KNL nodes would be charged:
6x64x24x0.585 = 5391 core-hours (0.585 is the scaling factor for jobs run on the knl partition) whereas in the preemptable queue, the core-hours charged would be:
6x64x24x0.2 = 1843 core-hours (0.2 is the additional scaling factor for preemptable jobs).
It is important that the user have checkpointing available and enabled so intermediate solutions are being saved at regular intervals as the job is running. If a job is preempted, the job can be automatically requeued so it can start again as new resources become available if the requeue flag is included during job submission. It is also important that users make appropriate changes to their scripts and input files to have the requeued job start from the latest saved solution. It is recommended that users periodically clean up their project spaces to delete intermediate solution files they might not require after the preemptable job is completed.
Users wishing to submit jobs to the preemptable queue should include the line #SBATCH --partition=knl-preemptable in their batch script.
It is recommended that users include the following lines in their batch-scripts if they desire the job to be requeued and to be notified if their jobs have been preempted and requeued. This will enable them to make necessary changes to the input file to restart the jobs from the last saved solution (if need be).
#SBATCH --requeue
#SBATCH --mail-user=<your email address>
#SBATCH --mail-type=REQUEUE
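For reference, here is a minimal sketch of a preemptable KNL batch script that combines these directives with the earlier MPI example; the project name, node count, and program are placeholders:

#!/bin/bash
#SBATCH --job-name=<my_job_name>
#SBATCH --account=<my_lcrc_project_name>
#SBATCH --partition=knl-preemptable
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64
#SBATCH --time=24:00:00
#SBATCH --requeue # Requeue the job if it is preempted
#SBATCH --mail-user=<your email address>
#SBATCH --mail-type=REQUEUE # Email if the job is requeued

# Setup My Environment
module load intel-parallel-studio/cluster.2018.4-ztml34f
export I_MPI_FABRICS=shm:tmi

# Run My Program (it should checkpoint regularly so a requeued job can restart from the latest saved solution)
srun -n 128 ./helloworld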
Core Hour Usage
As mentioned, submitting jobs to Bebop requires time allocated to a Project (or what Slurm calls an Account). Our documentation has an extensive write up on this on the following page: Projects in LCRC
Whenever a computing job runs on any computing node, the time the job uses will be counted and recorded as computing used by the associated project. A job must have a project in order to run on the computing nodes and will be assigned to your default project if none has been specified in your job script. ALL jobs submitted via sbatch, srun or salloc will deduct computing core hours from your project.
On Bebop, the Broadwell and KNL nodes charge as follows for each job:
Node Type | Core Hours Charged |
Broadwell | # of Nodes * 36 (# Cores Per Node) * Time Used |
KNL | # of Nodes * 64 (# Cores Per Node) * Time Used * 0.585 |
Projects will be charged for the entire node when a job is run, even if you don’t utilize all of the cores or don’t actually run anything while a node is allocated to you. Anytime a node is allocated, the resource is unavailable for anyone else to use, which is why the full node is charged.
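For example, a job that uses 2 Broadwell nodes for 3 hours is charged 2 x 36 x 3 = 216 core-hours, regardless of how many of those 72 cores your code actually uses.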
Furthermore, after previously benchmarking the Broadwell and KNL nodes and collecting user experiences with their performance, the consensus is that a KNL core is not as productive as a Broadwell core for most LCRC codes. To compensate, a KNL core hour will cost 0.585 bank core-hours.
As a reminder, any non-public condo queues that you belong to DO NOT charge time to run on these nodes.
Compute Node Scratch Space
Bebop currently writes all temporary files on the compute nodes to a 15 GB tmpfs (a 4 TB disk on the diskfull partitions bdwd and knld) at /scratch. You can also write here to temporarily store your run files. Please note that all data will be deleted from this directory once your job completes. You can also change your environment's TMPDIR variable in your job script if you want to set an alternate path.
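As an illustration, here is a minimal single-node sketch of directing temporary files to the node-local scratch disk from within a job script; the directory layout, program, and output file names are placeholders:

# Create a job-specific directory on the node-local scratch disk and point TMPDIR at it
export TMPDIR=/scratch/$SLURM_JOB_ID
mkdir -p $TMPDIR

# Run the program, then copy anything you need to keep back to GPFS before the job ends,
# since /scratch is cleaned up when the job completes
srun ./myprog
cp $TMPDIR/results.dat $HOME/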
Why Isn’t My Job Running Yet?
If today is NOT LCRC Maintenance Day and you find that your job is in the pending (PD) state after running squeue, Slurm will provide a reason for this in the squeue output. Here are a few of the most common reasons your job may not be running.
First, check the reason code by querying your job number in Slurm:
squeue -j <job_id>
Then, you can determine why the job has not started by deciphering this sample reason list:
Reason Code | Description |
AccountNotAllowed | The job isn’t using an account that is allowed on the partition. Certain projects may be restricted to certain partitions. For example, a project may only be allowed to run on the knl partitions. Bebop condo node users must use the 'condo' account when running on their dedicated partitions. |
AssocGrpBillingMinutes | The job doesn’t have enough time in the banking account to begin. |
BadConstraints | The job’s constraints can not be satisfied. |
BeginTime | The job’s earliest start time has not yet been reached. |
Cleaning | The job is being requeued and still cleaning up from its previous execution. |
Dependency | This job is waiting for a dependent job to complete. |
JobHeldAdmin | The job is held by a system administrator. |
JobHeldUser | The job is held by the user. |
NodeDown | A node required by the job is down. |
PartitionNodeLimit | The number of nodes required by this job is outside of its partition's current limits. Can also indicate that required nodes are DOWN or DRAINED. |
PartitionTimeLimit | The job's time limit exceeds its partition's current time limit. |
Priority | One or more higher priority jobs exist for this partition or advanced reservation. |
QOSMaxJobsPerUserLimit | The job’s QOS has reached its maximum job count for the user at one time. |
ReqNodeNotAvail | During LCRC Maintenance Day, you may see this reason. In addition, jobs that have a walltime that runs into a scheduled maintenance period will also show this message. The job TimeLimit should be adjusted accordingly. Otherwise, some node specifically required by the job is not currently available. The node may currently be in use, reserved for another job, in an advanced reservation, DOWN, DRAINED, or not responding. Nodes which are DOWN, DRAINED, or not responding will be identified as part of the job’s “reason” field as “UnavailableNodes”. Such nodes will typically require the intervention of a system administrator to make available. |
Reservation | The job is waiting for its advanced reservation to become available. |
Resources | The job is waiting for resources to become available. |
TimeLimit | The job exhausted its time limit. |
While this is not every reason code, these are the most common on Bebop. You can view the full list of Slurm reason codes here.
Assuming your job is in the Priority/Resources state, you can use the sprio command to get a closer idea on when your job may start based on the priorities of other pending jobs. The priority is the sum of age, fairshare, jobsize and QOS (quality of service).
sprio is described in more detail above; for an extensive list of formatting options please consult the sprio man page.
Command Line Quick Reference Guide
Command | Description |
---|---|
sbatch <script_name> | Submit a job. |
scancel <job_id> | Delete a job. |
squeue | Show queued jobs via the scheduler. |
squeue -u <username> | Show queued jobs from a specific user. |
scontrol show job <job_id> | Provide a detailed status report for a specified job via the scheduler. |
sinfo -t idle | Get a list of all free/idle nodes. |
lcrc-sbank -q balance <project_name> | Query a specific project balance. |
lcrc-sbank -q balance | Query all of your project balances. |
lcrc-sbank -q default | Query your default project. |
lcrc-sbank -s default <project_name> | Change your default project. |
lcrc-sbank -q trans <project_name> | Query all transactions on a project. |
lcrc-quota | Query your global filesystem disk usage. |
Troubleshooting Notes
Bebop uses Intel MPI by default, but if you switch to using MVAPICH2, please note that it was built with the slurm option, which means that mpiexec and mpirun are not available. srun should be used as the process manager.
Contact Information
Please contact [email protected] with any questions you may have.