Running Jobs

With the launch of Bebop, we are using many new components and software stacks, including a new job scheduler and resource manager. For the time being, Bebop and Blues will be using two different pieces of software, so we have detailed how to use each below. In the near future, both clusters will run Slurm.

Bebop

For detailed information on how to run jobs on Bebop, see our documentation: Running Jobs on Bebop.

Bebop utilizes the Slurm Workload Manager (formerly known as Simple Linux Utility for Resource Management or SLURM) for job management. Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters.

As a cluster workload manager, Slurm has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.
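As a sketch of what this looks like in practice, a minimal Slurm batch script might resemble the following. The partition name, task counts, and executable are placeholders and not taken from the Bebop configuration; consult the Running Jobs on Bebop page for the partitions, limits, and modules that actually apply.

    #!/bin/bash
    #SBATCH --job-name=example_job       # name shown in the queue
    #SBATCH --nodes=2                    # number of compute nodes
    #SBATCH --ntasks-per-node=36         # tasks (e.g., MPI ranks) per node; placeholder value
    #SBATCH --time=01:00:00              # wall-clock limit (HH:MM:SS)
    #SBATCH --partition=PARTITION_NAME   # placeholder; use a real Bebop partition
    #SBATCH --output=example_job.%j.out  # output file; %j expands to the job ID

    # Launch the (placeholder) program across the allocated nodes
    srun ./my_program

The script would be submitted with sbatch (for example, sbatch example_job.sh), after which squeue -u $USER shows its state in the queue and scancel followed by the job ID cancels it.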

Blues

For detailed information on how to run jobs on Blues, see our documentation: Running Jobs on Blues.

Blues utilizes the Torque resource manager and the Maui scheduler, which allow users to submit jobs via PBS commands. The Portable Batch System (PBS) is a richly featured workload management system that provides a job scheduling and job management interface on computing resources, including Linux clusters.

With PBS, a user requests resources and submits a job to a queue. The system then takes jobs from the queues, allocates the necessary nodes, and executes the jobs as efficiently as it can.
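By way of illustration, a minimal Torque/PBS batch script might look like the following sketch. The queue name, node and processor counts, and executable are placeholders; see the Running Jobs on Blues page for the queues and limits that actually exist.

    #!/bin/bash
    #PBS -N example_job        # job name shown in the queue
    #PBS -l nodes=2:ppn=16     # nodes and processors per node; placeholder values
    #PBS -l walltime=01:00:00  # wall-clock limit (HH:MM:SS)
    #PBS -q QUEUE_NAME         # placeholder; use a real Blues queue
    #PBS -j oe                 # merge stdout and stderr into one file

    # PBS starts the script in the home directory; change to the submission directory
    cd $PBS_O_WORKDIR

    # Launch the (placeholder) program; an MPI build would typically use mpirun here
    ./my_program

The script would be submitted with qsub (for example, qsub example_job.sh); qstat -u $USER shows its state in the queue, and qdel followed by the job ID removes it.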
