Bebop

Quick Facts

  • 1024 public nodes
  • 128 GB of memory on each node (both Intel Broadwell and Intel Knights Landing nodes)
  • 36 cores (Intel Broadwell) / 64 cores (Intel Knights Landing) per compute node
  • Omni-Path Fabric Interconnect

Available Queues

Bebop has several partitions defined; a partition is similar to a queue. Use the -p option with srun or sbatch to select a partition. The default partition is bdwall. An example batch script follows the partition table below.


| Partition Name | Description | Number of Nodes | CPU Type | Cores Per Node | Memory Per Node |
|----------------|-------------|-----------------|----------|----------------|-----------------|
| debug | For administrative use only. | 1024 | Mixed | Mixed | 128 GB |
| knlall | All KNL nodes. | 352 | Phi 7230 | 64 | 128 GB |
| knld | KNL with 4 TB /scratch disk. | 64 | Phi 7230 | 64 | 128 GB |
| knl | KNL with 15 GB /scratch disk. | 288 | Phi 7230 | 64 | 128 GB |
| bdwall | All Broadwell nodes. | 672 | E5-2695v4 | 36 | 128 GB |
| bdwd | Broadwell with 4 TB /scratch disk. | 64 | E5-2695v4 | 36 | 128 GB |
| bdw | Broadwell with 15 GB /scratch disk. | 608 | E5-2695v4 | 36 | 128 GB |
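
As a minimal sketch of selecting a partition, the batch script below submits to bdwall; the job name, node count, walltime, account, and executable are placeholders you would replace with your own values.

```bash
#!/bin/bash
#SBATCH --job-name=example         # job name shown in squeue (placeholder)
#SBATCH -p bdwall                  # partition (queue) to use; bdwall is the default
#SBATCH -N 2                       # number of nodes (placeholder)
#SBATCH --ntasks-per-node=36       # 36 cores per Broadwell node
#SBATCH -t 01:00:00                # walltime limit (placeholder)
#SBATCH -A your_project            # hypothetical project/account name

# Launch the application across the allocated nodes
srun ./my_mpi_app                  # my_mpi_app is a placeholder executable
```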

File Storage

Most Bebop nodes have no local physical disks, so the OS on every node runs in a diskless environment. Users who currently take advantage of local scratch space on Blues will still have the option of using scratch space carved out of the node's memory (15 GB located at /scratch). A subset of the Broadwell and KNL nodes instead have a 4 TB /scratch disk available. For details on which partitions have which scratch space, refer to the partition table above. The in-memory scratch space is essentially a RAM disk and therefore consumes node memory, so take this into account if you are running a large job that requires a substantial amount of memory.
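
As a rough sketch of using the node-local scratch space inside a job (assuming your data fits within the 15 GB RAM disk or 4 TB disk of your chosen partition), a job script might stage data through /scratch like this; the file names and application are placeholders:

```bash
# Stage input into node-local /scratch, run from there, and copy results
# back to shared storage before the job ends (node-local scratch is not
# preserved after the job finishes).
cp $HOME/input.dat /scratch/                  # input.dat is a placeholder file
cd /scratch
srun ./my_app input.dat > output.log          # my_app is a placeholder executable
cp /scratch/output.log $HOME/results/         # copy results off the node
```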

Please see our detailed description of the file storage used in LCRC here.

Architecture

TBD

Running Jobs on Bebop

For detailed information on how to run jobs on Bebop, see our documentation: Running Jobs on Bebop.

Bebop utilizes the Slurm Workload Manager (formerly known as Simple Linux Utility for Resource Management or SLURM) for job management. Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters.

As a cluster workload manager, Slurm has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.
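
As a brief illustration of day-to-day Slurm usage (the script name, job ID, and resource limits below are placeholders), the most common commands look like this:

```bash
sbatch myjob.sh                            # submit a batch script (myjob.sh is a placeholder)
squeue -u $USER                            # show your pending and running jobs
sinfo -p bdwall                            # show node states in a partition
scancel 123456                             # cancel a job by its job ID (placeholder ID)
srun -p bdw -N 1 -t 00:30:00 --pty bash    # request an interactive shell (placeholder limits)
```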