Blues is one of the clusters that make up the computational resources of LCRC, featuring ~350 publicly usable compute nodes and over 6,000 cores available to all users. In total, Blues comprises 630 compute nodes of varying architectures, including private condo nodes. With roughly twice the computational power of the previous cluster, Fusion, Blues has both similarities to and differences from Fusion that all prospective users should become familiar with.

Quick Facts

  • ~350 public nodes
  • 64 GB (Intel Sandy Bridge)/128 GB (Intel Haswell) of memory on each node
  • 16 cores (Intel Sandy Bridge)/32 cores (Intel Haswell) per compute node
  • QLogic QDR InfiniBand Interconnect (fat-tree topology)
  • Over 6,000 compute cores available
  • Theoretical peak performance of 107.8 TFlops

Available Queues

We have several public queues to choose from on Blues. The table below details the types of nodes in each public queue; it does not include the private condo nodes.

Blues Cluster Queues

Queue     Nodes   Cores   Memory   Processor                          Co-processors              Local Scratch Disk
shared    4       16      64 GB    Sandy Bridge Xeon E5-2670 2.6GHz   —                          15 GB
batch     306     16      64 GB    Sandy Bridge Xeon E5-2670 2.6GHz   —                          15 GB
haswell   40      32      128 GB   Haswell Xeon E5-2698v3 2.3GHz      —                          15 GB
biggpu    6       16      768 GB   Sandy Bridge Xeon E5-2670 2.6GHz   2x NVIDIA Tesla K40m GPU   1 TB

File Storage

One of the ways Blues differs from Fusion is that its compute nodes are completely diskless. Users who take advantage of local scratch space on Fusion will still have the option of using a small scratch space in each node's memory (15 GB, mounted at /scratch). This scratch space is essentially a RAM disk and consumes memory, which should be taken into account if you are running a large job that requires a substantial amount of memory.
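As a minimal sketch, a job might stage data through the RAM-backed scratch space like this (the input file and application name are hypothetical):

```shell
#!/bin/bash
# Stage input to the node-local RAM disk (15 GB at /scratch on Blues).
# Note: anything written here counts against the node's available memory.
mkdir -p /scratch/$USER
cp $HOME/input.dat /scratch/$USER/        # hypothetical input file

# Run against the fast local copy, then stage results back out.
cd /scratch/$USER
./my_app input.dat > output.dat           # hypothetical application
cp output.dat $HOME/results/

# Clean up so the memory is freed for subsequent jobs.
rm -rf /scratch/$USER
```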

Please see our detailed description of the file storage used in LCRC here.


Processors and Compilers

Blues runs on Sandy Bridge processors, which support the AVX (Advanced Vector Extensions) instruction set: 256-bit-wide vector instructions that can significantly increase floating-point throughput over the older SSE instructions. All compilers currently on Blues support the AVX instruction set, in addition to being the latest versions available.
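As a sketch, AVX code generation is typically enabled with a compiler flag; the source file name here is hypothetical, and the exact compiler versions on Blues may differ:

```shell
# GNU compilers: target AVX explicitly, or tune for the build host's CPU.
gcc -O2 -mavx -o myprog myprog.c
gcc -O2 -march=native -o myprog myprog.c   # enables AVX when built on a Sandy Bridge node

# Intel compilers: -xAVX generates AVX code for Sandy Bridge.
icc -O2 -xAVX -o myprog myprog.c
```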

The interconnect also differs between Blues and Fusion. Whereas Fusion relied on a Mellanox-based QDR InfiniBand network, Blues uses a QLogic QDR InfiniBand network. This matters for MPI programs that would use the standard ibverbs library for communication. QLogic provides its own transport layer, called PSM (InfiniPath), that works only with its hardware and offers higher performance than generic ibverbs. This means you should recompile your code against one of the MPI implementations on Blues that supports PSM.

If you use the same MPI that you have been using on Fusion to run a job on Blues, you will get many error messages in your job log about not using the PSM interface. Although this has no effect on whether the job completes normally, you may gain a significant boost in performance by using the PSM interface.
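A hedged sketch of switching to a PSM-aware MPI build follows; the module name shown is purely illustrative, so run `module avail` to see what is actually installed on Blues:

```shell
# List the available MPI builds, then load a PSM-enabled one.
module avail mpi
module load mvapich2-psm          # hypothetical module name

# Recompile with the MPI compiler wrapper so the PSM transport is linked in.
mpicc -O2 -o myprog myprog.c      # hypothetical source file
```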

Diagram of Blues Network

[Image: Blues Network Diagram]

Running Jobs on Blues

For detailed information on how to run jobs on Blues, see our documentation: Running Jobs on Blues.

Blues utilizes the Torque resource manager and Maui scheduler, allowing users to submit jobs via PBS commands. The Portable Batch System (PBS) is a richly featured workload management system providing a job scheduling and job management interface on computing resources, including Linux clusters.

With PBS, a user requests resources and submits a job to a queue. The system will then take jobs from queues, allocate the necessary nodes, and execute them in as efficient a manner as it can.
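The workflow above can be sketched with a minimal Torque/PBS job script; the job name, resource requests, and executable are illustrative, not requirements:

```shell
#!/bin/bash
#PBS -N example_job            # job name (hypothetical)
#PBS -q batch                  # one of the public queues listed above
#PBS -l nodes=2:ppn=16         # 2 Sandy Bridge nodes, 16 cores each
#PBS -l walltime=01:00:00      # 1-hour wall-clock limit
#PBS -j oe                     # merge stdout and stderr into one log

cd $PBS_O_WORKDIR              # start in the directory the job was submitted from
mpiexec ./myprog               # hypothetical MPI executable
```

Submit the script with `qsub job.pbs` and monitor it with `qstat -u $USER`.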