Perceval

Perceval is a Linux high-performance computing (HPC) cluster at Rutgers. Purchased with a $1.72 million grant from the NIH, Perceval went online in early December 2015.

Perceval User Guide

75% of the utilization of this machine is reserved for the investigators who participated in the NIH S10 grant proposal that funded this instrument (NIH grant # 1S10OD012346-01A1). The remaining capacity is available to other Rutgers faculty, staff, and students. Preference will be given to life sciences studies during periods when the instrument is at or near full utilization. Please contact OARC Help or a member of the OARC staff for additional information.

Perceval is accessed by connecting to perceval.rutgers.edu with an SSH client; it is reachable only from within the Rutgers University network.
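
For example, from a machine on the Rutgers network (replace "netid" with your own Rutgers NetID; the name here is a placeholder):

    ssh netid@perceval.rutgers.edu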

Perceval uses the Slurm scheduler. The following Slurm partitions are available on Perceval:

Partition    Purpose                                  Time limit    # of nodes
main         general use (CPU-only)                   7 days        130*
gpu          jobs requiring GPUs                      48 hours      8*
largemem     jobs requiring the large memory node     48 hours      1**
testing      default queue, for testing small jobs    2 hours       2*

* 24 cores/node

** 48 cores/node
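
As a minimal sketch of how to target one of these partitions, a Slurm batch script might look like the following (the job name, core count, and program are placeholders, not site defaults; adjust them for your work):

    #!/bin/bash
    #SBATCH --partition=main         # one of: main, gpu, largemem, testing
    #SBATCH --job-name=myjob         # placeholder job name
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=24     # main-partition nodes have 24 cores each
    #SBATCH --time=01:00:00          # must fit within the partition's time limit
    #SBATCH --output=slurm.%j.out    # %j expands to the Slurm job ID

    srun ./my_program                # placeholder executable

Submit the script with sbatch (e.g., sbatch job.sh) and check its status with squeue -u $USER.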

The Perceval cluster consists of the following components:

  • 132 Lenovo NextScale nx360 servers
    • 2 x 12-core Intel Xeon E5-2680 v3 “Haswell” processors
    • 128 GB RAM
    • 1 TB local scratch disk
  • 8 Lenovo NextScale nx360 servers w/ GPUs
    • 2 x NVIDIA Tesla K80 GPUs
    • 2 x 12-core Intel Xeon E5-2680 v3 “Haswell” processors
    • 128 GB RAM
    • 1 TB local scratch disk
  • 1 Lenovo x3850 “large memory” server
    • 4 x 12-core Intel Xeon E7-4830 v3 “Haswell” processors
    • 1.5 TB RAM
    • 2 TB local scratch disk
  • 2 Lenovo NextScale 3550 Login nodes
    • 2 x 10-core Intel Xeon E5-2650 v3 “Haswell” processors
    • 128 GB RAM
  • Lenovo Solution for GPFS Storage Server
    • GPFS high-performance parallel filesystem
    • 1.3 PB of usable disk space
  • Mellanox FDR InfiniBand Interconnect
  • Slurm 15.08 workload manager
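
As a sketch of how the GPU nodes listed above are typically requested, the following asks Slurm for an interactive shell on a GPU node with one K80 GPU. The generic-resource (GRES) name "gpu" is the usual Slurm convention, but Perceval's exact GRES configuration is an assumption here:

    # Request an interactive session on the gpu partition with one GPU
    # (assumes the site defines a "gpu" GRES; adjust the count as needed)
    srun --partition=gpu --gres=gpu:1 --time=01:00:00 --pty bash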