More About High Performance Computing Group

HPC Computing

The HPC group maintains a wide range of computational resources to fit your needs. All are Linux compute clusters, each attached to large storage platforms to support data-intensive research.

Each cluster is configured to meet the needs of its main users, with varying CPU and GPU configurations. Lilac is currently the primary cluster designated for new research HPC requests. Requests to use alternate clusters are reviewed on a case-by-case basis, depending on the scope of your research requirements and cluster availability.

Existing Infrastructure:

General Compute Cluster: “Lilac”

Lilac is an HPC cluster available to the MSKCC community. The system runs the LSF queueing system, which supports a variety of job submission methods; a sketch of a batch submission script follows the list below.

  • CentOS 7 Linux
  • LSF Scheduler
  • 2,520 CPU cores and >300 NVIDIA GPUs
  • 7 × DGX-1 NVIDIA deep learning servers
  • 2 TB NVMe SSD storage per node
  • 8 PB local compute storage (GPFS)
  • 4 PB archive warm storage (shared)
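As a rough sketch of batch submission under LSF: the resource limits, the GPU request, and the run_analysis command below are hypothetical placeholders, not Lilac's actual configuration.

    #!/bin/bash
    #BSUB -J my_analysis            # job name
    #BSUB -n 4                      # request 4 CPU cores
    #BSUB -W 02:00                  # wall-clock limit of 2 hours
    #BSUB -gpu "num=1"              # request one GPU (LSF 10.1+ syntax)
    #BSUB -o my_analysis.%J.out     # stdout; %J expands to the job ID
    #BSUB -e my_analysis.%J.err     # stderr

    ./run_analysis --input sample_data/

Submit the script with "bsub < my_analysis.lsf" and monitor it with "bjobs".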


Center for Molecular Oncology Cluster: “Luna”

Luna provides computing resources for investigators in the Computational Biology Center and other SKI investigators who need access to significant Linux compute and storage resources, specifically in association with the Center for Molecular Oncology or the Bioinformatics Core. These systems support initial processing of sequence data generated by IGO.

Two computational clusters support Bioinformatics operations: ‘lux’ and ‘luna’. Lux currently functions primarily to support raw processing of genomic data. Luna is the main computational system used within the Bioinformatics Core and by members of the Center for Molecular Oncology. For members of these labs, these systems provide access to shared research data, applications, and packages that support various bioinformatics analysis pipelines; a sketch of an interactive LSF session follows the list below.

  • CentOS 6 (migrating to CentOS 7)
  • LSF Scheduler
  • 2,100 CPU cores
  • 12 PB Isilon compute storage
  • 4 PB GPFS compute storage
  • 4 PB archive warm storage (shared)
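Since Luna also schedules work through LSF, exploratory analysis against the shared data and pipelines can run as an interactive job. The one-liner below is a generic LSF sketch, not a documented Luna recipe; the core count and time limit are arbitrary.

    # open an interactive shell on a compute node: 2 cores, 1-hour limit
    bsub -Is -n 2 -W 01:00 /bin/bash

The -Is flag asks LSF for an interactive job with a pseudo-terminal, so the shell behaves as if you had logged into the compute node directly.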

Clinical Compute Cluster: “Phoenix”

Our scientists analyze tumor DNA and shed light on the complex molecular changes that occur in cancer. These advances are also enabling our doctors to improve diagnoses and develop personalized treatments for patients. We now have extraordinary opportunities to provide insights into what causes cancers to form or progress and suggest strategies for blocking them.

To support this analytical process, a computational infrastructure has been put in place that provides the storage, compute, and networking required to process genomic data to completion for use by physicians and geneticists. This system, known as “Phoenix”, is an isolated compute cluster dedicated to clinical sequence processing; a sketch of a Sun Grid Engine submission script follows the list below. Access to this system requires direct association with the DMP.

  • CentOS 6
  • Sun Grid Engine Scheduler
  • 500 TB Isilon compute storage + 500 TB Isilon storage mirror
  • 500 TB Azure cloud storage data archive
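Because Phoenix runs Sun Grid Engine rather than LSF, jobs are submitted with qsub and configured through #$ directives. The directive values, the parallel environment name, and the process_sample command below are hypothetical, not Phoenix's actual configuration.

    #!/bin/bash
    #$ -N dmp_pipeline              # job name
    #$ -cwd                         # run from the submission directory
    #$ -pe smp 8                    # 8 slots; the PE name is site-specific
    #$ -l h_rt=04:00:00             # hard wall-clock limit of 4 hours
    #$ -o dmp_pipeline.$JOB_ID.out  # stdout; $JOB_ID expands to the job ID
    #$ -j y                         # merge stderr into stdout

    ./process_sample --sample S001

Submit with "qsub dmp_pipeline.sh" and check status with "qstat".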

Other:

  • >100 other compute servers available for dedicated needs