The Clark Science Center’s Computer and Technical Services group (CATS) at Smith College maintains and supports a High Performance Computing (HPC) infrastructure. If you need to make use of HPC resources, please contact us at email@example.com.
The HPC system currently consists of two separate systems: two Dell HPC VM hosts and a Bright cluster containing 8 compute servers. Server specs are outlined below. The virtualization and large amount of RAM on the two VM hosts give us the flexibility to run Windows and Linux VMs at the same time, and allow us to allocate only the resources that are needed.
Current HPC Resources
Virtual HPC Server – 1
512GB RAM, 2 processors with 18 cores each (36 cores, 72 threads with Hyper-Threading).
Virtual HPC Server – 2
512GB RAM, 2 processors with 18 cores each (36 cores, 72 threads with Hyper-Threading), 1 NVIDIA Tesla M60 GPU with 16GB of RAM and 4096 GPU cores.
Bright Computing Cluster
The Bright computing cluster consists of a management node and multiple compute nodes. Software is loaded onto the management node by CATS using the EasyBuild module manager and shared with the compute nodes. The cluster uses the Slurm Workload Manager to schedule and run jobs.
Management Node
384GB RAM, 2 processors with 32 cores
Compute Nodes (8)
192GB RAM, 2 processors with 32 cores each
1500GB RAM, 2 processors with 35 cores each
Dev Compute Nodes (older servers)
256GB RAM, 1 processor with 8 cores each
Cluster management tools: Hadoop, VMware, Bright Cluster Manager, Slurm, EasyBuild
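Work on the Bright cluster is submitted to Slurm as a batch script; software installed by CATS is made available through EasyBuild modules. The sketch below shows the general shape of a job script. The module name (`R/4.2.1`), resource values, and file names are placeholders only; run `module avail` on the management node to see what is actually installed.

```shell
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --ntasks=1                # one task
#SBATCH --cpus-per-task=8         # CPU cores for that task
#SBATCH --mem=16G                 # memory for the whole job
#SBATCH --time=01:00:00           # wall-clock limit (HH:MM:SS)
#SBATCH --output=%x-%j.out        # output file: job name and job id

# Load software provided via EasyBuild modules
# (module name/version is a placeholder; see `module avail`)
module load R/4.2.1

# Run the actual work
Rscript analysis.R
```

Save this as, for example, `job.sh`, submit it with `sbatch job.sh`, and check its status with `squeue -u $USER`.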
Other Computational Software
- Biology — bioinformatics and genetic analyses using: SeqMan NGen, MUSCLE, SOAPdenovo, SPAdes and other packages
- Chemistry — computational chemistry using: Gaussian, ORCA, ADF
- Economics — Monte Carlo and other modeling simulations using: MATLAB, Mathematica, NetLogo, Stata
- Engineering — modeling using Ansys
- Physics — Monte Carlo simulations using: MATLAB
- Statistics and Data Sciences — R