Bouchet
Yale recently joined the Massachusetts Green High Performance Computing Center (MGHPCC), a not-for-profit, state-of-the-art data center dedicated to computationally intensive research. We are pleased to announce that our first installation at MGHPCC will be a new HPC cluster called Bouchet. Bouchet is named for Edward Bouchet (1852-1918), the first self-identified African American to earn a doctorate from an American university, a PhD in physics at Yale University in 1876.
Announcing the Bouchet HPC Cluster
The Bouchet HPC cluster will be available in beta Fall 2024.
The first installation of nodes, approximately 4,000 direct-liquid-cooled cores, will be dedicated to tightly coupled parallel workflows, such as those run in the `mpi` partition on the Grace cluster.
Later this year we will acquire and install a large number of general-purpose compute nodes as well as GPU-enabled compute nodes.
At that point Bouchet will be available to all Yale researchers for computational work involving low-risk data.
Ultimately, Bouchet is the planned successor to both Grace and McCleary, with the majority of HPC infrastructure refreshes and growth deployed at MGHPCC going forward. However, we are still in the early stages of planning that transition and will continue to operate both Grace and McCleary in their current form for a number of years. More details will be provided as we consult with faculty and researchers about the transition and how we can minimize disruptions to critical work. To that end, we will be convening a faculty advisory committee this fall to ensure a smooth migration.
If you have any questions about Yale’s partnership at MGHPCC or the Bouchet cluster, please reach out to us.
Access the Cluster
Once you have an account, the cluster can be accessed via ssh.
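As a sketch of a typical connection, the command below assumes the login hostname follows the YCRC convention of `<clustername>.ycrc.yale.edu`; check the official documentation or your account welcome email for the exact address.

```shell
# Connect with your Yale NetID (replace "netid" with your own).
# The hostname shown is an assumption based on YCRC naming conventions.
ssh netid@bouchet.ycrc.yale.edu
```

If you have set up SSH keys for other YCRC clusters, the same keys are typically used here as well.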
System Status and Monitoring
For system status messages and the schedule for upcoming maintenance, please see the system status page. For a current node-level view of job activity, see the cluster monitor page (VPN only).
Partitions and Hardware
Bouchet is made up of sixty identical compute nodes. Most are reserved for the `mpi` partition, but we have set aside two nodes for debugging and compilation in the `devel` partition.
Public Partitions
See each tab below for more information about the available common use partitions.
Use the devel partition for jobs that require ongoing interaction, for example exploratory analyses or debugging compilations.
Request Defaults
Unless otherwise specified, your jobs will run with the following `salloc` and `sbatch` options for this partition.
```
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
```
Job Limits
Jobs submitted to the devel partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 06:00:00 |
Maximum CPUs per user | 10 |
Maximum memory per user | 70G |
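An interactive devel session within the limits above might look like the following; the specific CPU and memory values are illustrative choices, not requirements.

```shell
# Request an interactive 2-hour session on the devel partition
# (stays under the 10-CPU and 70G per-user limits).
salloc --partition=devel --time=02:00:00 --cpus-per-task=4 --mem-per-cpu=5120
```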
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
2 | 8562Y+ | 64 | 479 | cpugen:emeraldrapids, cpumodel:8562Y+, common:yes |
Use the mpi partition for tightly-coupled parallel programs that make efficient use of multiple nodes. See our MPI documentation if your workload fits this description.
Request Defaults
Unless otherwise specified, your jobs will run with the following `salloc` and `sbatch` options for this partition.
```
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --exclusive --mem=498688
```
Job Limits
Jobs submitted to the mpi partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 1-00:00:00 |
Maximum nodes per group | 58 |
Maximum nodes per user | 58 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
58 | 8562Y+ | 64 | 479 | cpugen:emeraldrapids, cpumodel:8562Y+, common:yes |
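A batch script for this partition might look like the sketch below. The program name is a placeholder, and note that mpi jobs are node-exclusive by default (the `--exclusive` flag in the request defaults), so it is idiomatic to request whole nodes and one task per CPU.

```shell
#!/bin/bash
#SBATCH --partition=mpi
#SBATCH --job-name=my_mpi_job      # illustrative job name
#SBATCH --time=12:00:00            # must be within the 1-day limit
#SBATCH --nodes=4                  # up to 58 nodes per user
#SBATCH --ntasks-per-node=64       # one task per CPU on these nodes

# Launch the MPI program across all allocated tasks.
# "./my_mpi_program" is a placeholder for your own executable.
srun ./my_mpi_program
```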
Storage
Bouchet has access to one filesystem called `roberts`. This is a VAST filesystem, similar to the `palmer` filesystem on Grace and McCleary.
For more details on the different storage spaces, see our Cluster Storage documentation.
You can check your current storage usage and limits by running the `getquota` command. Your `~/project` and `~/scratch` directories are shortcuts; get a list of the absolute paths to your directories with the `mydirectories` command.
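In practice, checking your quotas and locating your storage spaces from a login shell looks like this (both commands are provided by the YCRC environment on the cluster):

```shell
getquota        # show current storage usage and limits for your group
mydirectories   # print the absolute paths behind ~/project and ~/scratch
cd ~/project    # the shortcut resolves to your group's project space
```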
If you want to share data in your Project or Scratch directory, see the permissions page.
For information on data recovery, see the Backups and Snapshots documentation.
Warning
Files stored in `scratch` are purged if they are older than 60 days. You will receive an email alert one week before files are deleted. Artificially extending scratch file expiration is forbidden without explicit approval from the YCRC. Please purchase storage if you need additional long-term storage.
Partition | Root Directory | Storage | File Count | Backups | Snapshots | Notes |
---|---|---|---|---|---|---|
home | /home | 125GiB/user | 500,000 | Yes | >=2 days | |
project | /nfs/roberts/project | 1TiB/group, increase to 4TiB on request | 5,000,000 | No | >=2 days | |
scratch | /nfs/roberts/scratch | 10TiB/group | 15,000,000 | No | No | |