McCleary
McCleary is a shared-use resource for the Yale School of Medicine (YSM), life science researchers elsewhere on campus, and projects related to the Yale Center for Genome Analysis. It consists of a variety of compute nodes networked over Ethernet and mounts several shared filesystems.
McCleary is named for Beatrix McCleary Hamburg, who received her medical degree in 1948 and was the first female African American graduate of Yale School of Medicine. The McCleary HPC cluster is Yale's first direct-to-chip liquid cooled cluster, moving the YCRC and the Yale research computing community into a more environmentally friendly future.
Info
Farnam or Ruddle user? Farnam and Ruddle were both retired in summer 2023. See our explainer for what you need to know about using McCleary and how it differs from Farnam and Ruddle.
Access the Cluster
Once you have an account, the cluster can be accessed via ssh or through the Open OnDemand web portal.
System Status and Monitoring
For system status messages and the schedule for upcoming maintenance, please see the system status page. For a current node-level view of job activity, see the cluster monitor page (VPN only).
Partitions and Hardware
McCleary is made up of several kinds of compute nodes. We group them into (sometimes overlapping) Slurm partitions meant to serve different purposes. By combining the `--partition` and `--constraint` Slurm options you can more finely control what nodes your jobs can run on.
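A minimal sketch of how the two options combine in a batch script (the job name, constraint, and command here are illustrative; `icelake` is one of the node features listed in the tables below):

```
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=day        # which partition to run in
#SBATCH --constraint=icelake   # only run on nodes with the icelake feature
#SBATCH --cpus-per-task=4
#SBATCH --time=02:00:00

echo "running on $(hostname)"
```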
Info
YCGA sequence data user? To avoid being charged for your cpu usage for YCGA-related work, make sure to submit jobs to the ycga partition with -p ycga.
Job Submission Limits
- You are limited to 4 interactive app instances (of any type) at one time. Additional instances will be rejected until you delete older open instances. For OnDemand jobs, closing the window does not terminate the interactive app job. To terminate the job, click the "Delete" button in your "My Interactive Apps" page in the web portal.
- Job submissions are limited to 200 jobs per hour. See the Rate Limits section in the Common Job Failures page for more info.
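One way to stay under the submission rate limit is to pack many similar tasks into a single job array, which counts as one submission; a sketch (the script name and index range are illustrative):

```
sbatch --array=1-500 my_task.sh
```

Inside `my_task.sh`, the environment variable `SLURM_ARRAY_TASK_ID` identifies which task is running.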
Public Partitions
See each tab below for more information about the available common use partitions.
Use the day partition for most batch jobs. This is the default if you don't specify one with `--partition`.
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
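Any of these defaults can be overridden per job. For example, an interactive allocation with more time and memory than the defaults might look like this (the values are illustrative):

```
salloc --partition=day --time=04:00:00 --cpus-per-task=2 --mem-per-cpu=8G
```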
Job Limits
Jobs submitted to the day partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 1-00:00:00 |
Maximum CPUs per group | 512 |
Maximum memory per group | 6000G |
Maximum CPUs per user | 256 |
Maximum memory per user | 3000G |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
26 | 8358 | 64 | 983 | icelake, avx512, 8358, nogpu, bigtmp, common |
5 | 6240 | 36 | 180 | cascadelake, avx512, 6240, nogpu, bigtmp, standard, oldest, common |
Use the devel partition for jobs that require ongoing interaction, for example exploratory analyses or debugging compilations.
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the devel partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 06:00:00 |
Maximum CPUs per user | 4 |
Maximum memory per user | 32G |
Maximum submitted jobs per user | 1 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
7 | 6240 | 36 | 180 | cascadelake, avx512, 6240, nogpu, bigtmp, standard, oldest, common |
Use the week partition for jobs that need a longer runtime than day allows.
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the week partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Maximum CPUs per group | 192 |
Maximum memory per group | 2949G |
Maximum CPUs per user | 192 |
Maximum memory per user | 2949G |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
14 | 8358 | 64 | 983 | icelake, avx512, 8358, nogpu, bigtmp, common |
2 | 6240 | 36 | 180 | cascadelake, avx512, 6240, nogpu, bigtmp, standard, oldest, common |
Use the long partition for jobs that need a longer runtime than week allows.
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=7-00:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the long partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 28-00:00:00 |
Maximum CPUs per group | 36 |
Maximum CPUs per user | 36 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
3 | 6240 | 36 | 180 | cascadelake, avx512, 6240, nogpu, bigtmp, standard, oldest, common |
Use the transfer partition to stage data for your jobs to and from cluster storage.
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the transfer partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 1-00:00:00 |
Maximum CPUs per user | 4 |
Maximum running jobs per user | 4 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
2 | 72F3 | 8 | 227 | milan, 72F3, nogpu, standard, common |
Use the gpu partition for jobs that make use of GPUs. You must request GPUs explicitly with the `--gpus` option in order to use them. For example, `--gpus=gtx1080ti:2` would request 2 GeForce GTX 1080Ti GPUs per node.
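As a sketch, a batch script for this partition might request GPUs like so (the GPU type and counts are illustrative; see the table below for what each node offers):

```
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gpus=a5000:2       # two A5000 GPUs on one node
#SBATCH --cpus-per-task=8
#SBATCH --time=12:00:00

nvidia-smi    # list the GPUs allocated to this job
```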
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the `--gpus` option.
Job Limits
Jobs submitted to the gpu partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 2-00:00:00 |
Maximum GPUs per group | 24 |
Maximum GPUs per user | 12 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
9 | 6326 | 32 | 206 | a5000 | 4 | 24 | icelake, avx512, 6326, doubleprecision, a5000, common |
1 | 8358 | 64 | 983 | a100 | 4 | 80 | icelake, avx512, 8358, doubleprecision, bigtmp, common, a100, a100-80g |
1 | 8358 | 64 | 983 | a100 | 4 | 80 | icelake, avx512, 8358, gpu, bigtmp, common, doubleprecision, a100, a100-80g |
1 | 8358 | 64 | 984 | a100 | 4 | 80 | icelake, avx512, 8358, doubleprecision, bigtmp, common, a100, a100-80g |
3 | 5222 | 8 | 163 | rtx3090 | 4 | 24 | cascadelake, avx512, 5222, doubleprecision, common, rtx3090 |
4 | 5222 | 8 | 163 | rtx5000 | 4 | 16 | cascadelake, avx512, 5222, doubleprecision, common, bigtmp, rtx5000 |
Use the gpu_devel partition to debug jobs that make use of GPUs, or to develop GPU-enabled code.
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the `--gpus` option.
Job Limits
Jobs submitted to the gpu_devel partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 06:00:00 |
Maximum CPUs per user | 10 |
Maximum GPUs per user | 2 |
Maximum submitted jobs per user | 2 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
3 | 6326 | 32 | 206 | a5000 | 4 | 24 | icelake, avx512, 6326, doubleprecision, a5000, common |
2 | 5222 | 8 | 163 | rtx5000 | 4 | 16 | cascadelake, avx512, 5222, doubleprecision, common, bigtmp, rtx5000 |
1 | 5222 | 8 | 163 | rtx3090 | 4 | 24 | cascadelake, avx512, 5222, doubleprecision, common, rtx3090 |
1 | 6240 | 36 | 352 | a100 | 4 | 40 | cascadelake, avx512, 6240, doubleprecision, common, bigtmp, oldest, a100, a100-40g |
Use the bigmem partition for jobs that have memory requirements other partitions can't handle.
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the bigmem partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 1-00:00:00 |
Maximum CPUs per user | 32 |
Maximum memory per user | 3960G |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
4 | 6346 | 32 | 3960 | icelake, avx512, 6346, nogpu, bigtmp, common |
3 | 6240 | 36 | 1486 | cascadelake, avx512, 6240, nogpu, pi, bigtmp, oldest |
2 | 6234 | 16 | 1486 | cascadelake, avx512, 6234, nogpu, common, bigtmp |
Use the scavenge partition to run preemptable jobs on more resources than normally allowed. For more information about scavenge, see the Scavenge documentation.
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the `--gpus` option.
Job Limits
Jobs submitted to the scavenge partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 1-00:00:00 |
Maximum CPUs per user | 1000 |
Maximum memory per user | 20000G |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
48 | 8362 | 64 | 479 | | | | icelake, avx512, 8362, nogpu, standard, pi |
1 | 8358 | 64 | 1007 | a5000 | 8 | 24 | icelake, avx512, 8358, doubleprecision, bigtmp, pi, a5000 |
17 | 6326 | 32 | 206 | a5000 | 4 | 24 | icelake, avx512, 6326, doubleprecision, a5000, common |
2 | 6326 | 32 | 984 | a100 | 4 | 80 | icelake, avx512, 6326, doubleprecision, pi, a100, a100-80g |
40 | 8358 | 64 | 983 | | | | icelake, avx512, 8358, nogpu, bigtmp, common |
1 | 8358 | 64 | 983 | a100 | 4 | 80 | icelake, avx512, 8358, doubleprecision, bigtmp, common, a100, a100-80g |
1 | 8358 | 64 | 983 | a100 | 4 | 80 | icelake, avx512, 8358, doubleprecision, bigtmp, pi, a100, a100-80g |
1 | 8358 | 64 | 983 | a100 | 4 | 80 | icelake, avx512, 8358, gpu, bigtmp, common, doubleprecision, a100, a100-80g |
1 | 8358 | 64 | 984 | a100 | 4 | 80 | icelake, avx512, 8358, doubleprecision, bigtmp, common, a100, a100-80g |
4 | 6346 | 32 | 1991 | | | | icelake, avx512, 6346, nogpu, pi |
4 | 6326 | 32 | 479 | a5000 | 4 | 24 | icelake, avx512, 6326, doubleprecision, a5000, pi |
1 | 8358 | 64 | 1007 | l40s | 8 | 48 | icelake, avx512, 8358, doubleprecision, pi, bigtmp, l40s |
4 | 6346 | 32 | 3960 | | | | icelake, avx512, 6346, nogpu, bigtmp, common |
41 | 6240 | 36 | 180 | | | | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |
4 | 6240 | 36 | 730 | | | | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |
4 | 6240 | 36 | 352 | | | | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |
12 | 6240 | 36 | 180 | | | | cascadelake, avx512, 6240, nogpu, bigtmp, standard, oldest, common |
9 | 6240 | 36 | 163 | | | | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |
2 | 6240 | 36 | 166 | | | | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |
3 | 5222 | 8 | 163 | rtx3090 | 4 | 24 | cascadelake, avx512, 5222, doubleprecision, common, rtx3090 |
6 | 6240 | 36 | 1486 | | | | cascadelake, avx512, 6240, nogpu, pi, bigtmp, oldest |
10 | 8268 | 48 | 352 | | | | cascadelake, avx512, 8268, nogpu, bigtmp, pi |
1 | 6248r | 48 | 352 | | | | cascadelake, avx512, 6248r, nogpu, pi, bigtmp |
2 | 6234 | 16 | 1486 | | | | cascadelake, avx512, 6234, nogpu, common, bigtmp |
1 | 6240 | 36 | 352 | v100 | 4 | 16 | cascadelake, avx512, 6240, pi, oldest, v100 |
4 | 5222 | 8 | 163 | rtx5000 | 4 | 16 | cascadelake, avx512, 5222, doubleprecision, common, bigtmp, rtx5000 |
8 | 5222 | 8 | 163 | rtx5000 | 4 | 16 | cascadelake, avx512, 5222, doubleprecision, pi, bigtmp, rtx5000 |
2 | 6240 | 36 | 352 | a100 | 4 | 40 | cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, a100, a100-40g |
1 | 6226r | 32 | 163 | rtx3090 | 4 | 24 | cascadelake, avx512, 6226r, doubleprecision, pi, rtx3090 |
2 | 6240 | 36 | 163 | rtx2080ti | 4 | 11 | cascadelake, avx512, 6240, singleprecision, pi, bigtmp, oldest, rtx2080ti |
1 | 6242 | 32 | 981 | rtx8000 | 2 | 48 | cascadelake, avx512, 6242, doubleprecision, pi, bigtmp, oldest, rtx8000 |
1 | 6240 | 36 | 352 | rtx3090 | 8 | 24 | cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090 |
1 | 6240 | 36 | 730 | a100 | 4 | 40 | cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, a100, a100-40g |
1 | 6240 | 36 | 163 | rtx3090 | 4 | 24 | cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090 |
1 | 6240 | 36 | 163 | rtx3090 | 8 | 24 | cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090 |
1 | 6132 | 28 | 730 | | | | skylake, avx512, 6132, nogpu, standard, bigtmp, pi |
2 | 6132 | 28 | 163 | | | | skylake, avx512, 6132, nogpu, standard, bigtmp, pi |
2 | 5122 | 8 | 163 | rtx2080 | 4 | 8 | skylake, avx512, 5122, singleprecision, pi, rtx2080 |
Use the scavenge_gpu partition to run preemptable jobs on more GPU resources than normally allowed. For more information about scavenge, see the Scavenge documentation.
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the `--gpus` option.
Job Limits
Jobs submitted to the scavenge_gpu partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 1-00:00:00 |
Maximum GPUs per group | 100 |
Maximum GPUs per user | 64 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
1 | 8358 | 64 | 1007 | a5000 | 8 | 24 | icelake, avx512, 8358, doubleprecision, bigtmp, pi, a5000 |
17 | 6326 | 32 | 206 | a5000 | 4 | 24 | icelake, avx512, 6326, doubleprecision, a5000, common |
2 | 6326 | 32 | 984 | a100 | 4 | 80 | icelake, avx512, 6326, doubleprecision, pi, a100, a100-80g |
1 | 8358 | 64 | 983 | a100 | 4 | 80 | icelake, avx512, 8358, doubleprecision, bigtmp, common, a100, a100-80g |
1 | 8358 | 64 | 983 | a100 | 4 | 80 | icelake, avx512, 8358, doubleprecision, bigtmp, pi, a100, a100-80g |
1 | 8358 | 64 | 983 | a100 | 4 | 80 | icelake, avx512, 8358, gpu, bigtmp, common, doubleprecision, a100, a100-80g |
1 | 8358 | 64 | 984 | a100 | 4 | 80 | icelake, avx512, 8358, doubleprecision, bigtmp, common, a100, a100-80g |
4 | 6326 | 32 | 479 | a5000 | 4 | 24 | icelake, avx512, 6326, doubleprecision, a5000, pi |
1 | 8358 | 64 | 1007 | l40s | 8 | 48 | icelake, avx512, 8358, doubleprecision, pi, bigtmp, l40s |
3 | 5222 | 8 | 163 | rtx3090 | 4 | 24 | cascadelake, avx512, 5222, doubleprecision, common, rtx3090 |
1 | 6240 | 36 | 352 | v100 | 4 | 16 | cascadelake, avx512, 6240, pi, oldest, v100 |
4 | 5222 | 8 | 163 | rtx5000 | 4 | 16 | cascadelake, avx512, 5222, doubleprecision, common, bigtmp, rtx5000 |
8 | 5222 | 8 | 163 | rtx5000 | 4 | 16 | cascadelake, avx512, 5222, doubleprecision, pi, bigtmp, rtx5000 |
2 | 6240 | 36 | 352 | a100 | 4 | 40 | cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, a100, a100-40g |
1 | 6226r | 32 | 163 | rtx3090 | 4 | 24 | cascadelake, avx512, 6226r, doubleprecision, pi, rtx3090 |
2 | 6240 | 36 | 163 | rtx2080ti | 4 | 11 | cascadelake, avx512, 6240, singleprecision, pi, bigtmp, oldest, rtx2080ti |
1 | 6242 | 32 | 981 | rtx8000 | 2 | 48 | cascadelake, avx512, 6242, doubleprecision, pi, bigtmp, oldest, rtx8000 |
1 | 6240 | 36 | 352 | rtx3090 | 8 | 24 | cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090 |
1 | 6240 | 36 | 730 | a100 | 4 | 40 | cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, a100, a100-40g |
1 | 6240 | 36 | 163 | rtx3090 | 4 | 24 | cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090 |
1 | 6240 | 36 | 163 | rtx3090 | 8 | 24 | cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090 |
2 | 5122 | 8 | 163 | rtx2080 | 4 | 8 | skylake, avx512, 5122, singleprecision, pi, rtx2080 |
Private Partitions
With few exceptions, jobs submitted to private partitions are not considered when calculating your group's Fairshare. Your group can purchase additional hardware for private use, which we will make available as a `pi_groupname` partition. These nodes are purchased by you, but supported and administered by us. After vendor support expires, we retire compute nodes. Compute nodes can range from $10K to upwards of $50K depending on your requirements. If you are interested in purchasing nodes for your group, please contact us.
PI Partitions
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the `--gpus` option.
Job Limits
Jobs submitted to the pi_bunick partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
1 | 6240 | 36 | 352 | a100 | 4 | 40 | cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, a100, a100-40g |
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the `--gpus` option.
Job Limits
Jobs submitted to the pi_butterwick partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
1 | 6240 | 36 | 352 | a100 | 4 | 40 | cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, a100, a100-40g |
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_chenlab partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 14-00:00:00 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
1 | 8268 | 48 | 352 | cascadelake, avx512, 8268, nogpu, bigtmp, pi |
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=1-00:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the `--gpus` option.
Job Limits
Jobs submitted to the pi_cryo_realtime partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 14-00:00:00 |
Maximum GPUs per user | 12 |
Maximum running jobs per user | 2 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
1 | 6326 | 32 | 206 | a5000 | 4 | 24 | icelake, avx512, 6326, doubleprecision, a5000, common |
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=1-00:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the `--gpus` option.
Job Limits
Jobs submitted to the pi_cryoem partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 4-00:00:00 |
Maximum CPUs per user | 32 |
Maximum GPUs per user | 12 |
Maximum running jobs per user | 2 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
6 | 6326 | 32 | 206 | a5000 | 4 | 24 | icelake, avx512, 6326, doubleprecision, a5000, common |
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_dewan partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
2 | 6240 | 36 | 163 | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the `--gpus` option.
Job Limits
Jobs submitted to the pi_dijk partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
1 | 6240 | 36 | 352 | rtx3090 | 8 | 24 | cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090 |
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_dunn partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
1 | 6240 | 36 | 163 | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_edwards partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
1 | 6240 | 36 | 163 | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the `--gpus` option.
Job Limits
Jobs submitted to the pi_falcone partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
1 | 6240 | 36 | 163 | | | | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |
1 | 6240 | 36 | 1486 | | | | cascadelake, avx512, 6240, nogpu, pi, bigtmp, oldest |
1 | 6240 | 36 | 352 | v100 | 4 | 16 | cascadelake, avx512, 6240, pi, oldest, v100 |
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_galvani partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
7 | 8268 | 48 | 352 | cascadelake, avx512, 8268, nogpu, bigtmp, pi |
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_gerstein partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
1 | 6132 | 28 | 730 | skylake, avx512, 6132, nogpu, standard, bigtmp, pi |
2 | 6132 | 28 | 163 | skylake, avx512, 6132, nogpu, standard, bigtmp, pi |
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the `--gpus` option.
Job Limits
Jobs submitted to the pi_gerstein_gpu partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
1 | 8358 | 64 | 983 | a100 | 4 | 80 | icelake, avx512, 8358, doubleprecision, bigtmp, pi, a100, a100-80g |
1 | 6240 | 36 | 163 | rtx3090 | 4 | 24 | cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090 |
1 | 6240 | 36 | 163 | rtx3090 | 8 | 24 | cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090 |
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the `--gpus` option.
Job Limits
Jobs submitted to the pi_hall partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 28-00:00:00 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
2 | 6326 | 32 | 984 | a100 | 4 | 80 | icelake, avx512, 6326, doubleprecision, pi, a100, a100-80g |
39 | 6240 | 36 | 180 | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_hall_bigmem partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 28-00:00:00 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
2 | 6240 | 36 | 1486 | cascadelake, avx512, 6240, nogpu, pi, bigtmp, oldest |
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_jetz partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for `--cpus-per-task` and `--mem` can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
2 | 8358 | 64 | 1991 | icelake, avx512, 8358, nogpu, bigtmp, pi |
4 | 6240 | 36 | 730 | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |
4 | 6240 | 36 | 352 | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |
Request Defaults
Unless specified otherwise, your jobs will run with the following `salloc` and `sbatch` options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_kleinstein partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
2 | 6240 | 36 | 163 | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.
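As a hedged sketch, a batch script requesting one of the A100 GPUs listed in the table below might look like the following (the script body and filename are placeholders, not a recommended workload):

```shell
# Hypothetical batch script requesting one A100 GPU on pi_krishnaswamy.
cat > gpu_job.sh <<'EOF'
#!/bin/bash
#SBATCH --partition=pi_krishnaswamy
#SBATCH --gpus=a100:1
#SBATCH --cpus-per-task=4
#SBATCH --mem=32G
nvidia-smi          # placeholder for your GPU workload
EOF
grep -- '--gpus=' gpu_job.sh   # prints: #SBATCH --gpus=a100:1
```

The same `--gpus` flag works on the salloc command line for interactive jobs.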
Job Limits
Jobs submitted to the pi_krishnaswamy partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
1 | 6240 | 36 | 730 | a100 | 4 | 40 | cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, a100, a100-40g |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_ma partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
1 | 8268 | 48 | 352 | cascadelake, avx512, 8268, nogpu, bigtmp, pi |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_medzhitov partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
2 | 6240 | 36 | 166 | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_miranker partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
1 | 6248r | 48 | 352 | cascadelake, avx512, 6248r, nogpu, pi, bigtmp |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_ohern partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
4 | 8358 | 64 | 984 | icelake, avx512, 8358, nogpu, bigtmp, pi |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.
Job Limits
Jobs submitted to the pi_reinisch partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
2 | 5122 | 8 | 163 | rtx2080 | 4 | 8 | skylake, avx512, 5122, singleprecision, pi, rtx2080 |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_sestan partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
2 | 8358 | 64 | 1991 | icelake, avx512, 8358, nogpu, bigtmp, pi |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.
Job Limits
Jobs submitted to the pi_sigworth partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
1 | 6240 | 36 | 163 | rtx2080ti | 4 | 11 | cascadelake, avx512, 6240, singleprecision, pi, bigtmp, oldest, rtx2080ti |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.
Job Limits
Jobs submitted to the pi_sindelar partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
1 | 6240 | 36 | 163 | rtx2080ti | 4 | 11 | cascadelake, avx512, 6240, singleprecision, pi, bigtmp, oldest, rtx2080ti |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=1-00:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.
Job Limits
Jobs submitted to the pi_tomography partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 4-00:00:00 |
Maximum CPUs per user | 32 |
Maximum GPUs per user | 24 |
Maximum running jobs per user | 2 |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
8 | 5222 | 8 | 163 | rtx5000 | 4 | 16 | cascadelake, avx512, 5222, doubleprecision, pi, bigtmp, rtx5000 |
1 | 6242 | 32 | 981 | rtx8000 | 2 | 48 | cascadelake, avx512, 6242, doubleprecision, pi, bigtmp, oldest, rtx8000 |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
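Overriding the defaults means adding the corresponding directives to your batch script. A hedged sketch for the pi_tomography partition, staying within the per-user limits above (the workload line is a placeholder):

```shell
# Hypothetical pi_tomography batch script overriding the 1-hour default.
cat > tomo_job.sh <<'EOF'
#!/bin/bash
#SBATCH --partition=pi_tomography
#SBATCH --time=2-00:00:00        # up from the 01:00:00 default; 4-day max
#SBATCH --cpus-per-task=8        # within the 32-CPU-per-user limit
#SBATCH --gpus=rtx5000:4         # one full rtx5000 node's worth of GPUs
#SBATCH --mem=120G               # fits the 163 GiB rtx5000 nodes
my_tomography_program            # placeholder for your workload
EOF
grep -c '^#SBATCH' tomo_job.sh   # prints 5
```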
Job Limits
Jobs submitted to the pi_townsend partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
2 | 6240 | 36 | 180 | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_tsang partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
4 | 8358 | 64 | 983 | icelake, avx512, 8358, nogpu, bigtmp, pi |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_ya-chi_ho partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
1 | 8268 | 48 | 352 | cascadelake, avx512, 8268, nogpu, bigtmp, pi |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
GPU jobs need GPUs!
Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.
Job Limits
Jobs submitted to the pi_yong_xiong partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
---|---|---|---|---|---|---|---|
1 | 8358 | 64 | 1007 | a5000 | 8 | 24 | icelake, avx512, 8358, doubleprecision, bigtmp, pi, a5000 |
4 | 6326 | 32 | 479 | a5000 | 4 | 24 | icelake, avx512, 6326, doubleprecision, a5000, pi |
1 | 8358 | 64 | 1007 | l40s | 8 | 48 | icelake, avx512, 8358, doubleprecision, pi, bigtmp, l40s |
1 | 6226r | 32 | 163 | rtx3090 | 4 | 24 | cascadelake, avx512, 6226r, doubleprecision, pi, rtx3090 |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the pi_zhao partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 7-00:00:00 |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
2 | 6240 | 36 | 163 | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |
YCGA Partitions
The following partitions are intended for projects related to the Yale Center for Genome Analysis. Please do not use these partitions for other projects. Access is granted on a group basis. If you need access to these partitions, please contact us to get approved and added.
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the ycga partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 2-00:00:00 |
Maximum CPUs per group | 1024 |
Maximum memory per group | 3934G |
Maximum CPUs per user | 256 |
Maximum memory per user | 1916G |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
40 | 8362 | 64 | 479 | icelake, avx512, 8362, nogpu, standard, pi |
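As noted at the top of this page, YCGA-related work should be submitted with -p ycga to avoid cpu charges. A hedged sketch of a submission command within the per-user limits above (the script name and resource values are placeholders):

```shell
# Hypothetical ycga submission: 16 CPUs for one day, well under the
# 256-CPU / 2-day per-user limits.
args="-p ycga --time=1-00:00:00 --cpus-per-task=16 --mem-per-cpu=5120"
echo "sbatch $args pipeline.sh"
```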
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
2 | 8362 | 64 | 479 | icelake, avx512, 8362, nogpu, standard, pi |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the ycga_bigmem partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 4-00:00:00 |
Maximum CPUs per user | 64 |
Maximum memory per user | 1991G |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
4 | 6346 | 32 | 1991 | icelake, avx512, 6346, nogpu, pi |
Request Defaults
Unless otherwise specified, your jobs will run with the following salloc and sbatch options for this partition:
--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
Job Limits
Jobs submitted to the ycga_long partition are subject to the following limits:
Limit | Value |
---|---|
Maximum job time limit | 14-00:00:00 |
Maximum CPUs per group | 64 |
Maximum memory per group | 479G |
Maximum CPUs per user | 32 |
Maximum memory per user | 239G |
Available Compute Nodes
Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.
Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
---|---|---|---|---|
6 | 8362 | 64 | 479 | icelake, avx512, 8362, nogpu, standard, pi |
Public Datasets
We host datasets of general interest in a loosely organized directory tree in /gpfs/gibbs/data:
├── alphafold-2.3
├── alphafold-2.2 (deprecated)
├── alphafold-2.0 (deprecated)
├── annovar
│ └── humandb
├── cryoem
├── db
│ ├── annovar
│ ├── blast
│ ├── busco
│ └── Pfam
└── genomes
├── 1000Genomes
├── 10xgenomics
├── Aedes_aegypti
├── Bos_taurus
├── Chelonoidis_nigra
├── Danio_rerio
├── Drosophila_melanogaster
├── Gallus_gallus
├── hisat2
├── Homo_sapiens
├── Macaca_mulatta
├── Mus_musculus
├── Monodelphis_domestica
├── PhiX
└── Saccharomyces_cerevisiae
└── tmp
└── hisat2
└── mouse
If you would like us to host a dataset, or have questions about what is currently available, please contact us.
YCGA Data
Data associated with YCGA projects and sequencers are located on the YCGA storage system, accessible at /gpfs/ycga.
For more information on accessing this data, as well as sequencing data retention policies, see the YCGA Data documentation.
Storage
McCleary has access to a number of GPFS filesystems. /vast/palmer is McCleary's primary filesystem, where Home and Scratch60 directories are located. Every group on McCleary also has access to a Project allocation on the Gibbs filesystem at /gpfs/gibbs. For more details on the different storage spaces, see our Cluster Storage documentation.
You can check your current storage usage and limits by running the getquota command. Your ~/project and ~/palmer_scratch directories are shortcuts. Get a list of the absolute paths to your directories with the mydirectories command. If you want to share data in your Project or Scratch directory, see the permissions page.
For information on data recovery, see the Backups and Snapshots documentation.
Warning
Files stored in palmer_scratch are purged if they are older than 60 days. You will receive an email alert one week before they are deleted. Artificially extending scratch file expiration is forbidden without explicit approval from the YCRC. Please purchase storage if you need additional longer-term storage.
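A hedged sketch for spotting at-risk files before the purge: list anything in your scratch space not modified in roughly 60 days. (The path is the shortcut named above; the purge policy may key on a different timestamp than mtime, so treat this as an approximation.)

```shell
# Approximate check: files in scratch untouched (by mtime) for 60+ days.
scratch="$HOME/palmer_scratch"            # shortcut path from the docs
find "$scratch" -type f -mtime +60 2>/dev/null | head -n 20
```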
Partition | Root Directory | Storage | File Count | Backups | Snapshots |
---|---|---|---|---|---|
home | /vast/palmer/home.mccleary | 125GiB/user | 500,000 | Yes | >=2 days |
project | /gpfs/gibbs/project | 1TiB/group, increase to 4TiB on request | 5,000,000 | No | >=2 days |
scratch | /vast/palmer/scratch | 10TiB/group | 15,000,000 | No | No |