Grace

Grace is a shared-use resource for the Faculty of Arts and Sciences (FAS). It consists of a variety of compute nodes networked over low-latency InfiniBand and mounts several shared filesystems.

The Grace cluster is named for the computer scientist and United States Navy Rear Admiral Grace Murray Hopper, who received her Ph.D. in Mathematics from Yale in 1934.


Access the Cluster

Once you have an account, the cluster can be accessed via ssh or through the Open OnDemand web portal.
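For example, assuming the standard login address is grace.hpc.yale.edu (check the access documentation for the exact hostname), a connection from a terminal looks like:

# log in to a Grace login node; replace <netid> with your Yale netid
ssh <netid>@grace.hpc.yale.edu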

System Status and Monitoring

For system status messages and the schedule for upcoming maintenance, please see the system status page. For a current node-level view of job activity, see the cluster monitor page (VPN only).

Partitions and Hardware

Grace is made up of several kinds of compute nodes. We group them into (sometimes overlapping) Slurm partitions meant to serve different purposes. By combining the --partition and --constraint Slurm options you can more finely control what nodes your jobs can run on.
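For example, the following sketch submits a hypothetical batch script (my_job.sh is a placeholder) only to Cascade Lake nodes in the day partition; the partition and feature names come from the tables below:

# restrict a job to nodes carrying the cascadelake feature in the day partition
sbatch --partition=day --constraint=cascadelake my_job.sh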

Job Submission Rate Limits

Job submissions are limited to 200 jobs per hour. See the Rate Limits section on the Common Job Failures page for more info.

Public Partitions

See the sections below for more information about the available common-use partitions.

Use the day partition for most batch jobs. This is the default if you don't specify one with --partition.

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the day partition are subject to the following limits:

Limit Value
Maximum job time limit 1-00:00:00
Maximum CPUs per group 2500
Maximum CPUs per user 1000

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
60 8268 48 356 cascadelake, avx2, avx512, 8268, nogpu, standard, common, bigtmp
107 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, common, bigtmp
78 E5-2660_v4 28 245 broadwell, avx2, E5-2660_v4, nogpu, standard, common
52 E5-2660_v3 20 119 haswell, avx2, E5-2660_v3, nogpu, standard, common, oldest
1 E5-2660_v3 20 119 haswell, avx2, E5-2660_v3, nogpu, standard, common, bigtmp, oldest
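Putting the defaults and limits above together, a minimal day-partition batch script might look like the following sketch (the job name and executable are placeholders):

#!/bin/bash
#SBATCH --job-name=my_day_job     # placeholder job name
#SBATCH --partition=day           # the default partition, shown here for clarity
#SBATCH --time=12:00:00           # must stay under the 1-00:00:00 partition limit
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=5120        # the partition default, in MiB

./my_program                      # placeholder executable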

Use the interactive partition for jobs that require ongoing interaction, for example exploratory analyses or debugging compilations.

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the interactive partition are subject to the following limits:

Limit Value
Maximum job time limit 06:00:00
Maximum CPUs per user 4
Maximum memory per user 32G
Maximum running jobs per user 1
Maximum submitted jobs per user 1

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
1 6126 24 174 skylake, avx2, avx512, 6126, nogpu, standard, common
2 E5-2660_v3 20 119 haswell, avx2, E5-2660_v3, nogpu, standard, common, oldest
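A common way to start such a session is srun with a pseudo-terminal. A sketch that stays within the limits above:

# 2-hour interactive shell with 4 CPUs and 16 GiB of memory
srun --partition=interactive --time=02:00:00 --cpus-per-task=4 --mem=16G --pty bash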

Use the week partition for jobs that need a longer runtime than day allows.

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the week partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00
Maximum CPUs per group 250
Maximum CPUs per user 108

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
25 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, common, bigtmp

Use the transfer partition to stage data for your jobs to and from cluster storage.

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the transfer partition are subject to the following limits:

Limit Value
Maximum job time limit 1-00:00:00
Maximum running jobs per user 2
Maximum CPUs per job 1

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
2 E5-2660_v3 20 119 haswell, avx2, E5-2660_v3, nogpu, standard, common, oldest
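For example, a sketch of a transfer job that stages data into project storage with rsync (the remote host and all paths are placeholders):

#!/bin/bash
#SBATCH --partition=transfer
#SBATCH --time=06:00:00           # within the 1-00:00:00 limit
#SBATCH --ntasks=1                # transfer jobs are limited to 1 CPU

# copy a dataset from an external host into your project directory
rsync -av <user>@<remote.host>:/path/to/data/ /gpfs/loomis/project/<group>/<netid>/data/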

Use the gpu partition for jobs that make use of GPUs. You must request GPUs explicitly with the --gpus option in order to use them. For example, --gpus=gtx1080ti:2 would request 2 GeForce GTX 1080Ti GPUs per node.

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the gpu partition are subject to the following limits:

Limit Value
Maximum job time limit 2-00:00:00
Maximum GPUs per user 24

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
4 6240 36 370 v100 4 16 cascadelake, avx2, avx512, 6240, doubleprecision, common
5 6240 36 181 rtx2080ti 4 11 cascadelake, avx2, avx512, 6240, singleprecision, common, bigtmp
6 5222 8 181 rtx5000 4 16 cascadelake, avx2, avx512, 5222, doubleprecision, common, bigtmp
2 6136 24 90 v100 2 16 skylake, avx2, avx512, 6136, doubleprecision, common, bigtmp
6 E5-2660_v4 28 245 p100 1 16 broadwell, avx2, E5-2660_v4, doubleprecision, common
3 E5-2660_v3 20 119 k80 4 12 haswell, avx2, E5-2660_v3, doubleprecision, common, oldest
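For example, a sketch of a batch request for two RTX 2080 Ti GPUs (the GPU type is taken from the table above; the executable is a placeholder):

#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --time=1-00:00:00         # within the 2-00:00:00 limit
#SBATCH --gpus=rtx2080ti:2        # GPUs are only allocated if explicitly requested
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=5120

./my_gpu_program                  # placeholder executable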

Use the gpu_devel partition to debug jobs that make use of GPUs, or to develop GPU-enabled code.

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the gpu_devel partition are subject to the following limits:

Limit Value
Maximum job time limit 04:00:00
Maximum CPUs per user 10
Maximum submitted jobs per user 1

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
1 6240 36 370 v100 4 16 cascadelake, avx2, avx512, 6240, doubleprecision, common

Use the bigmem partition for jobs that have memory requirements other partitions can't handle.

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the bigmem partition are subject to the following limits:

Limit Value
Maximum job time limit 1-00:00:00
Maximum CPUs per user 40
Maximum memory per user 1500G

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
1 6240 36 1505 cascadelake, avx2, avx512, 6240, nogpu, standard, common, bigtmp
2 6240 36 1505 cascadelake, avx2, avx512, 6240, nogpu, common, bigtmp
2 6234 16 1505 cascadelake, avx2, avx512, nogpu, 6234, common, bigtmp
2 E7-4820_v4 40 1505 broadwell, avx2, E7-4820_v4, nogpu, common
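For example, a sketch of a large-memory request that stays within the limits above (the executable is a placeholder):

#!/bin/bash
#SBATCH --partition=bigmem
#SBATCH --time=12:00:00           # within the 1-00:00:00 limit
#SBATCH --cpus-per-task=16
#SBATCH --mem=1000G               # whole-job memory, under the 1500G per-user limit

./my_memory_intensive_program     # placeholder executable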

Use the mpi partition for tightly-coupled parallel programs that make efficient use of multiple nodes. See our MPI documentation if your workload fits this description.

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --exclusive --mem=92160

Job Limits

Jobs submitted to the mpi partition are subject to the following limits:

Limit Value
Maximum job time limit 1-00:00:00
Maximum nodes per group 48
Maximum nodes per user 32

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
132 6136 24 90 hdr, skylake, avx2, avx512, 6136, nogpu, standard, common, bigtmp
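For example, a sketch of a multi-node MPI job (the executable is a placeholder, assumed to be built against an MPI library available on the cluster):

#!/bin/bash
#SBATCH --partition=mpi
#SBATCH --time=12:00:00           # within the 1-00:00:00 limit
#SBATCH --nodes=4                 # whole nodes; --exclusive is the partition default
#SBATCH --ntasks-per-node=24      # one task per core on the 24-core nodes above

srun ./my_mpi_program             # placeholder executable, launched across all tasks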

Use the scavenge partition to run preemptable jobs on more resources than normally allowed. For more information about scavenge, see the Scavenge documentation.

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the scavenge partition are subject to the following limits:

Limit Value
Maximum job time limit 1-00:00:00
Maximum CPUs per user 10000

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
60 8268 48 356 cascadelake, avx2, avx512, 8268, nogpu, standard, common, bigtmp
80 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, pi, bigtmp
1 6240 36 1505 cascadelake, avx2, avx512, 6240, nogpu, standard, common, bigtmp
135 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, common, bigtmp
2 6240 36 1505 cascadelake, avx2, avx512, 6240, nogpu, common, bigtmp
20 8260 96 181 cascadelake, avx2, avx512, 8260, nogpu, pi
4 6240 36 370 v100 4 16 cascadelake, avx2, avx512, 6240, doubleprecision, common
5 6240 36 181 rtx2080ti 4 11 cascadelake, avx2, avx512, 6240, singleprecision, common, bigtmp
2 6240 36 181 rtx2080ti 4 11 cascadelake, avx2, avx512, 6240, singleprecision, pi, bigtmp
8 6240 36 370 cascadelake, avx2, avx512, 6240, nogpu, pi, bigtmp
2 6234 16 1505 cascadelake, avx2, avx512, nogpu, 6234, common, bigtmp
132 6136 24 90 hdr, skylake, avx2, avx512, 6136, nogpu, standard, common, bigtmp
16 6136 24 90 hdr, skylake, avx2, avx512, 6136, nogpu, standard, pi, bigtmp
3 6142 32 181 skylake, avx2, avx512, 6142, nogpu, standard, pi, bigtmp
16 6136 24 90 edr, skylake, avx2, avx512, 6136, nogpu, standard, pi, bigtmp
2 6136 24 90 v100 2 16 skylake, avx2, avx512, 6136, doubleprecision, common, bigtmp
2 5122 8 181 rtx2080 4 8 skylake, avx2, avx512, 5122, singleprecision, pi
1 6136 24 749 skylake, avx2, avx512, 6136, nogpu, pi, bigtmp
9 6136 24 181 p100 4 16 skylake, avx2, avx512, 6136, doubleprecision, pi
81 E5-2660_v4 28 245 broadwell, avx2, E5-2660_v4, nogpu, standard, pi
80 E5-2660_v4 28 245 broadwell, avx2, E5-2660_v4, nogpu, standard, common
2 E7-4820_v4 40 1505 broadwell, avx2, E7-4820_v4, nogpu, common
1 E5-2637_v4 8 119 gtx1080ti 4 11 broadwell, avx2, E5-2637_v4, singleprecision, pi, bigtmp
2 E7-4820_v4 40 1505 broadwell, avx2, E7-4820_v4, nogpu, pi
1 E5-2660_v4 28 245 p100 1 16 broadwell, avx2, E5-2660_v4, doubleprecision, pi
6 E5-2660_v4 28 245 p100 1 16 broadwell, avx2, E5-2660_v4, doubleprecision, common
52 E5-2660_v3 20 119 haswell, avx2, E5-2660_v3, nogpu, standard, common, oldest
39 E5-2660_v3 20 119 haswell, avx2, E5-2660_v3, nogpu, standard, pi, oldest
19 E5-2660_v3 20 245 haswell, avx2, E5-2660_v3, nogpu, standard, pi, oldest
1 E5-2660_v3 20 119 haswell, avx2, E5-2660_v3, nogpu, standard, common, bigtmp, oldest
1 E7-4809_v3 32 2009 haswell, avx2, E7-4809_v3, nogpu, pi, oldest
8 E5-2660_v3 20 245 k80 2 12 haswell, avx2, E5-2660_v3, doubleprecision, pi, oldest
6 E5-2660_v3 20 119 k80 4 12 haswell, avx2, E5-2660_v3, doubleprecision, common, oldest
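Because scavenge jobs can be preempted at any time, one common pattern (a sketch, not a requirement) is to make them requeueable so Slurm resubmits them after preemption:

#!/bin/bash
#SBATCH --partition=scavenge
#SBATCH --time=12:00:00           # within the 1-00:00:00 limit
#SBATCH --cpus-per-task=8
#SBATCH --requeue                 # optional: put the job back in the queue if preempted

./my_program                      # placeholder; should tolerate being restarted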

Use the scavenge_gpu partition to run preemptable jobs on more GPU resources than normally allowed. For more information about scavenge, see the Scavenge documentation.

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the scavenge_gpu partition are subject to the following limits:

Limit Value
Maximum job time limit 1-00:00:00
Maximum GPUs per user 30

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
4 6240 36 370 v100 4 16 cascadelake, avx2, avx512, 6240, doubleprecision, common
5 6240 36 181 rtx2080ti 4 11 cascadelake, avx2, avx512, 6240, singleprecision, common, bigtmp
1 6240 36 181 rtx2080ti 4 11 cascadelake, avx2, avx512, 6240, singleprecision, pi, bigtmp
2 6136 24 90 v100 2 16 skylake, avx2, avx512, 6136, doubleprecision, common, bigtmp
2 5122 8 181 rtx2080 4 8 skylake, avx2, avx512, 5122, singleprecision, pi
9 6136 24 181 p100 4 16 skylake, avx2, avx512, 6136, doubleprecision, pi
1 E5-2637_v4 8 119 gtx1080ti 4 11 broadwell, avx2, E5-2637_v4, singleprecision, pi, bigtmp
1 E5-2660_v4 28 245 p100 1 16 broadwell, avx2, E5-2660_v4, doubleprecision, pi
6 E5-2660_v4 28 245 p100 1 16 broadwell, avx2, E5-2660_v4, doubleprecision, common
8 E5-2660_v3 20 245 k80 2 12 haswell, avx2, E5-2660_v3, doubleprecision, pi, oldest
6 E5-2660_v3 20 119 k80 4 12 haswell, avx2, E5-2660_v3, doubleprecision, common, oldest

Private Partitions

With few exceptions, jobs submitted to private partitions are not considered when calculating your group's Fairshare. Your group can purchase additional hardware for private use, which we will make available as a pi_groupname partition. These nodes are purchased by your group but supported and administered by us; we retire them once their vendor support expires. Compute nodes range from roughly $10K to upwards of $50K depending on your requirements. If you are interested in purchasing nodes for your group, please contact us.

PI Partitions

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_altonji partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
2 E5-2660_v3 20 119 haswell, avx2, E5-2660_v3, nogpu, standard, pi, oldest

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_anticevic partition are subject to the following limits:

Limit Value
Maximum job time limit 100-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
16 E5-2660_v4 28 245 broadwell, avx2, E5-2660_v4, nogpu, standard, pi
15 E5-2660_v3 20 245 haswell, avx2, E5-2660_v3, nogpu, standard, pi, oldest

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_anticevic_bigmem partition are subject to the following limits:

Limit Value
Maximum job time limit 100-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
1 E7-4809_v3 32 2009 haswell, avx2, E7-4809_v3, nogpu, pi, oldest

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_anticevic_gpu partition are subject to the following limits:

Limit Value
Maximum job time limit 100-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
8 E5-2660_v3 20 245 k80 2 12 haswell, avx2, E5-2660_v3, doubleprecision, pi, oldest

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_anticevic_z partition are subject to the following limits:

Limit Value
Maximum job time limit 100-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
3 E5-2660_v3 20 245 haswell, avx2, E5-2660_v3, nogpu, standard, pi, oldest

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_balou partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
9 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, pi, bigtmp
30 E5-2660_v4 28 245 broadwell, avx2, E5-2660_v4, nogpu, standard, pi

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_berry partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
1 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, pi, bigtmp

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=3840

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_chem_chase partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
8 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, pi, bigtmp
1 6240 36 181 rtx2080ti 4 11 cascadelake, avx2, avx512, 6240, singleprecision, pi, bigtmp

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_cowles partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00
Maximum CPUs per user 120
Maximum nodes per user 5

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
13 E5-2660_v3 20 119 haswell, avx2, E5-2660_v3, nogpu, standard, pi, oldest

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_cowles_nopreempt partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00
Maximum CPUs per user 120
Maximum nodes per user 5

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
10 E5-2660_v3 20 119 haswell, avx2, E5-2660_v3, nogpu, standard, pi, oldest

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_econ_io partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
6 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, pi, bigtmp

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_econ_lp partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
5 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, pi, bigtmp

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_esi partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00
Maximum CPUs per user 648

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
36 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, pi, bigtmp

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=3840

Job Limits

Jobs submitted to the pi_fedorov partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
12 6136 24 90 hdr, skylake, avx2, avx512, 6136, nogpu, standard, pi, bigtmp

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_gelernter partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
1 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, pi, bigtmp
1 E5-2660_v4 28 245 broadwell, avx2, E5-2660_v4, nogpu, standard, pi

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_gerstein partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
29 E5-2660_v3 20 119 haswell, avx2, E5-2660_v3, nogpu, standard, pi, oldest

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_glahn partition are subject to the following limits:

Limit Value
Maximum job time limit 100-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
1 E5-2660_v3 20 245 haswell, avx2, E5-2660_v3, nogpu, standard, pi, oldest

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=3840

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_hammes_schiffer partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
8 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, pi, bigtmp
1 6240 36 181 rtx2080ti 4 11 cascadelake, avx2, avx512, 6240, singleprecision, pi, bigtmp
16 6136 24 90 edr, skylake, avx2, avx512, 6136, nogpu, standard, pi, bigtmp
2 5122 8 181 rtx2080 4 8 skylake, avx2, avx512, 5122, singleprecision, pi
1 6136 24 749 skylake, avx2, avx512, 6136, nogpu, pi, bigtmp
1 E5-2637_v4 8 119 gtx1080ti 4 11 broadwell, avx2, E5-2637_v4, singleprecision, pi, bigtmp

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_hodgson partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
1 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, pi, bigtmp

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_holland partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
8 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, pi, bigtmp
2 E5-2660_v3 20 119 haswell, avx2, E5-2660_v3, nogpu, standard, pi, oldest

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_howard partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
1 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, pi, bigtmp

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_jetz partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
2 E5-2660_v4 28 245 broadwell, avx2, E5-2660_v4, nogpu, standard, pi

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_kaminski partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
7 E5-2660_v3 20 119 haswell, avx2, E5-2660_v3, nogpu, standard, pi, oldest

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_lederman partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
1 6254 36 1505 rtx4000,rtx8000,v100 4,2,2 8,48,16 cascadelake, avx2, avx512, 6254, pi, bigtmp

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=1952

Job Limits

Jobs submitted to the pi_levine partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
20 8260 96 181 cascadelake, avx2, avx512, 8260, nogpu, pi

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=3840

Job Limits

Jobs submitted to the pi_lora partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
4 6136 24 90 hdr, skylake, avx2, avx512, 6136, nogpu, standard, pi, bigtmp

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_mak partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
3 E5-2660_v4 28 245 broadwell, avx2, E5-2660_v4, nogpu, standard, pi

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_manohar partition are subject to the following limits:

Limit Value
Maximum job time limit 180-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
4 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, pi, bigtmp
8 E5-2660_v4 28 245 broadwell, avx2, E5-2660_v4, nogpu, standard, pi
2 E7-4820_v4 40 1505 broadwell, avx2, E7-4820_v4, nogpu, pi
1 E5-2660_v4 28 245 p100 1 16 broadwell, avx2, E5-2660_v4, doubleprecision, pi

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_ohern partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
2 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, pi, bigtmp
9 6136 24 181 p100 4 16 skylake, avx2, avx512, 6136, doubleprecision, pi
3 E5-2660_v4 28 245 broadwell, avx2, E5-2660_v4, nogpu, standard, pi

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_owen_miller partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
5 E5-2660_v4 28 245 broadwell, avx2, E5-2660_v4, nogpu, standard, pi

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_panda partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
1 6254 36 370 rtx2080ti 8 11 cascadelake, avx2, avx512, 6254, singleprecision, pi, bigtmp
2 6240 36 370 v100 4 16 cascadelake, avx2, avx512, 6240, doubleprecision, pi
3 6240 36 181 rtx2080ti 4 11 cascadelake, avx2, avx512, 6240, singleprecision, pi, bigtmp

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_poland partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
8 6240 36 370 cascadelake, avx2, avx512, 6240, nogpu, pi, bigtmp
10 E5-2660_v4 28 245 broadwell, avx2, E5-2660_v4, nogpu, standard, pi

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_polimanti partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
1 6240 36 181 cascadelake, avx2, avx512, 6240, nogpu, standard, pi, bigtmp

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_seto partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
3 6142 32 181 skylake, avx2, avx512, 6142, nogpu, standard, pi, bigtmp

Request Defaults

Unless specified, your jobs will run with the following options to srun and sbatch for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_tsmith partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
1 E5-2660_v3 20 119 haswell, avx2, E5-2660_v3, nogpu, standard, pi, oldest

Storage

Grace has access to a number of filesystems. /gpfs/loomis is Grace's primary filesystem where home, project, and scratch60 directories are located. For more details on the different storage spaces, see our Cluster Storage documentation.

You can check your current storage usage and limits by running the getquota command. Your ~/project and ~/scratch60 directories are shortcuts to your spaces on the project and scratch60 filesystems; get a list of the absolute paths to your directories with the mydirectories command. If you want to share data in your project or scratch directory, see the permissions page.
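For example (the exact output format will vary; these are simply the commands named above):

# show storage usage and quotas for you and your group
getquota

# list the absolute paths behind your home, project, and scratch60 shortcuts
mydirectories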

Warning

Files stored in scratch60 are purged if they are older than 60 days. You will receive an email alert one week before they are deleted.
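If you want to see which files are at risk, a standard find command is one option. A sketch, assuming the purge is based on modification time and that your scratch space lives under the path shown in the table below (replace <group> and <netid> with your own):

# list files in your scratch60 space not modified in the last 50 days
find /gpfs/loomis/scratch60/<group>/<netid> -type f -mtime +50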

Partition Root Directory Storage File Count Backups
home /gpfs/loomis/home.grace 125GiB/user 500,000 Yes
project /gpfs/loomis/project 1TiB/group, increase to 4TiB on request 5,000,000 No
scratch60 /gpfs/loomis/scratch60 20TiB/group 15,000,000 No

Last update: April 28, 2021