
Milgram

Milgram is named for Dr. Stanley Milgram, a psychologist who researched the behavioral motivations behind social awareness in individuals and obedience to authority figures. He conducted several famous experiments during his professorship at Yale University including the lost-letter experiment, the small-world experiment, and the Milgram experiment on obedience to authority figures.

Milgram is a HIPAA-aligned Department of Psychology cluster intended for use on projects that may involve sensitive data. This applies to both storage and computation. If you have any questions about this policy, please contact us.

Info

Connections to Milgram can only be made from the Yale VPN (access.yale.edu), even if you are already on campus (YaleSecure or ethernet). See our VPN page for setup instructions. If your group has a workstation (see list), you can connect using one of those.


Partitions and Hardware

Milgram is made up of several kinds of compute nodes. We group them into (sometimes overlapping) Slurm partitions meant to serve different purposes. By combining the --partition and --constraint Slurm options you can more finely control what nodes your jobs can run on.
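
For example, assuming a batch script named job.sh (a placeholder for your own script), the following submission restricts the job to the broadwell nodes in the short partition, using the feature names listed in the node tables below:

```bash
sbatch --partition=short --constraint=broadwell job.sh
```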

Public Partitions

See each section below for more information about the available common-use partitions.

short

Use the short partition for most batch jobs. This is the default if you don't specify one with --partition.

Request Defaults

Unless otherwise specified, your jobs will run with the following srun and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the short partition are subject to the following limits:

| Limit | Value |
|---|---|
| Max job time limit | 06:00:00 |
| Maximum CPUs per group | 1158 |
| Maximum memory per group | 10176G |
| Maximum CPUs per user | 772 |
| Maximum memory per user | 6784G |

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Nodes | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
|---|---|---|---|---|
| 48 | E5-2660_v4 | 28 | 247 | broadwell, E5-2660_v4 |
| 8 | E5-2660_v3 | 20 | 121 | haswell, E5-2660_v3, oldest |
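
Putting the defaults, limits, and node sizes above together, a batch script for the short partition might begin with a header like this sketch (my_analysis.sh is a placeholder for your own program):

```bash
#!/bin/bash
#SBATCH --partition=short      # the default partition, shown here for clarity
#SBATCH --time=04:00:00        # within the 06:00:00 limit
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=8G       # 4 x 8G = 32G total, fits on either node type above

./my_analysis.sh
```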

interactive

Use the interactive partition for jobs that require ongoing interaction, such as exploratory analyses or debugging compilations.

Request Defaults

Unless otherwise specified, your jobs will run with the following srun and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the interactive partition are subject to the following limits:

| Limit | Value |
|---|---|
| Max job time limit | 06:00:00 |
| Maximum CPUs per user | 4 |
| Maximum memory per user | 30G |
| Maximum running jobs per user | 1 |
| Maximum submitted jobs per user | 1 |

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Nodes | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
|---|---|---|---|---|
| 2 | E5-2660_v3 | 20 | 121 | haswell, E5-2660_v3, oldest |
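
One common way to work interactively under Slurm is srun with a pseudo-terminal; for example, this sketch starts a two-hour shell within the per-user limits above:

```bash
srun --pty --partition=interactive --time=02:00:00 --cpus-per-task=2 --mem-per-cpu=5G bash
```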

development

Use the development partition for jobs where you are interactively developing code.

Request Defaults

Unless otherwise specified, your jobs will run with the following srun and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Nodes | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
|---|---|---|---|---|
| 1 | E5-2660_v3 | 20 | 121 | haswell, E5-2660_v3, oldest |

education

Use the education partition for course work.

Request Defaults

Unless otherwise specified, your jobs will run with the following srun and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the education partition are subject to the following limits:

| Limit | Value |
|---|---|
| Max job time limit | 06:00:00 |

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Nodes | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
|---|---|---|---|---|
| 2 | E5-2660_v3 | 20 | 121 | haswell, E5-2660_v3, oldest |

gpu

Use the gpu partition for jobs that make use of GPUs. You must request GPUs explicitly with the --gres option in order to use them. For example, --gres=gpu:rtx2080ti:2 would request 2 RTX 2080 Ti GPUs per node.

Request Defaults

Unless otherwise specified, your jobs will run with the following srun and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gres option.

Job Limits

Jobs submitted to the gpu partition are subject to the following limits:

| Limit | Value |
|---|---|
| Max job time limit | 7-00:00:00 |

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Nodes | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
|---|---|---|---|---|---|---|---|
| 5 | 6240 | 36 | 372 | rtx2080ti | 4 | 11 | cascadelake, 6240 |
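
As a sketch, a batch script requesting one of the RTX 2080 Ti GPUs listed above might look like the following (train.py is a hypothetical workload; load whatever software environment it needs before running it):

```bash
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --time=1-00:00:00           # within the 7-day limit
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=8G
#SBATCH --gres=gpu:rtx2080ti:1      # GPUs must be requested explicitly

python train.py
```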

long

Use the long partition for jobs that need a longer runtime than short allows.

Request Defaults

Unless otherwise specified, your jobs will run with the following srun and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the long partition are subject to the following limits:

| Limit | Value |
|---|---|
| Max job time limit | 2-00:00:00 |

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Nodes | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
|---|---|---|---|---|
| 48 | E5-2660_v4 | 28 | 247 | broadwell, E5-2660_v4 |
| 8 | E5-2660_v3 | 20 | 121 | haswell, E5-2660_v3, oldest |
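
Time limits longer than a day use Slurm's day-hour notation. For example, this sketch requests a day and a half on long, within the 2-day limit above (job.sh is a placeholder for your own batch script):

```bash
sbatch --partition=long --time=1-12:00:00 job.sh
```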

verylong

Use the verylong partition for jobs that need a longer runtime than long allows.

Request Defaults

Unless otherwise specified, your jobs will run with the following srun and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the verylong partition are subject to the following limits:

| Limit | Value |
|---|---|
| Max job time limit | 7-00:00:00 |

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Nodes | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
|---|---|---|---|---|
| 48 | E5-2660_v4 | 28 | 247 | broadwell, E5-2660_v4 |
| 8 | E5-2660_v3 | 20 | 121 | haswell, E5-2660_v3, oldest |

scavenge

Use the scavenge partition to run preemptable jobs on more resources than normally allowed. For more information about scavenge, see the Scavenge documentation.

Request Defaults

Unless otherwise specified, your jobs will run with the following srun and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gres option.

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Nodes | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
|---|---|---|---|---|---|---|---|
| 48 | E5-2660_v4 | 28 | 247 | | | | broadwell, E5-2660_v4 |
| 5 | 6240 | 36 | 372 | rtx2080ti | 4 | 11 | cascadelake, 6240 |
| 10 | E5-2660_v3 | 20 | 121 | | | | haswell, E5-2660_v3, oldest |
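
Because scavenge jobs can be preempted at any time, one common pattern is to ask Slurm to requeue them automatically with --requeue. This is only a sketch (my_analysis.sh is a placeholder), and your workload should be able to restart or resume safely if it is preempted:

```bash
#!/bin/bash
#SBATCH --partition=scavenge
#SBATCH --time=1-00:00:00
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=5G
#SBATCH --requeue              # return the job to the queue if it is preempted

./my_analysis.sh
```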

Storage

/gpfs/milgram is Milgram's primary filesystem where home, project, and scratch60 directories are located. For more details on the different storage spaces, see our Cluster Storage documentation.

You can check your current storage usage and limits by running the getquota command. Note that the per-user usage breakdown only updates once daily.

Warning

Files stored in scratch60 are purged if they are older than 60 days. You will receive an email alert one week before they are deleted.

| Partition | Root Directory | Storage | File Count | Backups |
|---|---|---|---|---|
| home | /gpfs/milgram/home | 20GiB/user | 500,000 | Yes |
| project | /gpfs/milgram/project | varies | varies | No |
| scratch60 | /gpfs/milgram/scratch60 | varies | 5,000,000 | No |

Last update: August 5, 2020