
McCleary

Beatrix McCleary Hamburg

McCleary is a shared-use resource for the Yale School of Medicine (YSM), life science researchers elsewhere on campus, and projects related to the Yale Center for Genome Analysis (YCGA). It consists of a variety of compute nodes networked over Ethernet and mounts several shared filesystems.

McCleary is named for Beatrix McCleary Hamburg, who received her medical degree in 1948 and was the first female African American graduate of Yale School of Medicine. The McCleary HPC cluster is Yale's first direct-to-chip liquid cooled cluster, moving the YCRC and the Yale research computing community into a more environmentally friendly future.


NIH Controlled-Access Data and Repositories

Effective January 25, 2025, new or renewed Data Use Certifications for NIH Controlled-Access Data and Repositories must adhere to the NIH Security Best Practices for Users of Controlled-Access Data. While YCRC's new Hopper cluster is in development, Yale has completed the appropriate documentation for McCleary to be used as an approved location for NIH Controlled-Access Data and Repositories, with certain conditions. See our NIH Controlled-Access Data documentation for more information and to request access.

Access the Cluster

Once you have an account, the cluster can be accessed via ssh or through the Open OnDemand web portal.
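
For example, from a terminal you can connect with a command like the one below, where netid is your Yale NetID (the hostname shown assumes the standard mccleary.ycrc.yale.edu login address):

ssh netid@mccleary.ycrc.yale.edu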

System Status and Monitoring

For system status messages and the schedule for upcoming maintenance, please see the system status page. For a current node-level view of job activity, see the cluster monitor page (VPN only).

Installed Applications

A large number of software and applications are installed on our clusters. These are made available to researchers via software modules.
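
As a quick illustration, a typical module workflow from the command line looks like this (the exact module names and version suffixes on McCleary may differ; use module avail to see what is actually installed):

module avail python        # search for matching modules
module load Python/3.10.8  # load a specific version (name shown is illustrative)
module list                # confirm what is currently loaded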

Available Software Modules (click to expand)
Package Versions
ACTC 1.1,1.1
ADMIXTURE 1.3.0
AFNI 23.2.08,2022.1.14,2023.1.01,2023.1.07,24.1.22
ANTLR 2.7.7
ANTs 2.3.5
APBS 1.4.2.1,3.4.1.Linux
APR 1.7.0,1.7.5
APR-util 1.6.1,1.6.3
ASE 3.22.1
ATK 2.36.0,2.38.0
AUGUSTUS 3.4.0
Abseil 20230125.2
AdapterRemoval 2.3.2
AlphaFold 2.2.3,2.2.3,2.2.4,2.2.4,2.3.2,2.3.2,3.0.0
AmberTools 23.6
Archive-Zip 1.68,1.68
AreTomo 1.3.4
AreTomo2 1.0.0
AreTomo3 2.0.6beta
Armadillo 10.2.1,11.4.3,11.4.3
Arrow 0.17.1,0.17.1,6.0.0,11.0.0,14.0.1,16.1.0
Aspera-CLI 3.9.6.1467.159c5b1
Aspera-Connect 4.2.4.265
AuthentiCT 1.0.1
Autoconf 2.69,2.71,2.72
Automake 1.16.2,1.16.5,1.16.5
Autotools 20200321,20220317,20231222
BBMap 38.90
BCFtools 1.11,1.16,1.21
BEDOPS 2.4.41
BEDTools 2.30.0
BGEN-enkre 1.1.7
BLAST 2.2.26
BLAST+ 2.13.0,2.14.1,2.15.0
BLAT 3.5,3.5
BLIS 0.9.0,1.0
BLT 20220626
BWA 0.7.17,0.7.17,0.7.17
BamTools 2.5.1,2.5.1,2.5.2
BaseSpaceCLI 1.5.3
Bazel 3.7.2,5.4.1,6.1.0,6.3.1
Beast 2.6.3,2.6.3,2.6.7,2.7.4,2.7.6
BeautifulSoup 4.11.1
Bio-DB-BigFile 1.07,1.07
Bio-DB-HTS 3.01,3.01
BioPP 2.4.1
BioPerl 1.7.8,1.7.8
Biopython 1.78,1.79,1.81,1.83
Bismark 0.24.0
Bison 3.0.4,3.0.4,3.0.5,3.7.1,3.7.1,3.8.2,3.8.2,3.8.2
Blender 4.0.1,4.2.1
Block 1.5.3
Blosc 1.21.0,1.21.3
Blosc2 2.8.0
Boost 1.74.0,1.74.0,1.74.0,1.74.0,1.74.0,1.81.0,1.81.0,1.81.0,1.83.0,1.85.0,1.86.0
Boost.MPI 1.81.0,1.81.0
Boost.Python 1.74.0,1.81.0
Boost.Python-NumPy 1.74.0,1.81.0
Bowtie 1.3.0,1.3.0,1.3.1
Bowtie2 2.3.4.3,2.4.2,2.4.2,2.5.1
Brotli 1.0.9,1.0.9
Brunsli 0.1
Bsoft 2.1.4
CAMPARI 4.0
CCP4 8.0.011,8.0.015
CD-HIT 4.8.1
CDO 2.2.2
CESM 2.1.3,2.1.3
CESM-deps 2,2
CFITSIO 3.48,4.2.0
CGAL 4.14.3,4.14.3,5.2,5.2.4,5.5.2
CLHEP 2.4.4.0,2.4.6.4
CMake 3.18.4,3.18.4,3.20.1,3.24.3,3.29.3
COMSOL 5.2a,5.2a
CONN 22a
CP2K 8.1
CPPE 0.3.1
CREST 3.0.1,3.0.2
CTFFIND 4.1.14,4.1.14,4.1.14,4.1.14,4.1.14
CUDA 10.1.243,11.1.1,11.3.1,11.8.0,12.0.0,12.1.1,12.6.0
CUDAcore 11.1.1,11.3.1
CUnit 2.1
Cartopy 0.20.3,0.22.0
Catch2 2.13.10
Cbc 2.10.5
CellRanger 3.0.2,6.1.2,7.0.0,7.0.1,7.1.0,7.2.0,8.0.1
CellRanger-ARC 2.0.2
Cereal 1.3.2,1.3.2
Cgl 0.60.7
CharLS 2.2.0,2.4.2
CheMPS2 1.8.12
Check 0.15.2,0.15.2
Chimera 1.16
ChimeraX 1.6.1,1.7,1.8
Clang 11.0.1,13.0.1,15.0.5,16.0.4,16.0.4
Clp 1.17.8
Code-Server 4.7.0,4.7.0,4.16.1,4.17.0
CoinUtils 2.11.9
Compress-Raw-Zlib 2.202,2.202
CoordgenLibs 3.0.2
Coot 0.9.7,0.9.8.6
CppUnit 1.15.1
Cufflinks 20190706
Cython 0.29.22,3.0.8,3.0.10
Cytoscape 3.9.1
DB 18.1.40,18.1.40
DBD-mysql 4.050,4.050
DB_File 1.855
DBus 1.13.18,1.15.2
DIAMOND 2.0.15,2.1.7
DMTCP 3.0.0,3.0.0
DSSP 4.2.1,4.4.7
Dice 20240101
Doxygen 1.8.20,1.9.5
EDirect 20.4.20230912,20.5.20231006,22.8.20241011
EIGENSOFT 7.2.1
ELPA 2020.11.001,2020.11.001,2021.11.001,2022.05.001
EMAN 1.9
EMAN2 2.91,2.99.47
EMBOSS 6.6.0
ESM-2 2.0.0
ESMF 8.3.0,8.3.0
EasyBuild 4.6.2,4.7.0,4.7.1,4.7.2,4.8.0,4.8.1,4.8.2,4.9.0,4.9.1,4.9.2,4.9.3
Eigen 3.3.8,3.3.9,3.4.0,3.4.0,3.4.0
El-MAVEN 0.12.1beta
Emacs 28.1,28.2
ExifTool 12.58,12.70
Exodus 20240403,20240403
FASTX-Toolkit 0.0.14
FFTW 2.1.5,2.1.5,2.1.5,2.1.5,3.3.8,3.3.8,3.3.8,3.3.8,3.3.8,3.3.10,3.3.10,3.3.10,3.3.10,3.3.10
FFTW.MPI 3.3.10,3.3.10,3.3.10
FFmpeg 4.3.1,5.1.2
FHI-aims 231212_1
FLAC 1.3.3,1.4.2
FLASH 2.2.00
FLTK 1.3.5,1.3.8
FRE-NCtools 2024.05
FSL 6.0.5.2,6.0.5.2,6.0.7.9
FTGL 2.3,2.4.0
Faiss 1.7.4
FastME 2.1.6.3
FastQC 0.11.9,0.12.1
FastUniq 1.1
Fiji 2.14.0,20221201,20230801
Fiona 1.9.2
Flask 2.2.3
FlexiBLAS 3.2.1,3.2.1,3.4.4
FragGeneScan 1.31
FreeImage 3.18.0,3.18.0
FreeSurfer dev,dev,7.3.2,7.4.1
FreeXL 2.0.0
FriBidi 1.0.10,1.0.12
GATK 3.8,4.2.0.0,4.2.6.1,4.4.0.0,4.5.0.0,4.6.0.0
GCC 10.2.0,12.2.0,13.3.0
GCCcore 7.3.0,10.2.0,12.2.0,13.3.0
GCTA 1.94.1
GConf 3.2.6
GDAL 3.2.1,3.6.2
GDB 10.1,13.2
GDCM 3.0.21
GDRCopy 2.1,2.3,2.3.1,2.4.1
GEOS 3.9.1,3.11.1
GL2PS 1.4.2,1.4.2
GLM 0.9.9.8
GLPK 4.65,5.0
GLib 2.66.1,2.75.0
GLibmm 2.49.7
GMP 6.2.0,6.2.1
GObject-Introspection 1.66.1,1.66.1,1.74.0
GRASS 8.2.0
GROMACS 2021.5,2023.3
GSEA 4.3.2
GSL 2.5,2.6,2.6,2.6,2.7,2.7,2.7
GST-libav 1.18.4,1.22.1
GST-plugins-bad 1.22.5
GST-plugins-base 1.18.4,1.18.4,1.22.1,1.22.1
GST-plugins-good 1.18.4,1.22.1
GStreamer 1.18.4,1.18.4,1.22.1,1.22.1
GTK+ 3.24.23
GTK2 2.24.33
GTK3 3.24.35
GTK4 4.11.3
GTS 0.7.6
Garfield++ 5.0
Gaussian 16,16
Gctf 1.18,1.18
Gdk-Pixbuf 2.40.0,2.40.0,2.42.10
Geant4 10.7.1
Geant4-data 11.3
GenomeTools 1.6.1
Ghostscript 9.53.3,10.0.0
GitPython 3.1.31
Globus-CLI 3.18.0,3.30.1
GnuTLS 3.7.8
Go 1.17.6,1.21.1,1.21.4,1.22.1
Grace 5.1.25
Gradle 8.6
Graphene 1.10.8
GraphicsMagick 1.3.36
Graphviz 2.47.0
Guile 2.2.7,3.0.9,3.0.9
Gurobi 9.1.2,10.0.3
HDF 4.2.15,4.2.15
HDF5 1.10.7,1.10.7,1.10.7,1.10.7,1.14.0,1.14.0,1.14.0,1.14.0
HDFView 3.3.1
HH-suite 3.3.0,3.3.0,3.3.0
HISAT-3N 20221013
HISAT2 2.2.1
HMMER 3.3.2,3.3.2,3.4
HOOMD-blue 4.9.1,4.9.1
HPCG 3.1,3.1,3.1,3.1
HPL 2.3,2.3,2.3,2.3
HTSeq 0.13.5
HTSlib 1.11,1.11,1.12,1.16,1.17,1.21
HarfBuzz 2.6.7,5.3.1
Harminv 1.4.1,1.4.2
HepMC3 3.2.6
Highway 1.0.3
HyPhy 2.5.62
Hypre 2.20.0,2.27.0
ICU 67.1,72.1,75.1
IDBA-UD 1.1.3
IGV 2.16.0,2.16.2,2.17.4,2.19.1
IMOD 4.11.15,4.11.16,4.11.24_RHEL7,4.11.24,4.12.56_RHEL7,4.12.62_RHEL8
IOR 4.0.0,4.0.0
IPython 7.18.1,8.14.0
IQ-TREE 2.1.2
ISA-L 2.30.0
ISL 0.23,0.26
ImageMagick 7.0.10,7.1.0
Imath 3.1.6
Infernal 1.1.4
IsoNet 0.2.1
JAGS 4.3.0,4.3.2
Jansson 2.14
JasPer 2.0.24,4.0.0
Java 1.8.345,8.345,11.0.16,17.0.4,21.0.2
JsonCpp 1.9.4,1.9.5
Judy 1.0.5,1.0.5
Julia 1.8.2,1.8.5,1.9.2,1.10.0,1.10.2,1.10.4,1.11.1
Jupyter-bundle 20230823
JupyterHub 4.0.1
JupyterLab 2.2.8,4.0.3
JupyterNotebook 7.0.3
KaHIP 3.14
Kalign 3.3.1,3.4.0
Kent_tools 411,461
Knitro 12.0.0,14.0.0
Kraken2 2.1.3
LAME 3.100,3.100
LAMMPS 2Aug2023,23Jun2022
LDC 0.17.6,1.25.1
LERC 4.0.0
LHAPDF 6.5.4
LLVM 11.0.0,14.0.6,15.0.5,16.0.4
LMDB 0.9.24,0.9.29
LSD2 2.2
LZO 2.10,2.10
Leptonica 1.83.0
LibSoup 3.0.8
LibTIFF 4.1.0,4.2.0,4.4.0
Libint 2.6.0
LittleCMS 2.11,2.14
Lua 5.4.2,5.4.4
M4 1.4.17,1.4.18,1.4.18,1.4.18,1.4.19,1.4.19,1.4.19
MACS2 2.2.7.1,2.2.9.1,2.2.9.1
MACS3 3.0.1
MAFFT 7.475,7.505
MAGeCK 0.5.9.5
MATIO 1.5.23
MATLAB 2018b,2020b,2022a,2022b,2023a,2023b
MCL 14.137
MCR R2019b.8,R2020b.5,R2021b.6,R2022a.6,R2023a
MDI 1.4.16
MEME 5.4.1
METIS 5.1.0,5.1.0,5.1.0
MINC 2.4.06
MMseqs2 13,14
MPB 1.11.1
MPC 1.2.1,1.3.1
MPFR 4.1.0,4.2.0
MPICH 4.2.1
MRIcron 1.0.20190902
MRtrix3 3.0.2
MUMPS 5.3.5,5.6.1
MUMmer 4.0.0rc1
MUSCLE 5.1
MadGraph5_aMC 2.9.16
MafFilter 1.3.1
Mako 1.1.3,1.2.4
MariaDB 10.5.8,10.11.2
Markdown 3.6
Mathematica 13.0.1
Maven 3.9.2
MaxBin 2.2.7
MaxQuant 2.4.2.0,2.4.2.0,2.6.1.0
Meep 1.24.0,1.26.0
Mercurial 5.7.1
Mesa 20.2.1,21.3.3,22.2.4
MeshLab 2023.12
Meson 0.55.3,0.62.1,0.64.0,1.3.1,1.4.0
Metal 2020
MitoGraph 3.0
Mono 6.8.0.105,6.8.0.123
MotionCor2 1.5.0,1.6.4
MotionCor3 1.0.1
MrBayes 3.2.6,3.2.7
MultiQC 1.10.1
NAG 29
NAMD 2.14,2.14,2.14,2.14
NASM 2.15.05,2.15.05
NBO 7.0
NCCL 2.8.3,2.8.4,2.10.3,2.16.2,2.16.2,2.16.2,2.18.3,2.23.4
NCO 5.2.1,5.2.1
NECI 20230620
NEdit 5.7
NGS 2.10.9
NIfTI 2.0.0
NLopt 2.6.2,2.6.2,2.7.0,2.7.1
NSPR 4.29,4.35
NSS 3.57,3.85
NVHPC 21.11,21.11,23.1,24.9
Net-core 3.1.101
NetLogo 6.4.0
Netpbm 10.86.41
Nextflow 22.10.6,23.04.2,23.10.1,24.04.2,24.04.4
Ninja 1.10.1,1.11.1,1.12.1
ORCA 5.0.3,5.0.3,5.0.4,5.0.4,6.0.0,6.0.1
OSU-Micro-Benchmarks 5.7,5.7,6.2,6.2
OligoArray 2.1
OligoArrayAux 3.8
OpenBLAS 0.3.12,0.3.21,0.3.21,0.3.27
OpenBabel 3.1.1
OpenCV 4.5.1,4.8.0
OpenEXR 2.5.5,3.1.5
OpenFOAM v2012,v2206,v2212
OpenJPEG 2.4.0,2.5.0
OpenLibm 0.7.5
OpenMM 7.5.0,7.5.1,7.5.1,7.5.1,7.7.0,8.0.0
OpenMPI 4.0.5,4.0.5,4.0.5,4.0.5,4.0.5,4.1.4,4.1.4,4.1.4
OpenPGM 5.2.122,5.2.122
OpenSSL 1.0,1.1,3
OpenSlide 3.4.1
OpenSlide-Java 0.12.4
OrthoFinder 2.5.4
Osi 0.108.8
PALEOMIX 1.3.8
PAML 4.10.7
PBZIP2 1.1.13
PCRE 8.44,8.45
PCRE2 10.35,10.40
PDBFixer 1.7
PEAR 0.9.11
PEET 1.15.0,1.16.0a
PETSc 3.15.0,3.17.4,3.20.3
PGI 18.10,18.10
PIPseeker 2.1.4
PKTOOLS 2.6.7.6,2.6.7.6
PLINK 1.9b_6.21,2_avx2_20221024
PLUMED 2.6.2,2.7.0,2.7.3,2.9.0,2.9.2
PMIx 5.0.2
POV-Ray 3.7.0.8,3.7.0.10
PRINSEQ 0.20.4
PROJ 7.2.1,9.1.1
PRRTE 3.0.5
PYTHIA 8.309
Pandoc 2.13,3.1.2
Pango 1.47.0,1.50.12
ParMETIS 4.0.3
ParaView 5.8.1,5.11.0
PartitionFinder 2.1.1
Perl 5.28.0,5.32.0,5.32.0,5.32.1,5.36.0,5.36.0,5.36.1,5.38.0,5.38.2
Perl-bundle-CPAN 5.36.1
Phenix 1.20.1,1.20.1
PhyloBayes 4.1e
Pillow 8.0.1,9.4.0
Pillow-SIMD 7.1.2,9.5.0
Pint 0.22
PnetCDF 1.12.2,1.12.3,1.13.0,1.13.0
PostgreSQL 13.2,15.2
PuLP 2.7.0
PyBLP 1.1.0
PyBerny 0.6.3
PyCairo 1.24.0
PyCharm 2022.3.2,2024.3.2
PyCheMPS2 1.8.12
PyGObject 3.44.1
PyInstaller 6.3.0
PyOpenGL 3.1.5,3.1.6
PyQt5 5.15.4,5.15.7
PySCF 2.4.0
PyTables 3.5.2,3.8.0
PyTorch 1.9.0,1.13.1,2.1.2,2.1.2
PyYAML 5.3.1,6.0
PycURL 7.45.2
Pylada-light 2023Oct13
Pysam 0.16.0.1,0.16.0.1,0.16.0.1,0.21.0
Python 2.7.18,2.7.18,3.8.6,3.8.6,3.10.8,3.10.8,3.10.8,3.10.8,3.12.3
Python-bundle-PyPI 2023.06,2024.06
QCA 2.3.5
QScintilla 2.11.6
QTLtools 1.3.1
Qhull 2020.2,2020.2
Qt5 5.14.2,5.15.7
Qt5Webkit 5.212.0,5.212.0
QtKeychain 0.13.2
QtPy 2.3.0
Qtconsole 5.4.0
QuPath 0.5.0,0.5.1
QuantumESPRESSO 6.8,7.0,7.2
Quip 1.1.8,1.1.8,20171217
Qwt 6.1.5,6.2.0
R 4.2.0,4.2.0,4.3.2,4.3.2,4.4.1,4.4.1
R-INLA 24.01.18
R-bundle-Bioconductor 3.15,3.16,3.18,3.19
R-bundle-CRAN 2023.12,2024.06
RDKit 2022.09.5
RE2 2023
RECON 1.08
RELION 3.0.8,3.1.4,3.1.4,3.1.4,4.0.0,4.0.1,4.0.1,5beta,5beta,5.0.0
RELION-composite-masks 5.0.0
RMBlast 2.11.0
ROOT 6.26.06,6.26.10
RSEM 1.3.3
RStudio 2022.07.2,2022.12.0,2024.04.2
RStudio-Server 2024.04.1+748
RapidJSON 1.1.0,1.1.0
Regenie 4.0
RepeatMasker 4.1.2
RepeatScout 1.0.6
ResMap 1.95
RevBayes 1.1.1,1.2.1,1.2.2,1.2.2
Rivet 3.1.9
Rmath 4.0.4,4.4.1
Rosetta 3.12
Ruby 2.7.2,3.0.5,3.2.2
Rust 1.52.1,1.65.0,1.70.0,1.75.0,1.78.0
SAMtools 1.11,1.11,1.16,1.16.1,1.18,1.20,1.21
SAS 9.4M8,9.4
SBGrid 2.11.2
SCOTCH 6.1.0,7.0.3
SCons 4.0.1,4.5.2
SDL2 2.0.14,2.26.3
SHAPEIT 2.r904.glibcv2.17
SHAPEIT4 4.2.2
SLEPc 3.15.0,3.17.2
SMRT-Link 11.1.0.166339,12.0.0
SOCI 4.0.3,4.0.3
SPAGeDi 1.5d
SPAdes 3.15.1,3.15.5
SPM 12.5_r7771
SQLite 3.33.0,3.39.4,3.45.3
SRA-Toolkit 2.10.9,3.0.10,3.1.1,3.1.1
STAR 2.7.6a,2.7.7a,2.7.8a,2.7.9a,2.7.11a,2.7.11a
STREAM 5.10
SWIG 4.0.2,4.1.1
Salmon 1.4.0
Sambamba 0.8.0
ScaFaCoS 1.0.1,1.0.4
ScaLAPACK 2.1.0,2.1.0,2.2.0,2.2.0,2.2.0
SciPy-bundle 2020.11,2020.11,2020.11,2020.11,2020.11,2021.05,2023.02,2024.05
Seaborn 0.12.2,0.13.2
Seq-Gen 1.3.4
SeqKit 2.3.1,2.8.1
Serf 1.3.9,1.3.9
Shapely 1.8.5.post1,2.0.1
Sherpa 3.0.0
Slicer 5.6.2
SpaceRanger 2.1.1
Spark 3.1.1,3.1.1,3.5.0,3.5.0,3.5.1,3.5.3,3.5.4
SpectrA 1.0.0,1.0.1
Stacks 2.59
Stata 17
StringTie 2.1.4
Subread 2.0.3
Subversion 1.14.0,1.14.3
SuiteSparse 5.8.1,5.13.0
Summovie 1.0.2
SuperLU_DIST 8.1.2
Szip 2.1.1,2.1.1
TOMO3D 01
TOPAS 3.9
TRF 4.09.1
TRUST4 1.0.7
TWL-NINJA 0.97
Tcl 8.6.10,8.6.12,8.6.14
TensorFlow 2.5.0,2.7.1,2.13.0,2.15.1
TensorRT 8.6.1
Tk 8.6.10,8.6.12
Tkinter 3.8.6,3.10.8
TopHat 2.1.2,2.1.2
TotalView 2023.3.10
TreeMix 1.13
Trilinos 13.4.1
Trim_Galore 0.6.7
Trimmomatic 0.39
UCC 1.1.0,1.3.0
UCC-CUDA 1.1.0,1.1.0,1.3.0
UCX 1.9.0,1.9.0,1.10.0,1.13.1,1.16.0
UCX-CUDA 1.10.0,1.13.1,1.13.1,1.13.1,1.16.0
UDUNITS 2.2.26,2.2.28
USEARCH 11.0.667
UnZip 6.0,6.0,6.0
Unblur 1.0.2
VASP 5.4.1,5.4.4,5.4.4,6.3.0,6.4.2
VASPsol 5.4.1
VCFtools 0.1.16
VDJtools 1.2.1
VEP 107,110,112,112.0
VESTA 3.5.8
VMD 1.9.4a57
VSCode 1.95.3,1.96.2,1.96.4
VTK 9.0.1,9.0.1,9.2.6
VTune 2023.2.0
Valgrind 3.16.1,3.21.0
ViennaRNA 2.5.1
Vim 9.0.1434
VisPy 0.12.2
Voro++ 0.4.6,0.4.6
WRF 4.4.1
Wannier90 3.1.0,3.1.0
Wayland 1.22.0
Waylandpp 1.0.0
WebKitGTK+ 2.40.4
X11 20201008,20221110
XCFun 2.1.1
XGBoost 2.1.1,2.1.1
XML-LibXML 2.0206,2.0208
XMedCon 0.25.0
XZ 5.2.5,5.2.7,5.4.5
Xerces-C++ 3.1.4,3.2.3,3.2.4
Xvfb 1.20.9,21.1.6
YODA 1.9.9
Yasm 1.3.0,1.3.0
Z3 4.8.10,4.10.2,4.12.2,4.12.2
ZeroMQ 4.3.3,4.3.4
Zip 3.0,3.0
aiohttp 3.8.5
alibuild 1.17.11
angsd 0.940
anndata 0.10.5.post1
annovar 2019Oct24,20200607
ant 1.10.9,1.10.12,1.10.12
archspec 0.1.2,0.2.0
aria2 1.35.0,1.36.0
arpack-ng 3.8.0,3.8.0,3.8.0
arrow-R 6.0.0.2,11.0.0.3,14.0.0.2,16.1.0
at-spi2-atk 2.38.0,2.38.0
at-spi2-core 2.38.0,2.46.0
attr 2.4.48,2.5.1
attrdict3 2.0.2
awscli 2.1.23,2.13.20,2.15.2
bases2Fastq v1.5.1,v1.5.1,v2.0.0
bcl2fastq2 2.20.0,2.20.0
beagle-lib 3.1.2,3.1.2,3.1.2,3.1.2,4.0.0,4.0.1
binutils 2.28,2.30,2.30,2.35,2.35,2.39,2.39,2.40,2.42,2.42
biswebnode 1.3.0
bokeh 2.2.3,2.2.3,3.2.1
boto3 1.20.13,1.26.163
breseq 0.35.5,0.38.0,0.38.1
bsddb3 6.2.9,6.2.9
bzip2 1.0.8,1.0.8,1.0.8
c-ares 1.19.1
cURL 7.55.1,7.72.0,7.86.0,7.86.0,8.7.1
cairo 1.16.0,1.16.0,1.17.4
ccache 4.6.3
cffi 1.16.0
code-server 4.91.1,4.95.3
configurable-http-proxy 4.5.5
cppy 1.2.1
cromwell 86
cryptography 41.0.1,42.0.8
cuDNN 8.0.5.39,8.2.1.32,8.7.0.84,8.8.0.121,8.9.2.26,9.5.0.50
cuTENSOR 1.7.0.1,2.0.2.5
cutadapt 3.4
cxxopts 3.0.0
cyrus-sasl 2.1.28
dSQ 1.05
dask 2021.2.0,2021.2.0,2023.7.1
dbus-glib 0.112
dcm2niix 1.0.20211006,1.0.20230411
dedalus 3.0.2
deepTools 3.5.1,3.5.5
deml 1.1.4
dftd4 3.4.0
dill 0.3.7
dlib 19.22,19.22,19.22
dorado 0.5.3
dotNET-Core 7.0.410
dotNET-SDK 3.1.300
double-conversion 3.1.5,3.2.1
dtcmp 1.1.2,1.1.4
ecBuild 3.8.0
ecCodes 2.31.0
einops 0.7.0
elbencho 2.0,3.0
elfutils 0.183,0.189
eman
enchant-2 2.3.3
ensmallen 2.21.1,2.21.1
exiv2 0.27.5,0.28.0
expat 2.2.5,2.2.9,2.4.9,2.6.2
expecttest 0.1.3
fastjet 3.4.0
fastjet-contrib 1.049
fastp 0.23.2
ffnvcodec 11.1.5.2
file 5.39,5.43
flatbuffers 1.12.0,23.1.4,23.5.26
flatbuffers-python 1.12,2.0,23.1.4,23.5.26
flex 2.6.3,2.6.4,2.6.4,2.6.4,2.6.4,2.6.4
flit 3.9.0,3.9.0
fmriprep 23.1.0,23.1.4,23.2.1,24.1.0
fontconfig 2.13.92,2.14.1
foss 2020b,2022b,2024a
fosscuda 2020b
freeglut 3.2.1,3.4.0
freetype 2.10.3,2.10.3,2.12.1
gc 8.0.4,8.2.2,8.2.4
gcccuda 2020b,2022b
gcloud 382.0.0,494.0.0
gettext 0.19.8.1,0.21,0.21,0.21.1,0.21.1,0.22.5,0.22.5
gfbf 2022b,2024a
gflags 2.2.2
giflib 5.2.1,5.2.1
git 2.28.0,2.30.0,2.38.1,2.45.1
git-lfs 3.2.0,3.5.1
glew 2.1.0,2.2.0
glib-networking 2.72.1
glibc 2.34
gmpy2 2.1.0b5,2.1.5
gmsh 4.11.1,4.11.1
gnuplot 5.4.1,5.4.6
gomkl 2022b
gompi 2020b,2022b,2024a
gompic 2020b
googletest 1.10.0,1.12.1
gperf 3.1,3.1
gperftools 2.14
gpu_burn 20231110
graphite2 1.3.14,1.3.14
groff 1.22.4,1.22.4
grpcio 1.59.3
gsutil 4.42,5.10
gzip 1.10,1.12,1.13
h5py 3.1.0,3.1.0,3.2.1,3.8.0
hatchling 1.18.0,1.24.2
help2man 1.47.4,1.47.16,1.49.2,1.49.3
hiredis 1.2.0
hmmlearn 0.3.0
hunspell 1.7.1
hwloc 2.2.0,2.8.0,2.10.0
hypothesis 5.41.2,5.41.5,6.1.1,6.68.2,6.103.1
iccifort 2020.4.304
igraph 0.9.5,0.10.4,0.10.4,0.10.6,0.10.6,0.10.10
iimkl 2022b
iimpi 2020b,2022b,2024a
imageio 2.9.0,2.31.1
imgaug 0.4.0
imkl 2020.4.304,2020.4.304,2020.4.304,2022.2.1,2022.2.1,2024.2.0
imkl-FFTW 2022.2.1,2024.2.0
impi 2019.9.304,2021.7.1,2021.13.0
inih 57
intel 2020b,2022b,2024a
intel-compilers 2022.2.1,2024.2.0
intltool 0.51.0,0.51.0
iomkl 2020b,2022b
iompi 2020b,2022b
jax 0.2.19,0.3.25,0.4.25,0.4.25
jbigkit 2.1,2.1
jemalloc 5.2.1,5.3.0
json-c 0.16
json-fortran 8.3.0
jupyter-resource-usage 1.0.0
jupyter-server 2.7.0
jupyter-server-proxy 3.2.2
jupyterlmod 4.0.3
kallisto 0.48.0
kim-api 2.2.1,2.3.0
kineto 0.4.0
leidenalg 0.8.8,0.10.2
lftp 4.9.2
libGDSII 0.21
libGLU 9.0.1,9.0.2
libGridXC 0.9.6
libPSML 1.1.10
libRmath 4.1.0
libXp 1.0.3
libaec 1.0.6,1.0.6
libaio 0.3.112,0.3.113
libarchive 3.4.3,3.6.1,3.7.4
libavif 0.11.1,0.11.1
libcerf 1.14,2.3
libcifpp 5.0.6,7.0.3
libcint 5.5.0
libcircle 0.3,0.3
libctl 4.5.1
libdap 3.20.11
libdeflate 1.7,1.15
libdrm 2.4.102,2.4.114
libepoxy 1.5.4,1.5.10
libev 4.33
libevent 2.1.12,2.1.12,2.1.12
libexif 0.6.24,0.6.24
libfabric 1.11.0,1.16.1,1.21.0
libffi 3.3,3.4.4,3.4.5
libgcrypt 1.10.1
libgd 2.3.0,2.3.1,2.3.3
libgdiplus 6.1,6.1
libgeotiff 1.6.0,1.7.1
libgit2 1.1.0,1.5.0
libglvnd 1.3.2,1.6.0
libgpg-error 1.46
libharu 2.3.0
libiconv 1.16,1.17,1.17
libidn 1.41
libidn2 2.3.0,2.3.2
libjpeg-turbo 2.0.5,2.1.4
libleidenalg 0.11.1,0.11.1,0.11.1
libmcfp 1.2.2,1.3.3
libnsl 2.0.0
libogg 1.3.4,1.3.5
libopus 1.3.1
libpci 3.7.0
libpciaccess 0.16,0.17,0.18.1
libpng 1.2.59,1.5.30,1.6.37,1.6.38
libpsl 0.21.1
libreadline 8.0,8.2,8.2
librsvg 2.51.2
librttopo 1.1.0
libsigc++ 2.10.8
libsndfile 1.0.28,1.2.0
libsodium 1.0.18,1.0.18
libspatialindex 1.9.3
libspatialite 5.0.1
libtasn1 4.19.0
libtirpc 1.3.1,1.3.3
libtool 2.4.6,2.4.7,2.4.7
libunistring 0.9.10,1.1,1.1
libunwind 1.4.0,1.6.2
libvorbis 1.3.7,1.3.7
libwebkitgtk-1.0 1.2.4.9
libwebp 1.1.0,1.3.1
libwpe 1.14.1
libxc 4.3.4,4.3.4,5.1.2,5.1.5,6.1.0,6.1.0
libxml++ 2.40.1
libxml2 2.9.10,2.9.14,2.10.3,2.12.7
libxslt 1.1.34,1.1.37
libxsmm 1.16.1
libyaml 0.2.5,0.2.5
libzip 1.9.2
liftOver 2023
loompy 3.0.7
lpsolve 5.5.2.11
lwgrp 1.0.3,1.0.5
lxml 4.9.2
lz4 1.9.2,1.9.4,1.9.4
maeparser 1.3.1
magma 2.5.4,2.7.1,2.7.1
make 4.3,4.3,4.4.1,4.4.1
makeinfo 6.7,6.7,7.0.3
mapDamage 2.2.1
matlab-proxy 0.12.1,0.13.1,0.14.0,0.15.1,0.18.2,0.19.0
matplotlib 3.3.3,3.3.3,3.3.3,3.7.0
maturin 1.1.0,1.4.0,1.6.0
mctc-lib 0.3.1
meson-python 0.11.0,0.15.0,0.16.0
mfold_util 4.7
mgltools
miniconda 22.9.0,22.11.1,23.1.0,23.3.1,23.5.2,24.3.0,24.3.0,24.7.1,24.9.2
minimap2 2.22
minizip 1.1
ml_dtypes 0.3.1
mlpack 4.3.0,4.3.0
mm-common 1.0.4
mongolite 20240424,20240424
morphosamplers 0.0.10
motif 2.3.8,2.3.8
mpi4py 3.1.4
mpifileutils 0.11.1,0.11.1
mrc 1.3.6,1.3.13
mrcfile 1.3.0,1.5.0
mstore 0.2.0
muParser 2.3.4
multicharge 0.2.0
nanobind 2.1.0
napari 0.4.18
nbclassic 1.0.0
ncbi-vdb 2.10.9,3.0.10,3.1.1
ncdu 1.18
ncompress 4.2.4.6
ncurses 5.9,5.9,6.0,6.2,6.2,6.3,6.3,6.5,6.5
ncview 2.1.8,2.1.8
nedit-ng 2020.1
netCDF 4.6.1,4.7.4,4.7.4,4.7.4,4.7.4,4.9.0,4.9.0,4.9.0
netCDF-C++ 4.2
netCDF-C++4 4.3.1,4.3.1
netCDF-Fortran 4.4.4,4.5.3,4.5.3,4.5.3,4.5.3,4.6.0,4.6.0,4.6.0
netcdf4-python 1.6.3
nettle 3.6,3.8.1
networkx 2.5,2.5,2.5.1,3.0
nf-core 2.14.1
nghttp2 1.48.0
nghttp3 0.6.0
ngtcp2 0.7.0
nlohmann_json 3.11.2
nodejs 12.19.0,18.12.1,20.11.1
nsync 1.24.0,1.26.0
numactl 2.0.13,2.0.16,2.0.18
numba 0.58.1
nvofbf 2023.01
nvompi 2023.01
occt 7.5.0p1,7.5.0p1
p11-kit 0.24.1
p7zip 17.04
pam-devel 1.3.1
parallel 20210322
parameterized 0.9.0
patchelf 0.12,0.17.2,0.18.0
phonopy 2.27.0
phyx 1.3
picard 2.18.14,2.25.6
pigz 2.6,2.7
pixman 0.40.0,0.42.2
pkg-config 0.29.2,0.29.2
pkgconf 1.8.0,1.8.0,1.9.3,2.2.0
pkgconfig 1.5.1,1.5.5
plotly.py 4.14.3,5.13.1
pocl 1.6,1.8,5.0
poetry 1.5.1,1.7.1,1.8.3
poppler 21.06.1,21.06.1,22.12.0
popt 1.16
postgis 3.4.2
printproto 1.0.5
prompt-toolkit 3.0.36
protobuf 3.14.0,3.19.4,23.0
protobuf-python 3.14.0,3.19.4,4.23.0
psycopg2 2.9.9
pugixml 1.12.1
py-cpuinfo 9.0.0
py3Dmol 2.0.1.post1,2.1.0
pyFFTW 0.13.1
pySCENIC 0.12.1
pybind11 2.6.0,2.6.2,2.10.3,2.12.0,2.12.0
pydantic 2.5.3
pyfaidx 0.7.2.1
pyproj 3.5.0
pytest 7.4.2
pytest-flakefinder 1.1.0
pytest-rerunfailures 12.0
pytest-shard 0.1.2
pytest-workflow 2.0.1
pytest-xdist 2.3.0,3.3.1
python-igraph 0.9.8,0.11.4
python-isal 0.11.1
qrupdate 1.1.2
rMATS-turbo 4.1.1,4.1.2,4.2.0
rasterio 1.3.8
re2c 2.0.3,3.0
rpmrebuild 2.16,2.18
ruamel.yaml 0.17.21,0.17.21
samblaster 0.1.26
scanpy 1.9.8
scikit-build 0.11.1,0.11.1,0.17.2,0.17.6
scikit-build-core 0.9.3
scikit-image 0.18.1,0.18.1,0.18.3,0.21.0
scikit-learn 0.20.4,0.23.2,0.23.2,0.24.1,1.2.1
segemehl 0.3.4
seqtk 1.3
setuptools 64.0.3
setuptools-rust 1.9.0
shRNA 0.1
siscone 3.0.5
slurm-drmaa 1.1.3
snakemake 7.32.3
snappy 1.1.8,1.1.9,1.1.10
sparsehash 2.0.4
spglib-python 2.0.2,2.3.1
statsmodels 0.12.1,0.14.0
sympy 1.7.1,1.12
t-SNE-CUDA 3.0.1
tabix 0.2.6
tbb 2020.3,2021.9.0,2021.10.0,2021.13.0
tcsh 6.22.03,6.24.07
tensorboard 2.15.1
tesseract 5.3.0,5.3.0
texlive 20220321,20220321,20220321
time 1.9
tmux 3.4
topaz 0.2.5,0.2.5.20240417
torchvision 0.10.0,0.16.0
tqdm 4.56.2,4.60.0,4.64.1
ttyd 1.7.7
typing-extensions 3.7.4.3,4.9.0
umap-learn 0.5.3
unifdef 2.12
unrar 7.0.1
utf8proc 2.5.0,2.8.0
util-linux 2.36,2.38.1
virtualenv 20.23.1,20.26.2
watershed-workflow 1.4.0,1.4.0,1.5.0
wget 1.20.3
wpebackend-fdo 1.14.1
wrapt 1.15.0
wxPython 4.2.1
wxWidgets 3.1.4,3.1.4,3.2.0,3.2.2.1
x264 20201026,20230226
x265 3.3,3.5
xarray 2023.4.2,2023.4.2
xextproto 7.3.0
xmlf90 1.5.4
xorg-macros 1.19.2,1.19.3,1.20.1
xpdf 4.04
xprop 1.2.5,1.2.5
xtb 6.5.1,6.6.0,6.6.1,6.7.1
xxd 8.2.4220,9.0.1696
yaml-cpp 0.7.0,0.7.0
ycga-public 1.6.0,1.7.2,1.7.3,1.7.4,1.7.5,1.7.6,1.7.7
zlib 1.2.11,1.2.11,1.2.11,1.2.12,1.2.12,1.2.13,1.3.1,1.3.1
zstd 1.4.5,1.5.2,1.5.6

Partitions and Hardware

McCleary is made up of several kinds of compute nodes. We group them into (sometimes overlapping) Slurm partitions meant to serve different purposes. By combining the --partition and --constraint Slurm options, you can more finely control which nodes your jobs can run on.
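
For example, to restrict a batch job to Cascade Lake nodes in the day partition you could combine the two options like this, using a feature name from the Node Features columns below (my_job.sh is a placeholder for your batch script):

sbatch --partition=day --constraint=cascadelake my_job.sh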

Info

YCGA sequence data user? To avoid being charged for your CPU usage for YCGA-related work, make sure to submit jobs to the ycga partition with -p ycga.
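
For example: sbatch -p ycga my_job.sh, where my_job.sh is your batch script.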

Job Submission Limits

  • You are limited to 4 interactive app instances (of any type) at one time. Additional instances will be rejected until you delete older open instances. For OnDemand jobs, closing the window does not terminate the interactive app job. To terminate the job, click the "Delete" button in your "My Interactive Apps" page in the web portal.

  • Job submissions are limited to 200 jobs per hour. See the Rate Limits section in the Common Job Failures page for more info.

Public Partitions

See each tab below for more information about the available common use partitions.

Use the day partition for most batch jobs. This is the default if you don't specify one with --partition.

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120
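
As a sketch, a batch script for the day partition that overrides some of these defaults might look like the following (the resource values, module name, and command are placeholders, not recommendations):

#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=day
#SBATCH --time=02:00:00
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=8G

module load Python/3.10.8   # illustrative; load whatever software your job needs
python my_analysis.py       # replace with your actual command

Submit the script with sbatch followed by its filename.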

Job Limits

Jobs submitted to the day partition are subject to the following limits:

Limit Value
Maximum job time limit 1-00:00:00
Maximum CPUs per group 512
Maximum memory per group 6000G
Maximum CPUs per user 256
Maximum memory per user 3000G

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
26 8358 64 983 icelake, avx512, 8358, nogpu, bigtmp, common
5 6240 36 180 cascadelake, avx512, 6240, nogpu, bigtmp, standard, oldest, common

Use the devel partition for jobs with which you need ongoing interaction, such as exploratory analyses or debugging compilation.
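
For example, you could request an interactive allocation on devel like this (the values shown are illustrative and must fit within the limits below):

salloc --partition=devel --time=02:00:00 --cpus-per-task=2 --mem-per-cpu=8G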

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the devel partition are subject to the following limits:

Limit Value
Maximum job time limit 06:00:00
Maximum CPUs per user 4
Maximum memory per user 32G
Maximum submitted jobs per user 1

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
7 6240 36 180 cascadelake, avx512, 6240, nogpu, bigtmp, standard, oldest, common

Use the week partition for jobs that need a longer runtime than day allows.

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the week partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00
Maximum CPUs per group 192
Maximum memory per group 2949G
Maximum CPUs per user 192
Maximum memory per user 2949G

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
14 8358 64 983 icelake, avx512, 8358, nogpu, bigtmp, common
2 6240 36 180 cascadelake, avx512, 6240, nogpu, bigtmp, standard, oldest, common

Use the long partition for jobs that need a longer runtime than week allows.

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=7-00:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the long partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00
Maximum CPUs per group 36
Maximum CPUs per user 36

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
3 6240 36 180 cascadelake, avx512, 6240, nogpu, bigtmp, standard, oldest, common

Use the transfer partition to stage data for your jobs to and from cluster storage.
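
For example, a data-staging job could be wrapped in a single command like this (the paths and time limit are placeholders):

sbatch --partition=transfer --time=06:00:00 --wrap="rsync -av /path/to/source/ /path/to/destination/"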

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the transfer partition are subject to the following limits:

Limit Value
Maximum job time limit 1-00:00:00
Maximum CPUs per user 4
Maximum running jobs per user 4

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
2 72F3 8 227 milan, 72F3, nogpu, standard, common

Use the gpu partition for jobs that make use of GPUs. You must request GPUs explicitly with the --gpus option in order to use them. For example, --gpus=a5000:2 would request 2 NVIDIA RTX A5000 GPUs per node.
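
As a concrete sketch, an interactive GPU session could be requested like this (the GPU type and counts are illustrative; see the node table below for the GPUs actually present in this partition):

salloc --partition=gpu --time=02:00:00 --cpus-per-task=4 --gpus=a5000:1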

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the gpu partition are subject to the following limits:

Limit Value
Maximum job time limit 2-00:00:00
Maximum GPUs per group 24
Maximum GPUs per user 12

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
9 6326 32 206 a5000 4 24 icelake, avx512, 6326, doubleprecision, a5000, common
1 8358 64 983 a100 4 80 icelake, avx512, 8358, doubleprecision, bigtmp, common, a100, a100-80g
1 8358 64 983 a100 4 80 icelake, avx512, 8358, gpu, bigtmp, common, doubleprecision, a100, a100-80g
1 8358 64 984 a100 4 80 icelake, avx512, 8358, doubleprecision, bigtmp, common, a100, a100-80g
3 5222 8 163 rtx3090 4 24 cascadelake, avx512, 5222, doubleprecision, common, rtx3090
4 5222 8 163 rtx5000 4 16 cascadelake, avx512, 5222, doubleprecision, common, bigtmp, rtx5000

Use the gpu_devel partition to debug jobs that make use of GPUs, or to develop GPU-enabled code.

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the gpu_devel partition are subject to the following limits:

Limit Value
Maximum job time limit 06:00:00
Maximum CPUs per user 10
Maximum GPUs per user 2
Maximum submitted jobs per user 2

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
3 6326 32 206 a5000 4 24 icelake, avx512, 6326, doubleprecision, a5000, common
2 5222 8 163 rtx5000 4 16 cascadelake, avx512, 5222, doubleprecision, common, bigtmp, rtx5000
1 5222 8 163 rtx3090 4 24 cascadelake, avx512, 5222, doubleprecision, common, rtx3090
1 6240 36 352 a100 4 40 cascadelake, avx512, 6240, doubleprecision, common, bigtmp, oldest, a100, a100-40g

Use the bigmem partition for jobs that have memory requirements other partitions can't handle.
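
For example, a large-memory batch job might be submitted like this (the values are illustrative and must fit within the limits below; my_job.sh is a placeholder):

sbatch --partition=bigmem --time=12:00:00 --cpus-per-task=16 --mem=1500G my_job.sh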

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the bigmem partition are subject to the following limits:

Limit Value
Maximum job time limit 1-00:00:00
Maximum CPUs per user 32
Maximum memory per user 3960G

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
4 6346 32 3960 icelake, avx512, 6346, nogpu, bigtmp, common
3 6240 36 1486 cascadelake, avx512, 6240, nogpu, pi, bigtmp, oldest
2 6234 16 1486 cascadelake, avx512, 6234, nogpu, common, bigtmp

Use the scavenge partition to run preemptable jobs on more resources than normally allowed. For more information about scavenge, see the Scavenge documentation.

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the scavenge partition are subject to the following limits:

Limit Value
Maximum job time limit 1-00:00:00
Maximum CPUs per user 1000
Maximum memory per user 20000G

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
48 8362 64 479 icelake, avx512, 8362, nogpu, standard, pi
1 8358 64 1007 a5000 8 24 icelake, avx512, 8358, doubleprecision, bigtmp, pi, a5000
17 6326 32 206 a5000 4 24 icelake, avx512, 6326, doubleprecision, a5000, common
2 6326 32 984 a100 4 80 icelake, avx512, 6326, doubleprecision, pi, a100, a100-80g
40 8358 64 983 icelake, avx512, 8358, nogpu, bigtmp, common
1 8358 64 983 a100 4 80 icelake, avx512, 8358, doubleprecision, bigtmp, common, a100, a100-80g
1 8358 64 983 a100 4 80 icelake, avx512, 8358, doubleprecision, bigtmp, pi, a100, a100-80g
1 8358 64 983 a100 4 80 icelake, avx512, 8358, gpu, bigtmp, common, doubleprecision, a100, a100-80g
1 8358 64 984 a100 4 80 icelake, avx512, 8358, doubleprecision, bigtmp, common, a100, a100-80g
4 6346 32 1991 icelake, avx512, 6346, nogpu, pi
4 6326 32 479 a5000 4 24 icelake, avx512, 6326, doubleprecision, a5000, pi
1 8358 64 1007 l40s 8 48 icelake, avx512, 8358, doubleprecision, pi, bigtmp, l40s
4 6346 32 3960 icelake, avx512, 6346, nogpu, bigtmp, common
41 6240 36 180 cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi
4 6240 36 730 cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi
4 6240 36 352 cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi
12 6240 36 180 cascadelake, avx512, 6240, nogpu, bigtmp, standard, oldest, common
9 6240 36 163 cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi
2 6240 36 166 cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi
3 5222 8 163 rtx3090 4 24 cascadelake, avx512, 5222, doubleprecision, common, rtx3090
6 6240 36 1486 cascadelake, avx512, 6240, nogpu, pi, bigtmp, oldest
10 8268 48 352 cascadelake, avx512, 8268, nogpu, bigtmp, pi
1 6248r 48 352 cascadelake, avx512, 6248r, nogpu, pi, bigtmp
2 6234 16 1486 cascadelake, avx512, 6234, nogpu, common, bigtmp
1 6240 36 352 v100 4 16 cascadelake, avx512, 6240, pi, oldest, v100
4 5222 8 163 rtx5000 4 16 cascadelake, avx512, 5222, doubleprecision, common, bigtmp, rtx5000
8 5222 8 163 rtx5000 4 16 cascadelake, avx512, 5222, doubleprecision, pi, bigtmp, rtx5000
2 6240 36 352 a100 4 40 cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, a100, a100-40g
1 6226r 32 163 rtx3090 4 24 cascadelake, avx512, 6226r, doubleprecision, pi, rtx3090
2 6240 36 163 rtx2080ti 4 11 cascadelake, avx512, 6240, singleprecision, pi, bigtmp, oldest, rtx2080ti
1 6242 32 981 rtx8000 2 48 cascadelake, avx512, 6242, doubleprecision, pi, bigtmp, oldest, rtx8000
1 6240 36 352 rtx3090 8 24 cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090
1 6240 36 730 a100 4 40 cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, a100, a100-40g
1 6240 36 163 rtx3090 4 24 cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090
1 6240 36 163 rtx3090 8 24 cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090
1 6132 28 730 skylake, avx512, 6132, nogpu, standard, bigtmp, pi
2 6132 28 163 skylake, avx512, 6132, nogpu, standard, bigtmp, pi
2 5122 8 163 rtx2080 4 8 skylake, avx512, 5122, singleprecision, pi, rtx2080

Use the scavenge_gpu partition to run preemptable jobs on more GPU resources than normally allowed. For more information about scavenge, see the Scavenge documentation.

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the scavenge_gpu partition are subject to the following limits:

Limit Value
Maximum job time limit 1-00:00:00
Maximum GPUs per group 100
Maximum GPUs per user 64

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
1 8358 64 1007 a5000 8 24 icelake, avx512, 8358, doubleprecision, bigtmp, pi, a5000
17 6326 32 206 a5000 4 24 icelake, avx512, 6326, doubleprecision, a5000, common
2 6326 32 984 a100 4 80 icelake, avx512, 6326, doubleprecision, pi, a100, a100-80g
1 8358 64 983 a100 4 80 icelake, avx512, 8358, doubleprecision, bigtmp, common, a100, a100-80g
1 8358 64 983 a100 4 80 icelake, avx512, 8358, doubleprecision, bigtmp, pi, a100, a100-80g
1 8358 64 983 a100 4 80 icelake, avx512, 8358, gpu, bigtmp, common, doubleprecision, a100, a100-80g
1 8358 64 984 a100 4 80 icelake, avx512, 8358, doubleprecision, bigtmp, common, a100, a100-80g
4 6326 32 479 a5000 4 24 icelake, avx512, 6326, doubleprecision, a5000, pi
1 8358 64 1007 l40s 8 48 icelake, avx512, 8358, doubleprecision, pi, bigtmp, l40s
3 5222 8 163 rtx3090 4 24 cascadelake, avx512, 5222, doubleprecision, common, rtx3090
1 6240 36 352 v100 4 16 cascadelake, avx512, 6240, pi, oldest, v100
4 5222 8 163 rtx5000 4 16 cascadelake, avx512, 5222, doubleprecision, common, bigtmp, rtx5000
8 5222 8 163 rtx5000 4 16 cascadelake, avx512, 5222, doubleprecision, pi, bigtmp, rtx5000
2 6240 36 352 a100 4 40 cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, a100, a100-40g
1 6226r 32 163 rtx3090 4 24 cascadelake, avx512, 6226r, doubleprecision, pi, rtx3090
2 6240 36 163 rtx2080ti 4 11 cascadelake, avx512, 6240, singleprecision, pi, bigtmp, oldest, rtx2080ti
1 6242 32 981 rtx8000 2 48 cascadelake, avx512, 6242, doubleprecision, pi, bigtmp, oldest, rtx8000
1 6240 36 352 rtx3090 8 24 cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090
1 6240 36 730 a100 4 40 cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, a100, a100-40g
1 6240 36 163 rtx3090 4 24 cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090
1 6240 36 163 rtx3090 8 24 cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090
2 5122 8 163 rtx2080 4 8 skylake, avx512, 5122, singleprecision, pi, rtx2080

Private Partitions

With few exceptions, jobs submitted to private partitions are not considered when calculating your group's Fairshare. Your group can purchase additional hardware for private use, which we will make available as a pi_groupname partition. These nodes are purchased by you, but supported and administered by us. After vendor support expires, we retire compute nodes. Compute nodes can range from $10K to upwards of $50K depending on your requirements. If you are interested in purchasing nodes for your group, please contact us.

PI Partitions (click to expand)

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_bunick partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
1 6240 36 352 a100 4 40 cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, a100, a100-40g

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_butterwick partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
1 6240 36 352 a100 4 40 cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, a100, a100-40g

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_chenlab partition are subject to the following limits:

Limit Value
Maximum job time limit 14-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
1 8268 48 352 cascadelake, avx512, 8268, nogpu, bigtmp, pi

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=1-00:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_cryo_realtime partition are subject to the following limits:

Limit Value
Maximum job time limit 14-00:00:00
Maximum GPUs per user 12
Maximum running jobs per user 2

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
1 6326 32 206 a5000 4 24 icelake, avx512, 6326, doubleprecision, a5000, common

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=1-00:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_cryoem partition are subject to the following limits:

Limit Value
Maximum job time limit 4-00:00:00
Maximum CPUs per user 32
Maximum GPUs per user 12
Maximum running jobs per user 2

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
6 6326 32 206 a5000 4 24 icelake, avx512, 6326, doubleprecision, a5000, common

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_dewan partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
2 6240 36 163 cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_dijk partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
1 6240 36 352 rtx3090 8 24 cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_dunn partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
1 6240 36 163 cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_edwards partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
1 6240 36 163 cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_falcone partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
1 6240 36 163 cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi
1 6240 36 1486 cascadelake, avx512, 6240, nogpu, pi, bigtmp, oldest
1 6240 36 352 v100 4 16 cascadelake, avx512, 6240, pi, oldest, v100

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_galvani partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
7 8268 48 352 cascadelake, avx512, 8268, nogpu, bigtmp, pi

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_gerstein partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
1 6132 28 730 skylake, avx512, 6132, nogpu, standard, bigtmp, pi
2 6132 28 163 skylake, avx512, 6132, nogpu, standard, bigtmp, pi

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_gerstein_gpu partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
1 8358 64 983 a100 4 80 icelake, avx512, 8358, doubleprecision, bigtmp, pi, a100, a100-80g
1 6240 36 163 rtx3090 4 24 cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090
1 6240 36 163 rtx3090 8 24 cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, rtx3090

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_hall partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
2 6326 32 984 a100 4 80 icelake, avx512, 6326, doubleprecision, pi, a100, a100-80g
39 6240 36 180 cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_hall_bigmem partition are subject to the following limits:

Limit Value
Maximum job time limit 28-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
2 6240 36 1486 cascadelake, avx512, 6240, nogpu, pi, bigtmp, oldest

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_jetz partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
2 8358 64 1991 icelake, avx512, 8358, nogpu, bigtmp, pi
4 6240 36 730 cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi
4 6240 36 352 cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_kleinstein partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
2 6240 36 163 cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_krishnaswamy partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
1 6240 36 730 a100 4 40 cascadelake, avx512, 6240, doubleprecision, pi, bigtmp, oldest, a100, a100-40g

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_ma partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
1 8268 48 352 cascadelake, avx512, 8268, nogpu, bigtmp, pi

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_medzhitov partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
2 6240 36 166 cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_miranker partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
1 6248r 48 352 cascadelake, avx512, 6248r, nogpu, pi, bigtmp

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_ohern partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
4 8358 64 984 icelake, avx512, 8358, nogpu, bigtmp, pi

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_reinisch partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
2 5122 8 163 rtx2080 4 8 skylake, avx512, 5122, singleprecision, pi, rtx2080

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_sestan partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) Node Features
2 8358 64 1991 icelake, avx512, 8358, nogpu, bigtmp, pi

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_sigworth partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
1 6240 36 163 rtx2080ti 4 11 cascadelake, avx512, 6240, singleprecision, pi, bigtmp, oldest, rtx2080ti

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_sindelar partition are subject to the following limits:

Limit Value
Maximum job time limit 7-00:00:00

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

Count CPU Type CPUs/Node Memory/Node (GiB) GPU Type GPUs/Node vRAM/GPU (GB) Node Features
1 6240 36 163 rtx2080ti 4 11 cascadelake, avx512, 6240, singleprecision, pi, bigtmp, oldest, rtx2080ti

Request Defaults

Unless specified, your jobs will run with the following salloc and sbatch options for this partition.

--time=1-00:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_tomography partition are subject to the following limits:

Limit Value
Maximum job time limit 4-00:00:00
Maximum CPUs per user 32
Maximum GPUs per user 24
Maximum running jobs per user 2

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 8 | 5222 | 8 | 163 | rtx5000 | 4 | 16 | cascadelake, avx512, 5222, doubleprecision, pi, bigtmp, rtx5000 |
| 1 | 6242 | 32 | 981 | rtx8000 | 2 | 48 | cascadelake, avx512, 6242, doubleprecision, pi, bigtmp, oldest, rtx8000 |
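
Because this partition mixes GPU types, Slurm's type-qualified syntax can be used to request a specific model (a sketch; job.sh is a placeholder for your own batch script):

```bash
# Ask specifically for two rtx5000 GPUs rather than whichever type is free first
sbatch --partition=pi_tomography --gpus=rtx5000:2 job.sh
```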

Request Defaults

Unless specified otherwise, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_townsend partition are subject to the following limits:

| Limit | Value |
| --- | --- |
| Maximum job time limit | 7-00:00:00 |

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
| --- | --- | --- | --- | --- |
| 2 | 6240 | 36 | 180 | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |

Request Defaults

Unless specified otherwise, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_tsang partition are subject to the following limits:

| Limit | Value |
| --- | --- |
| Maximum job time limit | 7-00:00:00 |

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
| --- | --- | --- | --- | --- |
| 4 | 8358 | 64 | 983 | icelake, avx512, 8358, nogpu, bigtmp, pi |

Request Defaults

Unless specified otherwise, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_ya-chi_ho partition are subject to the following limits:

| Limit | Value |
| --- | --- |
| Maximum job time limit | 7-00:00:00 |

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
| --- | --- | --- | --- | --- |
| 1 | 8268 | 48 | 352 | cascadelake, avx512, 8268, nogpu, bigtmp, pi |

Request Defaults

Unless specified otherwise, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

GPU jobs need GPUs!

Jobs submitted to this partition do not request a GPU by default. You must request one with the --gpus option.

Job Limits

Jobs submitted to the pi_yong_xiong partition are subject to the following limits:

| Limit | Value |
| --- | --- |
| Maximum job time limit | 7-00:00:00 |

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Count | CPU Type | CPUs/Node | Memory/Node (GiB) | GPU Type | GPUs/Node | vRAM/GPU (GB) | Node Features |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 8358 | 64 | 1007 | a5000 | 8 | 24 | icelake, avx512, 8358, doubleprecision, bigtmp, pi, a5000 |
| 4 | 6326 | 32 | 479 | a5000 | 4 | 24 | icelake, avx512, 6326, doubleprecision, a5000, pi |
| 1 | 8358 | 64 | 1007 | l40s | 8 | 48 | icelake, avx512, 8358, doubleprecision, pi, bigtmp, l40s |
| 1 | 6226r | 32 | 163 | rtx3090 | 4 | 24 | cascadelake, avx512, 6226r, doubleprecision, pi, rtx3090 |

Request Defaults

Unless specified otherwise, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the pi_zhao partition are subject to the following limits:

| Limit | Value |
| --- | --- |
| Maximum job time limit | 7-00:00:00 |

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
| --- | --- | --- | --- | --- |
| 2 | 6240 | 36 | 163 | cascadelake, avx512, 6240, nogpu, bigtmp, standard, pi |

YCGA Partitions

The following partitions are intended for projects related to the Yale Center for Genome Analysis. Please do not use these partitions for other projects. Access is granted on a group basis. If you need access to these partitions, please contact us to get approved and added.

Request Defaults

Unless specified otherwise, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the ycga partition are subject to the following limits:

| Limit | Value |
| --- | --- |
| Maximum job time limit | 2-00:00:00 |
| Maximum CPUs per group | 1024 |
| Maximum memory per group | 3934G |
| Maximum CPUs per user | 256 |
| Maximum memory per user | 1916G |

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
| --- | --- | --- | --- | --- |
| 40 | 8362 | 64 | 479 | icelake, avx512, 8362, nogpu, standard, pi |
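
As a minimal sketch, a batch script submitted to the ycga partition within these limits might begin like this (the resource values and the job's commands are placeholders):

```bash
#!/bin/bash
#SBATCH --partition=ycga
#SBATCH --time=1-00:00:00    # within the 2-day partition limit
#SBATCH --cpus-per-task=8    # well under the 256-CPU per-user limit
#SBATCH --mem-per-cpu=5G

# load the modules your analysis needs, then run it here
```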

Request Defaults

Unless specified otherwise, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
| --- | --- | --- | --- | --- |
| 2 | 8362 | 64 | 479 | icelake, avx512, 8362, nogpu, standard, pi |

Request Defaults

Unless specified otherwise, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the ycga_bigmem partition are subject to the following limits:

| Limit | Value |
| --- | --- |
| Maximum job time limit | 4-00:00:00 |
| Maximum CPUs per user | 64 |
| Maximum memory per user | 1991G |

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
| --- | --- | --- | --- | --- |
| 4 | 6346 | 32 | 1991 | icelake, avx512, 6346, nogpu, pi |

Request Defaults

Unless specified otherwise, your jobs will run with the following salloc and sbatch options for this partition.

--time=01:00:00 --nodes=1 --ntasks=1 --cpus-per-task=1 --mem-per-cpu=5120

Job Limits

Jobs submitted to the ycga_long partition are subject to the following limits:

| Limit | Value |
| --- | --- |
| Maximum job time limit | 14-00:00:00 |
| Maximum CPUs per group | 64 |
| Maximum memory per group | 479G |
| Maximum CPUs per user | 32 |
| Maximum memory per user | 239G |

Available Compute Nodes

Requests for --cpus-per-task and --mem can't exceed what is available on a single compute node.

| Count | CPU Type | CPUs/Node | Memory/Node (GiB) | Node Features |
| --- | --- | --- | --- | --- |
| 6 | 8362 | 64 | 479 | icelake, avx512, 8362, nogpu, standard, pi |

Public Datasets

We host datasets of general interest in a loosely organized directory tree in /gpfs/gibbs/data:

├── alphafold-2.3
├── alphafold-2.2 (deprecated)
├── alphafold-2.0 (deprecated)
├── annovar
│   └── humandb
├── cryoem
├── db
│   ├── annovar
│   ├── blast
│   ├── busco
│   └── Pfam
├── genomes
│   ├── 1000Genomes
│   ├── 10xgenomics
│   ├── Aedes_aegypti
│   ├── Bos_taurus
│   ├── Chelonoidis_nigra
│   ├── Danio_rerio
│   ├── Drosophila_melanogaster
│   ├── Gallus_gallus
│   ├── hisat2
│   ├── Homo_sapiens
│   ├── Macaca_mulatta
│   ├── Mus_musculus
│   ├── Monodelphis_domestica
│   ├── PhiX
│   ├── Saccharomyces_cerevisiae
│   └── tmp
└── hisat2
    └── mouse
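
A quick way to see what a given dataset directory holds is to list it directly; for example (both paths appear in the tree above, but check the contents before relying on specific files):

```bash
# Browse the hosted reference genomes and the shared BLAST databases
ls /gpfs/gibbs/data/genomes
ls /gpfs/gibbs/data/db/blast
```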

If you would like us to host a dataset, or if you have questions about what is currently available, please contact us.

YCGA Data

Data associated with YCGA projects and sequencers are located on the YCGA storage system, accessible at /gpfs/ycga.

For more information on accessing this data, as well as sequencing data retention policies, see the YCGA Data documentation.

Storage

McCleary has access to a number of shared filesystems. /vast/palmer is McCleary's primary filesystem, where Home and Scratch60 directories are located. Every group on McCleary also has access to a Project allocation on the Gibbs filesystem at /gpfs/gibbs. For more details on the different storage spaces, see our Cluster Storage documentation.

You can check your current storage usage and limits by running the getquota command. Your ~/project and ~/palmer_scratch directories are shortcuts to your project and scratch storage spaces; get a list of the absolute paths to your directories with the mydirectories command. If you want to share data in your Project or Scratch directory, see the permissions page.
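
For example, to check usage and resolve those shortcut paths from the command line:

```bash
# Show your current storage usage and limits
getquota

# Print the absolute paths behind the ~/project and ~/palmer_scratch shortcuts
mydirectories
```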

For information on data recovery, see the Backups and Snapshots documentation.

Warning

Files stored in palmer_scratch are purged if they are older than 60 days. You will receive an email alert one week before they are deleted. Artificially extending scratch file expiration is forbidden without explicit approval from the YCRC. If you need longer-term storage, please purchase additional storage.
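
To see which of your scratch files are approaching the purge window so you can move anything you still need, something like the following can help (a sketch; it assumes the ~/palmer_scratch shortcut described above):

```bash
# List regular files not modified in more than 50 days, i.e. close to the 60-day purge
find ~/palmer_scratch -type f -mtime +50 -ls
```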

| Partition | Root Directory | Storage | File Count | Backups | Snapshots |
| --- | --- | --- | --- | --- | --- |
| home | /vast/palmer/home.mccleary | 125GiB/user | 500,000 | Yes | >=2 days |
| project | /gpfs/gibbs/project | 1TiB/group, increase to 4TiB on request | 5,000,000 | No | >=2 days |
| scratch | /vast/palmer/scratch | 10TiB/group | 15,000,000 | No | No |

Last update: April 7, 2025