HPC Facilities

TPAC offers the following HPC facilities for research purposes:

Hardware:

System Name | Vendor model | Nodes | Cores | Memory | Storage | OS
kunanyi | Huawei E9000 | 248 | 6944 | 32TB (128GB per node) | DMF | CentOS 6.8
Eddy | SGI UV1000 | 1 | 512 | 4TB | Direct-attached + DMF | SLES 11 SP4
KNL | Intel Xeon Phi Knights Landing 7210 | 4 | 64 | 96GB | DMF + local SSD | CentOS 7.3
GPU nodes | Tesla K80 GPUs | 8 | - | - | - | User selected
Vortex | SGI ICE 8200 | 64 | 512 | 1TB total (16GB per node) | DMF | SLES 11 SP2
Katabatic | SGI ICE 8200 | 64 | 512 | 1.7TB total (4 x 96GB, 4 x 48GB, 16 x 16GB, 40 x 12GB) | DMF | SLES 10 SP2

* SLES = SUSE Linux Enterprise Server

All systems are provided with:

  • 100TB of DMF cache
  • 70TB of SAS scratch
  • 2PB of tape storage
  • Full connectivity via an InfiniBand fabric


Supported Libraries / Modules:

The following is a sample of the modules that are installed. Use “module avail” to view the specific versions provided on the system you are using (example commands follow the list):

R
ansys
blast
blender
bundler
bzip2
chkfeature
curl
davfs2
dot
ferret
gaussian
git
het
intel
kmh
matlab
matlab-compiler
merlin
mira
module-cvs
module-info
modules
mpfr
mpt
mrbayes
mvapich2_intel
ncl
nco
neon
netcdf
null
octave
openmpi
paup
pcre
perfboost
perfcatcher
pism
plink
povray
python
rsync
scirun
stacks
starccm+
structure
svn
test
udunits
use.own
valgrind
zlib
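
For example, a typical Environment Modules session looks like the following. The module name (netcdf) is taken from the list above; the versions reported will differ between systems:

    module avail netcdf      # list the netcdf versions installed on this system
    module load netcdf       # load the default netcdf version into your environment
    module list              # show the modules currently loaded
    module unload netcdf     # remove it from your environment again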

Supported usage profile:

A default user has access to up to 128 cores and can schedule jobs into a number of queues (see the example job script below). For further information, please contact TPAC.
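
As a rough illustration only, the sketch below assumes a PBS-style scheduler; the queue name, resource request, and program name are placeholders, so consult TPAC for the actual scheduler, queue names, and resource syntax on each system:

    #!/bin/bash
    #PBS -N example_job            # job name (placeholder)
    #PBS -q workq                  # queue name is a placeholder; ask TPAC which queues you may use
    #PBS -l select=1:ncpus=16      # request 16 cores on 1 node, within the 128-core default limit
    #PBS -l walltime=02:00:00      # 2-hour wall-time limit

    cd $PBS_O_WORKDIR              # run from the directory the job was submitted from
    module load openmpi            # load an MPI module from the list above
    mpirun ./my_program            # my_program is a placeholder for your own executable

Under the same assumption, such a script would be submitted with "qsub job.sh" and monitored with "qstat".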

For users requiring more compute, we recommend the following options:

If you would like to use the TPAC HPC facilities, you will need to sign up for an account and meet the requirements outlined here.