TPAC offers the following HPC facilities for research purposes:
Hardware:
System Name | Vendor model | Nodes | Cores | Memory | Storage | OS
---|---|---|---|---|---|---
kunanyi | Huawei E9000 (Intel CPU) | 240 | 6720 | 128 GB per node | CephFS | CentOS 7.4
kunanyi (expansion) | Dell R6525 (AMD CPU) | 3 | 384 | 1 TB per node | CephFS | CentOS 7.4
The cluster provides:
- 240 nodes, each with 28 CPU cores and 128 GB RAM
- an InfiniBand interconnect to support large MPI jobs (see the sketch after this list)
- 2 nodes with 128 CPU cores and 1 TB RAM
- 1 PB of Ceph storage and scratch
- 2 PB of tape archive
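As a minimal sketch of what an InfiniBand-backed MPI workflow looks like, the following builds and launches an MPI program using the openmpi module from the list further below; the module name, program name, and core counts are illustrative only:

```bash
# Load an MPI toolchain (exact module name/version is an assumption -- check `module avail`)
module load openmpi

# Compile a simple MPI program
mpicc -O2 hello_mpi.c -o hello_mpi

# Run across 56 ranks (e.g. two of the standard 28-core nodes).
# On the cluster this would normally be launched from inside a scheduler
# job script rather than directly on the login node.
mpirun -np 56 ./hello_mpi
```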
Supported Libraries / Modules:
The following is a sample of the modules that are installed. Use “module avail” to view the specific versions provided on the system you are using (see the example after this list):
R
ansys
blast
blender
bundler
bzip2
chkfeature
curl
davfs2
dot
ferret
gaussian
git
het
intel
kmh
matlab
matlab-compiler
merlin
mira
module-cvs
module-info
modules
mpfr
mpt
mrbayes
mvapich2_intel
ncl
nco
neon
netcdf
null
octave
openmpi
paup
pcre
perfboost
perfcatcher
pism
plink
povray
python
rsync
scirun
stacks
starccm+
structure
svn
test
udunits
use.own
valgrind
zlib
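For example, to find and load a package from the list above (the package name is illustrative; versions differ between systems):

```bash
module avail              # list all installed modules
module avail netcdf       # narrow the listing to a single package
module load netcdf        # add the default version to your environment
module list               # show the modules currently loaded
```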
Supported usage profile:
The default queue allows up to 4000 cores or 3000 jobs to run for 48 hours. Other queues allow jobs to run for longer, but with fewer cores. For further information, please contact TPAC.
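As a rough sketch only, assuming a PBS-style scheduler and the default queue described above (confirm the actual scheduler, queue names, and resource syntax with TPAC), a job that stays within the 48-hour limit might look like:

```bash
#!/bin/bash
#PBS -N example-job
#PBS -l select=2:ncpus=28:mem=120gb   # two standard 28-core nodes (resource values are illustrative)
#PBS -l walltime=48:00:00             # maximum run time of the default queue

cd "$PBS_O_WORKDIR"                   # run from the directory the job was submitted from
module load openmpi                   # module name taken from the list above
mpirun ./my_program                   # with PBS-aware OpenMPI, mpirun picks up the allocated cores
```

With a PBS-style scheduler the script would be submitted with `qsub` and monitored with `qstat`.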
For users requiring more compute, we recommend the following options:
If you would like to use the TPAC HPC facilities, you will need to sign up for an account, and meet the requirements outlined here.