TPAC offers the following HPC facilities for research purposes:
Hardware:
| System Name | Vendor / Model | Nodes | Cores | Memory | Storage | OS |
|---|---|---|---|---|---|---|
| kunanyi-ohpc | Huawei E9000 | 256 | 7168 | 32TB total (128GB per node) | DMF | CentOS 7.4 |
All systems are provided with:
- 100TB of DMF cache
- 70TB of SAS scratch
- 2PB of tape storage
- Full connectivity via an InfiniBand fabric
NOTE: One node in kunanyi has 512GB of RAM instead of the standard 128GB. Jobs requesting more than 128GB of RAM are automatically routed to a queue dedicated to this node.
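As an illustration only, the sketch below shows how such a high-memory job might be requested. It assumes a Slurm-style batch scheduler (commonly bundled with OpenHPC builds); the exact scheduler, queue names, and the workload script `analysis.R` are assumptions, so consult TPAC's job submission documentation for the actual syntax.

```
#!/bin/bash
# Hypothetical high-memory job script -- a sketch assuming a Slurm scheduler.
#SBATCH --job-name=highmem-example
#SBATCH --nodes=1
#SBATCH --ntasks=28        # one full node (28 cores per node)
#SBATCH --mem=256G         # more than 128GB, so the job should be routed to the 512GB node's queue
#SBATCH --time=02:00:00

module load R              # load whichever modules your job needs
Rscript analysis.R         # placeholder workload
```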
Supported Libraries / Modules:
The following is a sample of the modules that are installed. Use "module avail" to view the specific versions provided on the system you are using (see the usage example after the list):
R
ansys
blast
blender
bundler
bzip2
chkfeature
curl
davfs2
dot
ferret
gaussian
git
het
intel
kmh
matlab
matlab-compiler
merlin
mira
module-cvs
module-info
modules
mpfr
mpt
mrbayes
mvapich2_intel
ncl
nco
neon
netcdf
null
octave
openmpi
paup
pcre
perfboost
perfcatcher
pism
plink
povray
python
rsync
scirun
stacks
starccm+
structure
svn
test
udunits
use.own
valgrind
zlib
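Modules are managed with the standard environment modules system. As a brief usage illustration (the package chosen here, netcdf, is just an example from the list above and the installed versions may differ):

```
# List every module and version available on the system
module avail

# Search for a specific package
module avail netcdf

# Load modules into the current shell environment
module load intel
module load netcdf

# Show what is currently loaded, and unload everything when finished
module list
module purge
```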
Supported usage profile:
By default, a user has access to up to 128 cores and can schedule jobs into a number of queues. For further information, please contact TPAC.
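For orientation only, a minimal sketch of inspecting the available queues and submitting a job within that default allocation, again assuming a Slurm-style scheduler and a hypothetical batch script `myjob.sh`:

```
# Show the queues (partitions) available to you
sinfo

# Submit a batch script (for example, the high-memory sketch above) and check its state
sbatch myjob.sh
squeue -u $USER
```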
For users requiring more compute, we recommend the following options:
If you would like to use the TPAC HPC facilities, you will need to sign up for an account and meet the requirements outlined here.