NIIF Systems
These systems are maintained by NIIF, the Hungarian National Information Infrastructure Development Institute.
There are useful guides about them on the NIIF wiki, but mostly in Hungarian.
Legacy information: Computing Service Legacy
Debrecen
The system is maintained by NIIF.
Specification
Property | Value |
---|---|
Type | SGI ICE8400EX |
# of CPUs / node | 2 |
# of cores / CPU | 6 |
Memory / node | 47 GB |
Memory / core | 3.9 GB |
CPU | Intel Xeon X5680 @ 3.33 GHz, SMT on |
Architecture | x86_64 / intel64 / em64t little-endian |
Scheduler | SGE |
MPI | SGI MPT (mpt) |
ESZR | local |
Shell Environment
Precompiled Environment
From Skynet, sync up the precompiled environment:
cd /share/niif/debrecen
sshput
The precompiled environment does not contain the program binaries and data libraries. To transfer a PROGRAM:
cd /share/niif/debrecen/pkg
sshtx put debrecen PROGRAM
Shell Framework
Login to debrecen:
sshin debrecen
On debrecen:
cd $HOME
git clone git://github.com/thieringgergo/shf3.git
Source and set up the Shell Framework in $HOME/.profile:
source $HOME/shf3/bin/shfrc
# set the prompt
shf3/ps1 DEBRECEN[\\h]
# set framework features
shf3/alias yes
shf3/screen yes
shf3/mc/color yes
# screen workaround
if shf3/is/screen ; then
  source "/etc/profile.d/modules.sh"
fi
# tab complete
source $HOME/shf3/bin/complete
Parallel Compressor
Enable the parallel compressor for the framework:
cd $HOME
echo "sys_zip_xg_gz=pigz" > shf3/lib/sys/zip/xg/config.$USER
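Before relying on it, you can quickly check that pigz is actually installed on the login node (a minimal sanity check; whether pigz is available is site-dependent):
# check that pigz is in the PATH
command -v pigz || echo "pigz not found in PATH"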
Module Environment
ESZR is our unified computing environment. Enable ESZR system modules in $HOME/.profile:
# reset
module purge
module use ${HOME}/site/eszr/mod/common
module load eszr/local
module load eszr/env/local
module load eszr/sys/niif/debrecen
module load eszr/sys/niif/debrecen.mpt
module use ${HOME}/site/eszr/mod/local
module load sgi/2011
source ${ESZR_ROOT}/env/alias
Logout and login again.
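After logging back in, a quick way to verify that the ESZR environment is active is to list the loaded modules and the ESZR root (a minimal check, assuming the standard module command is available in your shell):
# show loaded modules and the ESZR root directory
module list
echo $ESZR_ROOT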
Scheduler
The job scheduler is SGE.
General information about the queues:
qstat -g c
Queue | Allowed Groups | Purpose |
---|---|---|
test.q | ALL | Test queue (2 nodes) |
debrecen.q | ALL | Production |
General information on jobs:
qstat -u "*"
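To list only your own jobs:
qstat -u $USER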
Job Setup
Set up the queue file and edit the parameters:
cd $HOME/shf3/mid/que
cp templates/niif/debrecen .
mcedit debrecen
Job template is in $HOME/shf3/mid/que/templates/niif/debrecen.job
Job Monitoring
Average node utilization of a job:
jobmon JOBID
Per node utilization:
pcpview -j JOBID
Check the last 3 columns of cpu:
us - user load
sy - system load
id - idle
The user load should be around the maximum and the other two around 0. Node utilization chart:
pcpview -c -j JOBID
Since SMT is enabled, the maximum utilization is 50%; in the chart this corresponds to 12 (the number of physical cores per node).
Special Options
If you need to allocate a full node but want to start an arbitrary number of MPI processes, set in the job file:
SLTPN=12
This specifies the number of SGE slots per node; the total number of slots will be NODES*SLTPN.
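For example, a hypothetical job file requesting 2 full nodes with 12 MPI processes each would request 24 SGE slots in total:
# hypothetical example: 2 full nodes, 12 MPI processes per node
NODES=2
SLTPN=12
# total SGE slots = NODES*SLTPN = 2*12 = 24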
Pécs
The system is maintained by NIIF.
Specification
Property | Value |
---|---|
Type | SGI UV 1000 |
# of CPUs / node | 2 |
# of cores / CPU | 6 |
Memory | 6 TB |
CPU | Intel Xeon X7542 @ 2.66 GHz, SMT off |
Architecture | x86_64 / intel64 / em64t little-endian |
Scheduler | SGE |
MPI | SGI MPT (mpt) |
ESZR | local |
Shell Environment
Precompiled Environment
From Skynet, sync up the precompiled environment:
cd /share/niif/pecs
sshput
The precompiled environment does not contain the program binaries and data libraries. To transfer a PROGRAM:
cd /share/niif/pecs/pkg
sshtx put pecs PROGRAM
Shell Framework
Login to pecs:
sshin pecs
On pecs:
cd $HOME
git clone git://github.com/hornos/shf3.git
Source and set up the Shell Framework in $HOME/.profile:
source $HOME/shf3/bin/shfrc
# set the prompt
shf3/ps1 PECS[\\h]
# set framework features
shf3/alias yes
shf3/screen yes
shf3/mc/color yes
# screen workaround
if shf3/is/screen ; then
  source "/etc/profile.d/modules.sh"
fi
# tab complete
source $HOME/shf3/bin/complete
Parallel Compressor
Enable the parallel compressor for the framework:
cd $HOME
echo "sys_zip_xg_gz=pigz" > shf3/lib/sys/zip/xg/config.$USER
Module Environment
ESZR is our unified computing environment. Enable ESZR system modules in $HOME/.profile:
# reset
module purge
module use ${HOME}/site/eszr/mod/common
module load eszr/local
module load eszr/env/local
module load eszr/sys/niif/pecs
module load eszr/sys/niif/pecs.mpt
module use ${HOME}/site/eszr/mod/local
module load sgi/2011
source ${ESZR_ROOT}/env/alias
Logout and login again.
Scheduler
The job scheduler is SGE.
General information about the queues:
qstat -g c
Queue | Allowed Groups | Purpose |
---|---|---|
test.q | ALL | Test queue |
pecs.q | ALL | Production |
General information on jobs:
qstat -u "*"
Job Setup
Set up the queue file and edit the parameters:
cd $HOME/shf3/mid/que
cp templates/niif/pecs .
mcedit pecs
Job template is in $HOME/shf3/mid/que/templates/niif/pecs.job
Job Monitoring
Currently, monitoring is only possible via the utilization chart. You also have to enable the numainfo script in the queue file. In the job's submit directory:
pcpview -c -j StdOut
Maximum utilization in the chart is 6 (# of cores per node).
Special Options
The UV is a ccNUMA SMP machine; you allocate CPU sockets and cores within a single node. It is mandatory to set the following in the job file:
NODES=1
The total number of SGE slots will be SCKTS*CORES.
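For example, a hypothetical allocation of 4 sockets with all 6 of their cores would request 24 SGE slots:
# hypothetical example: 4 CPU sockets, 6 cores per socket
NODES=1
SCKTS=4
CORES=6
# total SGE slots = SCKTS*CORES = 4*6 = 24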
Szeged
The system is maintained by NIIF.
Specification
Property | Value |
---|---|
Type | HP CP4000BL |
# of CPUs / node | 4 |
# of cores / CPU | 12 |
Memory / node | 132 GB |
Memory / core | 2.75 GB |
CPU | AMD Opteron 6174 @ 2.2 GHz |
Architecture | x86_64 / intel64 / em64t little-endian |
Scheduler | SGE |
MPI | Intel (impi) |
ESZR | local |
Shell Environment
Precompiled Environment
From Skynet, sync up the precompiled environment:
cd /share/niif/szeged
sshput
The precompiled environment does not contain the program binaries and data libraries. To transfer a PROGRAM:
cd /share/niif/szeged/pkg
sshtx put szeged PROGRAM
Shell Framework
Login to szeged:
sshin szeged
On szeged:
cd $HOME
git clone git://github.com/hornos/shf3.git
Source and set up the Shell Framework in $HOME/.bash_profile:
source $HOME/shf3/bin/shfrc
# set the prompt
shf3/ps1 SZEGED[\\h]
# set framework features
shf3/alias yes
shf3/screen yes
shf3/mc/color yes
# screen workaround
if shf3/is/screen ; then
  source "/etc/profile.d/modules.sh"
fi
# tab complete
source $HOME/shf3/bin/complete
Parallel Compressor
Enable the parallel compressor for the framework:
cd $HOME
echo "sys_zip_xg_gz=pigz" > shf3/lib/sys/zip/xg/config.$USER
Module Environment
ESZR is our unified computing environment. Enable ESZR system modules in $HOME/.bash_profile:
# reset
module purge
module use ${HOME}/site/eszr/mod/common
module load eszr/local
module load eszr/env/local
module load eszr/sys/niif/szeged
module use ${HOME}/site/eszr/mod/local
source ${ESZR_ROOT}/env/alias
Scheduler
The job scheduler is SGE.
General information about the queues:
qstat -g c
Queue | Allowed Groups | Purpose |
---|---|---|
test.q | ALL | Test queue (2 nodes) |
szeged.q | ALL | Production |
General information on jobs:
qstat -u "*"
Job Setup
Set up the queue file and edit the parameters:
cd $HOME/shf3/mid/que
cp templates/niif/szeged .
mcedit szeged
Job template is in $HOME/shf3/mid/que/templates/niif/szeged.job
Special Options
If you need to allocate a full node but want to start an arbitrary number of MPI processes, set in the job file:
SLTPN=48
This specifies the number of SGE slots per node; the total number of slots will be NODES*SLTPN.
If you run out of memory, you can use the following combinations in hybrid MPI-OMP mode. Set hybrid mode in the job file:
MODE=mpiomp/impi
and the socket/core numbers according to your needs (see the sketch after the table).
SCKTS (# of MPI procs / node) | CORES (# of OMP threads / MPI proc) | Memory / MPI proc |
---|---|---|
2 | 24 | 66 GB |
4 | 12 | 33 GB |
8 | 6 | 16.5 GB |
12 | 4 | 11 GB |
24 | 2 | 5.5 GB |
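A minimal sketch of the relevant job file lines, assuming the NODES/SCKTS/CORES keys shown above and picking the 4 x 12 row (one MPI process per socket, 33 GB per process):
# hypothetical hybrid setup on szeged: 4 MPI processes per node,
# 12 OpenMP threads per process (one MPI process per CPU socket)
MODE=mpiomp/impi
# single node in this hypothetical example
NODES=1
SCKTS=4
CORES=12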
Budapest
The system is maintained by NIIF.
Specification
Property | Value |
---|---|
Type | HP CP4000BL |
# of CPUs / node | 2 |
# of cores / CPU | 12 |
Memory / node | 66 GB |
Memory / core | 2.75 GB |
CPU | AMD Opteron 6174 @ 2.2 GHz |
Architecture | x86_64 / intel64 / em64t little-endian |
Scheduler | SGE |
MPI | Intel (impi) |
ESZR | local |
Shell Environment
Precompiled Environment
From Skynet, sync up the precompiled environment:
cd /share/niif/budapest
sshput
The precompiled environment does not contain the program binaries and data libraries. To transfer a PROGRAM:
cd /share/niif/budapest/pkg
sshtx put budapest PROGRAM
Shell Framework
Login to budapest:
sshin budapest
On budapest:
cd $HOME
git clone git://github.com/hornos/shf3.git
Source and set up the Shell Framework in $HOME/.bash_profile:
source $HOME/shf3/bin/shfrc
# set the prompt
shf3/ps1 BUDAPEST[\\h]
# set framework features
shf3/alias yes
shf3/screen yes
shf3/mc/color yes
# screen workaround
if shf3/is/screen ; then
  source "/etc/profile.d/modules.sh"
fi
# tab complete
source $HOME/shf3/bin/complete
Parallel Compressor
Enable the parallel compressor for the framework:
cd $HOME
echo "sys_zip_xg_gz=pigz" > shf3/lib/sys/zip/xg/config.$USER
Module Environment
ESZR is our unified computing environment. Enable ESZR system modules in $HOME/.bash_profile:
# reset
module purge
module use ${HOME}/site/eszr/mod/common
module load eszr/local
module load eszr/env/local
module load eszr/sys/niif/budapest
module use ${HOME}/site/eszr/mod/local
source ${ESZR_ROOT}/env/alias
Scheduler
The job scheduler is SGE.
General information about the queues:
qstat -g c
Queue | Allowed Groups | Purpose |
---|---|---|
test.q | ALL | Test queue (2 nodes) |
budapest.q | ALL | Production |
General information on jobs:
qstat -u "*"
Job Setup
Set up the queue file and edit the parameters:
cd $HOME/shf3/mid/que
cp templates/niif/budapest .
mcedit budapest
Job template is in $HOME/shf3/mid/que/templates/niif/budapest.job
Special Options
If you need to allocate a full node but want to start an arbitrary number of MPI processes, set in the job file:
SLTPN=24
This specifies the number of SGE slots per node; the total number of slots will be NODES*SLTPN.
If you run out of memory, you can use the following combinations in hybrid MPI-OMP mode. Set hybrid mode in the job file:
MODE=mpiomp/impi
and the socket/core numbers according to your needs (see the sketch after the table).
SCKTS (# of MPI proc / node) | CORES (# of OMP threads / MPI proc) | Memory / MPI proc |
---|---|---|
2 | 12 | 33 GB |
4 | 6 | 16.5 GB |
8 | 3 | 8.3 GB |
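Similarly, a minimal sketch for budapest picking the 2 x 12 row, which gives each MPI process the largest memory share (66 GB / 2 = 33 GB):
# hypothetical memory-saving setup on budapest: 2 MPI processes per node,
# 12 OpenMP threads per process
MODE=mpiomp/impi
# single node in this hypothetical example
NODES=1
SCKTS=2
CORES=12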