Computing Service Legacy
THIS PAGE CONTAINS UNSUPPORTED LEGACY INFORMATION
Introduction
Security Tips
First, please follow our security advice:
- Use a Unix-based or Unix-like operating system. We support and recommend OS X, SuSE and Ubuntu Linux.
- Enable automatic OS updates and apply them frequently.
- Use a strict firewall and strong passwords. Shpak includes a console-based password manager for generating random passwords.
- Keep your SSH private keys safe and protect them with strong passphrases as well.
- Set up and use GPG-encrypted mail and chat.
- Use strong encryption for your private data.
- Use strong encryption for cloud-based drives as well. Shpak supports directory encryption via the FUSE encfs module (see the sketch below). If you need interoperability, try Wuala.
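If you prefer to call encfs directly rather than through Shpak, a minimal sketch looks like this (the directory names are illustrative only):
encfs ~/.private.enc ~/private   # mount; the encrypted store is created on first use
fusermount -u ~/private          # unmount when finished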
A good general guide on privacy can be found at eff.org. OS X users should follow Apple's security manual. Linux users can find security notices on the SuSE and Ubuntu security pages. Please also subscribe to the corresponding security mailing lists.
For more detailed security advice, please see our Notes on Security page.
SZFKI System
Skynet
This is our main SGI cluster, installed in May 2011, and a late descendant of the venerable TPA batch systems and the CEDRUS environment.
You can find a large collection of documents and manuals on using SGI systems in SGI's Techpubs Library. We recommend the application tuning and SGI MPT sections.
Specification
- CPU: Intel Xeon E5620 @ 2.40GHz SMT off
- Cores per CPU: 4
- Available Memory per Node: 34GB
- Architecture: x86_64 / intel64 / em64t little-endian
- Total number of nodes: 26
- Allocation Unit: 1 node (8 cores)
- Scheduler: Slurm
- Purpose: Materials Science and Nanotechnology
Projects
Project ID | Priority | Description | Participants | Collaborators |
---|---|---|---|---|
diamond | high | | Hugo Pinto, Tamás Simon | |
sic | normal | | Viktor Ivády, Krisztián Szász, Bálint Somogyi, Tamás Hornos | |
diavib | low | | Márton Vörös, Tamás Demjén | |
solar | low | | Márton Vörös | |
Shell Environment
Setup the Shell
You can fine-tune the shell by defining aliases and functions for frequently used commands. The system default setup can be sourced in $HOME/.profile by:
source /data1/ngb/site/bin/skynet
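For example, a few illustrative aliases and a helper function you could add to $HOME/.profile (the names are only suggestions):
alias ll='ls -lh'                      # long, human-readable listing
alias myjobs='squeue -u $USER'         # your pending and running Slurm jobs
mkcd() { mkdir -p "$1" && cd "$1"; }   # create a directory and enter it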
Install Shpak
Shpak is our ultimate shell toolkit. Regular job submission is supported only through Shpak. It provides a unified and clean interface for various queue systems and simulation programs. To install it, clone the repository with git in your $HOME:
git clone git://github.com/hornos/shpak.git
and set the path in your $HOME/.profile:
PATH=$PATH:$HOME/shpak/bin
Install Shpak (full)
Alternatively, you can use our 3-component package group for ab initio people:
git clone git://github.com/hornos/shpak.git
git clone git://github.com/hornos/qcpak.git
git clone git://github.com/hornos/pypak.git
In this case you only have to source qcpak in your $HOME/.profile:
source $HOME/qcpak/qcpakrc &> /dev/null
PS1_HOST=SKYNET
ps1
The last two commands set the prompt with the PS1_HOST label. You can switch between the simple and the advanced version with the ps1 command.
Storing Files
Always use compression to save storage space. You can use the parallel compressor programs pigz or pbzip2 as replacements for gzip or bzip2, respectively. You can configure Shpak to use these programs by creating a user-defined config file at $HOME/shpak/lib/z/usr.cfg with the following content:
sp_b_z="pigz -9 -f"
sp_b_uz="pigz -f -d"
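For reference, the parallel compressors can also be called directly; a quick sketch with illustrative file names:
pigz -9 -f OUTCAR                               # compress a single file in place
tar -cf - results/ | pigz -9 > results.tar.gz   # pack and compress a directory
pigz -d results.tar.gz                          # decompress back to results.tar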
Transferring Files
Regular users can use Shpak's transfer capabilities between xfer directories. First, create the remote xfer directory on each machine:
sshcmd -m <MID> -x "mkdir xfer"
Copy to the remote machine, where <SOURCE> is the local path to be copied:
sshpush -m <MID> -s <SOURCE>
Copy from the remote machine:
sshpull -m <MID> -s <SOURCE>
The transfer mode can be set with -t <MODE>, where <MODE> is one of:
1. Regular SCP transfer
2. Tar the files and use SCP for the transfer
3. Rsync over SSH
To sync large amounts of data, use transfer mode 3.
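For example, a hedged sketch of pushing a large local directory with Rsync over SSH (the path is illustrative; <MID> as above):
sshpush -m <MID> -s results/ -t 3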
Users in the fuse group can mount remote systems with sshfs. First, prepare the local sshfs directory:
mkdir /local/home/${USER}/sshfs
ln -s /local/home/${USER}/sshfs ${HOME}/sshfs
Mounting a remote machine:
sshmnt -m <MID>
Unmounting:
sshumnt -m <MID>
After an unclean unmount a lock remains that prevents further mounts. Use the -f option to clear the lock.
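A hedged example, assuming -f can simply be added to the usual unmount call (the exact flag order may differ in your Shpak version):
sshumnt -m <MID> -f   # force-clear a stale lock after an unclean unmount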
Storage
Enable the ESZR-related modules in your $HOME/.modenv:
module use /site/eszr/mod/common
module load eszr/site
module load eszr/sys/skynet
Check ESZR environment by:
eszr_info
Your storage data directory is $ESZR_DATA. Storage is accessible only from the login node:
cd $ESZR_DATA
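For example, to pack a run directory with pigz and put the archive on storage (myrun is just a placeholder name):
tar -cf - myrun/ | pigz -9 > $ESZR_DATA/myrun.tar.gz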
Backup
By omitting the xfer directory in a machine's MID file (the sp_p_scp_remote variable) you can transfer files or directories directly to or from your remote home. To synchronize from a remote machine to your current directory on the local machine:
sshsync -m <MID> -s "<SOURCES>"
where <SOURCES> is a space-separated list of files or directories in your remote home. To synchronize local files or directories from your current directory to a remote machine:
sshsync -p -m <MID> -s "<SOURCES>"
sshsync uses Rsync over SSH (transfer mode 3), which is best for large backups. You do not have to specify MID and SOURCES every time if you create a .sshsync file with the following content:
_m=<MID>
_s="<SOURCES>"
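With such a .sshsync file in the current directory, the defaults should be picked up automatically; presumably a plain call is then enough (this is an assumption, not verified here):
sshsync        # pull using the _m and _s defaults
sshsync -p     # push instead of pull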
For a full remote home backup, omit <REMOTE USER>/xfer in the MID file and run:
sshsync -m <MID> -s <REMOTE USER>
Large backups take a long time the first time. You can use Shpak's screen wrapper to open a new screen shell session and let the backup run uninterrupted. To open or reopen a screen session:
scrsel
Leave the session with Ctrl+A Ctrl+D. To re-enter, type the command again and select a session.
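A typical backup session therefore looks like this (commands as documented above):
scrsel                             # open or reopen a screen session
sshsync -m <MID> -s "<SOURCES>"    # start the backup inside the session
# detach with Ctrl+A Ctrl+D; the transfer keeps running in the detached session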
Module System
To activate the module system, either type modenv or create the $HOME/.modenv file with your desired module commands, e.g.:
module use /data1/ngb/modulefiles
module load sys/skynet
module load env/sgi
Basic Module Commands
Short alias commands are also available if you use Shpak. The module system resolves dependencies and conflicts automatically, so explicit unloading is not necessary.
Command | Shpak alias | Description |
---|---|---|
module avail | mla | Show available modules |
module list | mls | List loaded modules |
module display | mdp | About the module |
module load/unload <MODULE> | mld/mlu <MODULE> | Load / unload a module |
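For example, using the Shpak aliases with a module that appears later in this guide:
mla            # module avail: show available modules
mld env/sgi    # module load:  load the SGI environment module
mls            # module list:  list the currently loaded modules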
MPI Subsystem
The recommended MPI subsystem is SGI MPT. You can check the current MPI subsystem either with module list or with which mpirun. MPI environments are mutually exclusive and the environment modules unload each other.
SGI MPT
Versions with the .mpt suffix are compiled with SGI MPT. You can load the environment and the corresponding program version with:
module load env/sgi
module load <PROGRAM>/<VERSION>.mpt
If there is a tuned configuration for <PROGRAM>, please load the tuner configuration as well:
module load <PROGRAM>/tuner.mpt
Jobfile
Enable SGI MPT in the jobfile:
MPIRUN="mpt"
and also set the following for binding:
MPT_BIND="dplace -s 1"
Scheduler
The job scheduler is Slurm. In Slurm each user is assigned one or more accounts, which you have to set in the queue file.
General information about the partitions:
sinfo -l
Partition | Allowed Groups | Purpose |
---|---|---|
service | pexpress | Small and short jobs / Development |
batch | pbatch | General production |
General information on jobs:
squeue -l
List your jobs:
myqstat
Pending job priorities:
sprio -l
Slurm accounts and priorities:
sshare -l
Job Submission
Shpak has a wrapper for job submission as well as for running various programs. You need three components:
- Queue file: permanent parameters for the scheduler and the queue
- Job file: job specific resource parameters
- Guide file: program-specific parameters for Shpak's runprg command
Queue file
Queue files are in $HOME/shpak/que/. You have to create a queue file for each scheduler/queue pair only once.
Key | Value | Description |
---|---|---|
SCHED | slurm / sge / pbs | Scheduler type |
QUEUE_MAIL_TO | your@email | Your email address |
QUEUE_MAIL | ALL / abe | Set scheduler email notifications |
QUEUE_PROJECT | string | Slurm account (project id) |
QUEUE_CONST | list | Space-separated list of constraints |
QUEUE_PART | string | Slurm partition |
QUEUE_QUEUE | string | Scheduler queue |
QUEUE_QOS | string | QOS type |
QUEUE_SETUPS | string | Space-separated list of queue setup scripts |
QUEUE_ULIMIT | commands | Ulimit commands |
Example queue file ($HOME/shpak/que/example):
SCHED="slurm" QUEUE_MAIL_TO="<YOUR@EMAIL>" QUEUE_MAIL="ALL" QUEUE_PROJECT="<PROJECT ID>" QUEUE_CONST="ib" QUEUE_SETUPS="/data1/ngb/site/bin/jobsetup" QUEUE_ULIMIT="ulimit -s unlimited; ulimit -l unlimited"
Job file
You need a job file to submit a job with the jobsub command. The job file refers to the queue file discussed above.
Key | Value | Description |
---|---|---|
NAME | string | Job name |
TIME | hh:mm:ss | Wallclock time limit |
MEMORY | integer | Memory per core limit in MB |
NODES | integer | Number of nodes |
SCKTS | integer | Number of CPU sockets |
CORES | integer | Number of cores per socket |
QUEUE | string | Name of the queue file |
MPIRUN | mpt / ompi / impi | MPI subsystem type |
MPT_BIND | omplace / dplace | SGI MPT CPU bind command (dplace or omplace) |
OMPI_BIND | -by* -bind-to-* | Open MPI CPU bind options |
SETUPS | list | Space-separated list of custom setup scripts |
COMMAND | command | Command or script to submit |
Example job file:
NAME=test
TIME=00:30:00
MEMORY=2000
NODES=2
SCKTS=2
CORES=4
QUEUE=example
MPIRUN="mpt"
MPT_BIND="dplace -s 1"
COMMAND="runprg -p vasp -g vasp.guide"
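Submission itself goes through jobsub; its exact options are not documented here, so the call below is only a guess that the job file is passed as an argument (check Shpak's own help for the real syntax):
jobsub test.job   # hypothetical invocation for the example job file above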
NIIF Systems
Debrecen
This is an SGI Altix ICE8400EX system maintained by NIIF.
Specification
- CPU: Intel Xeon X5680 @ 3.33GHz SMT on
- Cores per CPU: 6
- Available Memory per Node: 48GB
- Architecture: x86_64 / intel64 / em64t little-endian
- Allocation Unit: 1 core
- Scheduler: SGE
- Purpose: General
Shell Environment
Setup the Shell
We have a prepared and precompiled environment which you can sync from Skynet. Follow these steps:
1. Synchronize the environment
cd /share/niif/debrecen.legacy
sshsync -p -m <DEBRECEN MID>
2. Log in to the Debrecen machine and enable git
PATH=$PATH:$HOME/local/usr/bin
3. Install paks
git clone git://github.com/hornos/shpak.git
git clone git://github.com/hornos/qcpak.git
git clone git://github.com/hornos/pypak.git
4. Relogin
Module System
To activate the module system, edit your $HOME/.profile:
module purge
module use ${HOME}/local/modulefiles
module load sys/debrecen
module load env/sgi
# other modules ...
Scheduler
The job scheduler is SGE.
General information about the queues:
qstat -f -g c
General information on jobs:
qstat
Job submission
If you use Shpak, job submission is almost the same as on Skynet. The differences are the following.
Queue file
An example queue file:
SCHED="sge" QUEUE_MAIL_TO=<YOUR@EMAIL> QUEUE_MAIL="abe" QUEUE_QUEUE="debrecen.q" QUEUE_PE="mpi" QUEUE_EXCLUSIVE="yes" QUEUE_SHELL="/bin/bash" QUEUE_OPTS="-cwd -j y -V" QUEUE_SETUPS="${HOME}/local/bin/jobsetup"
Job file
Comment out the MEMORY requirement:
# MEMORY=...
This machine has 6 cores per socket:
CORES=6
Szeged
This is an HP CP4000BL blade cluster system maintained by NIIF.
Specification
- CPU: AMD Opteron 6174 @ 2.20 GHz
- Cores per CPU: 12
- Available Memory per Node: 124GB
- Architecture: x86_64 / intel64 / em64t little-endian
- Total number of nodes: 48
- Allocation Unit: 1 core
- Scheduler: SGE
- Purpose: General
Shell Environment
Setup the Shell
We have a prepared and precompiled environment which you can sync from Skynet. Follow these steps:
1. Synchronize the environment
cd /share/niif/szeged.legacy
sshsync -p -m <SZEGED MID>
2. Log in to the Szeged machine and enable git
PATH=$PATH:$HOME/local/bin
3. Install paks
git clone git://github.com/hornos/shpak.git
git clone git://github.com/hornos/qcpak.git
git clone git://github.com/hornos/pypak.git
4. Relogin
Module System
To activate the module system, edit your $HOME/.bash_profile:
module purge
module use ${HOME}/local/modulefiles
module load sys/szeged
module load env/intel
# other modules ...
Scheduler
The job scheduler is SGE.
General information about the queues:
qstat -f -g c
General information on jobs:
qstat
Job submission
If you use Shpak, job submission is almost the same as on Skynet. The differences are the following.
Queue file
An example queue file:
SCHED="sge" QUEUE_MAIL_TO=<YOUR@EMAIL> QUEUE_MAIL="abe" QUEUE_QUEUE="szeged.q" QUEUE_PE="mpi" QUEUE_EXCLUSIVE="yes" QUEUE_SHELL="/bin/bash" QUEUE_OPTS="-cwd -j y -V" QUEUE_SETUPS="${HOME}/local/bin/jobsetup"
Job file
Comment out the MEMORY requirement:
# MEMORY=...
This machine has 12 cores per socket and 4 CPUs per node:
NODES=1
SCKTS=4
CORES=12
Pécs
This is an SGI Altix UV system maintained by NIIF.
Specification
- CPU: Intel Xeon X7542 @ 2.67GHz SMT off
- Cores per CPU: 6
- Available Memory: 6TB
- Architecture: x86_64 / intel64 / em64t little-endian
- Total number of CPUs: 1151
- Allocation Unit: 1 core
- Scheduler: SGE
- Purpose: General
Shell Environment
Setup the Shell
NSC Systems
Neolith
This is a Linux cluster maintained by NSC.
Specification
- CPU: Intel Xeon E5345 @ 2.33 GHz SMT off
- Cores per CPU: 4
- Available Memory per Node: 16/32GB
- Architecture: x86_64 / intel64 / em64t little-endian
- Total number of nodes: 805
- Allocation Unit: 1 core
- Scheduler: Slurm
- Purpose: General
- User Guide
Shell Environment
Setup the Shell
We have a prepared and precompiled environment which you can sync from Skynet. Follow these steps:
1. Synchronize the environment
cd /share/nsc/neolith.nsc.liu.se
sshsync -p -m <NEOLITH MID>
2. Log in to the Neolith machine and enable git
PATH=$PATH:$HOME/local/usr/bin
3. Install paks
git clone git://github.com/hornos/shpak.git
git clone git://github.com/hornos/qcpak.git
git clone git://github.com/hornos/pypak.git
4. Relogin
Module System
NSC uses a rather old module system. Please use the provided .bash_profile or refer to the User Guide if you have special needs.
Scheduler
The job scheduler is Slurm. For basic commands refer to the User Guide.
Job submission
If you use Shpak, job submission is almost the same as on Skynet. The differences are the following.
Queue file
An example queue file:
SCHED="slurm" QUEUE_MAIL_TO=<YOUR@EMAIL> QUEUE_MAIL="ALL" QUEUE_PROJECT="<PROJECTID>" QUEUE_SETUPS="${HOME}/local/bin/jobsetup"
PRACE Systems
PRACE systems use certificate-based SSH from the Globus Toolkit. You can load it on Skynet with:
mld globus/5.0.4
Static binaries for your local machine can be downloaded from LRZ. Shpak commands starting with gssh employ the Globus method. Currently, FUSE sshfs mounts are not supported with certificates, so you have to use push, pull or sync (see Transferring Files).
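The Globus counterparts are presumably named after the regular transfer commands; gsshsync is used in the Huygens setup below, while gsshpush and gsshpull are assumed analogues:
gsshpush -m <MID> -s <SOURCE>         # copy to a PRACE machine (assumed analogue of sshpush)
gsshpull -m <MID> -s <SOURCE>         # copy from a PRACE machine (assumed analogue of sshpull)
gsshsync -p -m <MID> -s "<SOURCES>"   # Rsync over SSH with certificate authentication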
Huygens
This is an IBM Power6 cluster maintained by SARA.
Setup the Shell
We have a prepared and precompiled environment which you can sync from Skynet. Follow these steps:
1. Synchronize the environment
cd /share/prace/huygens.sara.nl
gsshsync -p -m <HUYGENS MID>
2. Install paks
git clone git://github.com/hornos/shpak.git
git clone git://github.com/hornos/qcpak.git
git clone git://github.com/hornos/pypak.git
3. Relogin
PRACE Environment
The prace module defines global variables which you can use on all sites. Check the variables by:
set | grep PRACE
For PRACE-related services (e.g. internal GridFTP) there is a utility script. Check its help with:
prace_service -h