Debrecen

This system is maintained by NIIF.

Specification

Type              SGI ICE8400EX
CPUs per node     2
Cores per CPU     6
Memory per node   47 GB
Memory per core   3.9 GB
CPU               Intel Xeon X5680 @ 3.33 GHz, SMT on
Architecture      x86_64 / intel64 / em64t, little-endian
Scheduler         Slurm
MPI               SGI MPT (mpt), Intel MPI (impi), Open MPI (ompi)
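
Since the scheduler is Slurm, you can confirm the figures above and see the available partitions directly on a login node with the standard Slurm query commands (the "prod" partition name comes from the job examples further down):

 # List partitions with node count, CPUs per node and memory per node
 sinfo -o "%P %D %c %m"

 # Show the limits (time limit, node range) of a given partition
 scontrol show partition prod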

Logging in

Set up SSH access to Debrecen from Skynet and mount its storage on Skynet. Log in to Skynet and type:

 cd ~/shf3/mid/ssh
 cp niif/debrecen debrecen
 cd ~/shf3/key/ssh

Then place your private NIIF key in this directory and rename it:

 mv <YOUR_NIIF_KEY> debrecen.sec
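
SSH typically refuses private keys that are readable by other users, so make sure the file permissions are strict:

 chmod 600 debrecen.sec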

If you used PuTTYgen to generate the keypair, you might need to export the private key from PuTTY in OpenSSH format.
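
In the PuTTYgen GUI this is Conversions → Export OpenSSH key. On a Linux machine with the PuTTY command-line tools installed, the conversion looks roughly like this (the .ppk file name is just a placeholder):

 # Convert a PuTTY .ppk private key to OpenSSH format
 puttygen mykey.ppk -O private-openssh -o debrecen.sec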

Precompiled Environment

From Skynet, sync up the precompiled environment:

 cd /share/niif/debrecen
 sshput -m debrecen -s .

Log in to Debrecen (you can also use PuTTY or another SSH client if you prefer not to log in to Debrecen from Skynet):

 sshto -m debrecen

Then add the following to your .profile:

 PATH=$PATH:$HOME/bin

VASP

We use Intel MPI for VASP. The SGI MPT build might also work, but it can break nodes with unkillable orphan processes, so beware. Please stick to the impi build supplied here if possible.

First, transfer the compiled VASP binary and the projectors from Skynet. Log in to Skynet:

 cd /share/niif/
 sshput -t 2 -m debrecen -s vasp/5.4.1.03082016.impi
 sshput -t 2 -m debrecen -s vasp/proj
 echo -e "\nexport VASP_PROJ_HOME=$HOME/vasp/proj" >> $HOME/.profile
 

Log in to Debrecen:

Then add the following to your .profile:

 PATH=$PATH:$HOME/bin
 export VASP_PROJ=$HOME/vasp/proj
 export VASP_PROJ_HOME=$HOME/vasp/proj
 alias mc=". /usr/share/mc/bin/mc-wrapper.sh"
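
To check that the new settings are picked up without logging out again, you can re-read the profile and inspect the variables:

 . $HOME/.profile
 echo $VASP_PROJ_HOME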


You can find a sample job for an ozone molecule in $HOME/jobsamples:

 cd $HOME/jobsamples/ozone

Please fill in your email address in the debrecen_impi job file,

 mcedit debrecen_impi

then submit it:

 sbatch debrecen_impi

This job should finish in a few seconds. For actual large-scale runs, change the partition from "test" to "prod":

 #SBATCH --partition=prod

You can easily increase the number of nodes:

 #SBATCH --nodes=4
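
For orientation, a minimal sketch of what an Intel MPI Slurm job file for VASP on this machine might look like is shown below. The module name, the VASP binary name, and the tasks-per-node value are assumptions, not the exact contents of the supplied debrecen_impi file.

 #!/bin/bash
 #SBATCH --job-name=ozone
 #SBATCH --partition=test              # use "prod" for real runs
 #SBATCH --nodes=1
 #SBATCH --ntasks-per-node=12          # 2 CPUs x 6 cores per node (see Specification)
 #SBATCH --time=00:30:00
 #SBATCH --mail-type=END,FAIL
 #SBATCH --mail-user=you@example.org   # your address here

 # Load Intel MPI (module name is an assumption, check "module avail")
 module load impi

 # Run the VASP binary synced from Skynet (binary name is an assumption)
 mpirun -np $SLURM_NTASKS $HOME/vasp/5.4.1.03082016.impi/vasp_std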

Job Monitoring