Resources available on Avogadro

Hardware

The Avogadro computational cluster comprises a total of 145 computational nodes, with 3120 CPUs, 5.4 TB of RAM and 96 TB of storage. The cluster includes several different node types, organized in homogeneous groups, each named after a winner of the Nobel Prize in Chemistry (with the exception of Stanislao Cannizzaro). Currently there are 11 groups, named after the following chemists:

  1. Cannizzaro: 7 nodes configured as follows:
    1. Cannizzaro01: 64 AMD Opteron cores, 128 GB of RAM (2 GB/core), 4 NVIDIA Tesla GPUs
    2. Cannizzaro02: 64 AMD Opteron cores, 128 GB of RAM (2 GB/core), 4 NVIDIA Tesla GPUs
    3. Cannizzaro03: 16 Intel cores, 128 GB of RAM (8 GB/core), 4 NVIDIA Tesla GPUs
    4. Cannizzaro04: 24 Intel cores, 64 GB of RAM (2.6 GB/core), 4 NVIDIA Tesla GPUs
    5. Cannizzaro05: 24 Intel cores, 64 GB of RAM (2.6 GB/core), 4 NVIDIA Tesla GPUs
    6. Cannizzaro06: 64 AMD Opteron cores, 128 GB of RAM (2 GB/core), 4 NVIDIA Tesla GPUs
    7. Cannizzaro07: 64 AMD Opteron cores, 128 GB of RAM (2 GB/core), 3 NVIDIA Tesla GPUs
  2. Curie: 28 nodes, 16 Intel Xeon cores, 64 GB of RAM (4 GB/core)
  3. Hoffmann: 24 nodes, 16 Intel Xeon cores, 128 GB of RAM (8 GB/core)
  4. IIT: 24 nodes, 12 Intel Xeon cores, 24 GB of RAM (2 GB/core)
  5. Kohn: 4 nodes, 12 Intel Xeon cores, 24 GB of RAM (2 GB/core)
  6. Lee: 8 nodes, 24 Intel Xeon cores, 128 GB of RAM (5.3 GB/core)
  7. Pople: 14 nodes, 64 AMD Opteron Cores, 128 GB of RAM (2 GB/core)
  8. Vanthoff: single SGI Ultraviolet 2000 node, 240 Intel Xeon cores, 6 TB of RAM (25 GB/core)
  9. Zewail: 6 nodes, 8 Intel Xeon cores, 12 GB of RAM (1.5 GB/core)

Information about the specific features of each group can be found by logging in to the cluster web portal and opening the Nodes tab. The page also shows the current availability of each server. Please select nodes and queues according to the actual requirements of your calculation, and avoid crowding only onto the newest nodes.

Cluster web portal: https://avogadro.sns.it/userportal/login.php?redirect=%2Fuserportal%2Findex.php
Nodes page: https://avogadro.sns.it//userportal/nodes.php

Local Scratch Areas

Every user has a scratch space on every slave node in /local/scratch/<username>. You can check the total amount of free space in a local scratch area with the following command:

$ df -h
Filesystem        	Size  Used Avail Use% Mounted on
/dev/sda1          	19G  2.6G   16G  15% /
none              	5.9G 	0  5.9G   0% /dev/shm
/dev/sda6         	193G   48G  136G  26% /local ← THIS IS THE LOCAL SCRATCH!
/dev/sda3         	1.9G   35M  1.8G   2% /tmp
/dev/sda2         	1.9G  585M  1.3G  33% /var
10.0.1.250:/HPC/share/hpc
                   	43T   13T   31T  30% /share
10.0.1.250:/HPC/share/hpc
                   	43T   13T   31T  30% /cm/shared
10.0.1.250:/HPC/home/chemistry
                   	43T   13T   31T  30% /home
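
If you are only interested in the scratch filesystem, you can pass the mount point directly to df and get just the /local line shown above:

$ df -h /local
Filesystem        	Size  Used Avail Use% Mounted on
/dev/sda6         	193G   48G  136G  26% /local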

To check instead how much space you are currently using:

$ pwd
/local/scratch/g.mancini

$ du -hs
7.5G    .
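
If the total is larger than expected, a quick way to see which subdirectories take up the space is to sort a per-directory summary (this assumes a GNU sort recent enough to support -h; with older versions use du -s * | sort -n instead):

$ cd /local/scratch/$USER
$ du -hs * | sort -h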

Note that Gaussian produces very large temporary files, so you must periodically clean unnecessary files out of your scratch area. “Periodically” means at least once a month; otherwise the staff will take care of it, removing files without further notice.
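
A possible way to do this cleanup by hand, assuming GNU find and taking 30 days as the age threshold, is to list the stale files first and delete them only once you are sure nothing important is in the list:

$ find /local/scratch/$USER -type f -mtime +30           # list files older than 30 days
$ find /local/scratch/$USER -type f -mtime +30 -delete   # remove them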

Ramdisk Scratch Areas - SGI only

In addition to the regular scratch disk, the SGI nodes Vanthoff and Pauling also have a fast scratch area mounted under /local/ramdisk. This is a volatile disk space that uses a portion of the machine's RAM, useful for faster I/O. Be aware, however, that all files saved here will disappear if the node is turned off or in case of power loss.
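
As an illustrative sketch only (whether you should create a per-user subdirectory under /local/ramdisk is an assumption to verify with the staff), a Gaussian job could point its scratch there via the standard GAUSS_SCRDIR variable, and should free the space as soon as it finishes, since the files live in RAM:

$ mkdir -p /local/ramdisk/$USER              # assumed per-user layout
$ export GAUSS_SCRDIR=/local/ramdisk/$USER   # standard Gaussian scratch variable
$ # ... run the calculation ...
$ rm -rf /local/ramdisk/$USER                # frees the node's memory when you are done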

Software

Various software packages are available on the cluster; you can see the full list with the command module avail:

$ module avail

[ ... output cut to shorten it ... ]
---------------------------------------------------------------- /cm/shared/modulefiles/software ---------------------------------------------------------------------
amber/amd/12                    dalton/intel/2016-mpi           gromacs/intel/4.6.5             molpro/2015.1                   namd/intel/2.9b3-cuda55
amber/intel/12                  dalton/intel/2016-mpi-int64     gromacs/intel/4.6.5-icc         namd/amd/2.10                   namd/intel/2.9b3-plumed
amber/intel/12-patched          dalton/intel/2016-mpi-mkl       gromacs/nehalem/5.0.5           namd/amd/2.11b1                 nwchem/intel/6.6
ambertools/amd/gnu/16           dalton/intel/2016-sgi           gromacs/sandybridge/5.0.5       namd/amd/2.12                   orca/3.0.2
ambertools/intel/gnu/14         dalton/intel/2016-sgi-mkl       gromacs/sandybridge/5.1.4       namd/amd/2.12pre                orca/4.0.0.2
ambertools/intel/gnu/16         espresso/bulldozer/5.2.0        gromacs/sandybridge/5.1.4-debug namd/amd/2.9b3                  orca/default
autodock/4.2.6                  espresso/nehalem/5.2.0          gromacs/sgi/4.6.5               namd/amd/2.9b3-cuda55           pbspro/12.2.4.142262
cfour/binaries                  espresso/sandybridge/5.2.0      gromacs/sgi/5.0                 namd/intel/2.10                 plumed/2.1.2
cfour/vanthoff                  gamess/intel                    gromacs/sgi/5.0-mpi             namd/intel/2.11b1               psi4/1.0
cmgui/7.0                       grace/5.1.25                    gromacs/sgi/5.1.4-mpi           namd/intel/2.12                 python/2.7.6
cp2k/intel                      graphviz/2.38                   hpl/2.1                         namd/intel/2.12pre              python/3.4.0
cp2k/intel-4.1                  gromacs/bulldozer/4.6.5         lammps/intel/stable             namd/intel/2.12pre-MKL          python/intel2.7
cp2k/vanthoff                   gromacs/bulldozer/5.0.5         lammps/intel/stable-cuda        namd/intel/2.12pre-vanthoff     python/intel3.5
cp2k/vanthoff-openmpi           gromacs/cuda/4.6.5              molcas/gcc/78                   namd/intel/2.12-vanthoff
dalton/intel/2013               gromacs/cuda/4.6.5-amd          moldy/intel/2.16e               namd/intel/2.9b3
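
To narrow the listing down to a single package you can pass a name (or name prefix) to module avail, and module show displays exactly what loading a given module would change in your environment:

$ module avail gromacs
$ module show gromacs/sandybridge/5.0.5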

Use module load modulename to load a specific module; this will set up your shell environment to use that particular piece of software. For example, module load gromacs/sandybridge/5.0.5 will load the software needed to run this version of Gromacs, such as OpenMPI and MKL. It will also add the Gromacs 5.0.5 binaries to your PATH, set the corresponding library and man paths, and define other variables specific to this software.
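
A typical interactive sequence, using the standard module list, unload and purge subcommands, looks like this:

$ module load gromacs/sandybridge/5.0.5
$ module list                               # shows this module plus anything it pulled in
$ module unload gromacs/sandybridge/5.0.5   # remove just this module
$ module purge                              # or remove every module currently loaded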

The module load line will usually go in the script given to qsub to submit your job to PBS; more information on PBS is available here.
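
As a minimal sketch of such a script (the script name job.sh, the resource request, the walltime and the mdrun input are illustrative assumptions, and the executable name depends on how the Gromacs module was built):

$ cat job.sh
#!/bin/bash
# the resource request and walltime below are illustrative; adjust them to your job
#PBS -N gromacs_example
#PBS -l select=1:ncpus=16:mpiprocs=16
#PBS -l walltime=24:00:00

cd $PBS_O_WORKDIR                        # start from the directory you submitted from
module load gromacs/sandybridge/5.0.5    # set up the environment inside the job
# the executable name and the input file below are placeholders
mpirun -np 16 gmx_mpi mdrun -s topol.tpr

$ qsub job.sh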

Compile your software

If you want to compile your own software, you first have to load the appropriate compiler module (PGI or GCC) and, if needed, parallel libraries (e.g. OpenMPI) and mathematical libraries (e.g. MKL, FFTW), in the same way you load a module for pre-installed software. For example:

$ module load gcc/6.1.0

will load a set of variables into your current environment, pointing to the chosen compiler and its associated tools. Modules can be loaded and unloaded dynamically on the computing nodes. Suppose you want to compile a Fortran program with the Portland (PGI) compiler: by default your path does not include it, as you can verify with env | grep -i pgi, which gives no output because no PGI-related variables are set. Once you load the PGI module:

$ module load pgi

$ env | grep -i pgi
MANPATH=/cm/shared/pgi_server/linux86-64/12.5/man:ignore:/cm/local/apps/environment-modules/3.2.6/man:/cm/shared/apps/pbspro/11.2.0.113417/man
LD_LIBRARY_PATH=/cm/shared/pgi_server/linux86-64/12.5/lib:/cm/shared/pgi_server/linux86-64/12.5/libso:/cm/shared/apps/gcc/4.7.0/lib:/cm/shared/apps/gcc/4.7.0/lib64:/cm/shared/apps/pbspro/11.2.0.113417/lib/
CPP=/cm/shared/pgi_server/linux86-64/12.5/bin/pgcpp
PGI=/cm/shared/pgi_server
PATH=/cm/shared/pgi_server/linux86-64/12.5/bin:/cm/shared/apps/gcc/4.7.0/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/sbin:/usr/sbin:/cm/shared/apps/pbspro/11.2.0.113417/bin:/cm/shared/apps/pbspro/11.2.0.113417/sbin:/home/m.martino/bin
F90=/cm/shared/pgi_server/linux86-64/12.5/bin/pgf90
_LMFILES_=/cm/shared/modulefiles/gcc/4.7.0:/cm/shared/modulefiles/pbspro/11.2.0.113417:/cm/shared/modulefiles/pgi/12.5
LOADEDMODULES=gcc/4.7.0:pbspro/11.2.0.113417:pgi/12.5
F77=/cm/shared/pgi_server/linux86-64/12.5/bin/pgf77
CXX=/cm/shared/pgi_server/linux86-64/12.5/bin/pgcpp
FC=/cm/shared/pgi_server/linux86-64/12.5/bin/pgfortran
CC=/cm/shared/pgi_server/linux86-64/12.5/bin/pgcc

As you can see, loading the module has set many environment variables. Now you are ready to compile with pgf77:

$ pgf77 helloworld.f77
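
For completeness, a minimal FORTRAN 77 source file for this test could look as follows; since no -o option is given above, pgf77 writes the executable to a.out:

$ cat helloworld.f77
      program hello
      print *, 'Hello from Avogadro'
      end
$ ./a.out
 Hello from Avogadro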

User support

If you notice any problem, please contact the staff by writing to Avogadro Staff. Please contact the staff as a whole and NOT a single member of it; if your problem should be handled by someone in particular, we'll let you know.
The Avogadro Users mailing list is used to send out information about cluster news and maintenance. Please also make sure to provide an email address (one you actually use) to be added to the mailing list.