High Performance Computing on Gentoo

This document was written by people at the Adelie Linux R&D Center as a step-by-step guide to turn a Gentoo System into a High Performance Computing (HPC) system.

Introduction
Gentoo Linux is a special flavor of Linux that can be automatically optimized and customized for just about any application or need. Extreme performance, configurability and a top-notch user and developer community are all hallmarks of the Gentoo experience.

Thanks to a technology called Portage, Gentoo Linux can become an ideal secure server, development workstation, professional desktop, gaming system, embedded solution or... a High Performance Computing system. Because of its near-unlimited adaptability, we call Gentoo Linux a metadistribution.

This document explains how to turn a Gentoo system into a High Performance Computing system. Step by step, it explains what packages one may want to install and helps configure them.

Obtain Gentoo Linux from the website http://www.gentoo.org/, and refer to the documentation at the same location to install it.

Recommended Optimizations
During the installation process, you will have to set your USE variables in /etc/make.conf. We recommend that you deactivate all the defaults (see /etc/make.profile/make.defaults) by negating them in /etc/make.conf. However, you may want to keep USE flags such as 3dnow, gpm, mmx, nptl, nptlonly, sse, ncurses, pam and tcpd. Refer to the USE documentation for more information.

Or simply negate everything at once and list only the flags you want (a sketch; adjust the flag list to your hardware and needs):
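
# /etc/make.conf
USE="-* 3dnow gpm mmx ncurses nptl nptlonly pam sse tcpd"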

In step 15 ("Installing the kernel and a System Logger") for stability reasons, we recommend the vanilla-sources, the official kernel sources released on http://www.kernel.org/, unless you require special support such as xfs.
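
For example:

# emerge vanilla-sources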

When you install miscellaneous packages, we recommend installing the following (one plausible set, matching the services configured later in this guide):
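
# emerge nfs-utils dhcp ntp xinetd netkit-rsh iptables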

Communication Layer (TCP/IP Network)
A cluster requires a communication layer to interconnect the slave nodes with the master node. Typically, a Fast Ethernet or Gigabit Ethernet LAN is used, since these have a good price/performance ratio. Other possibilities include specialized interconnects such as Myrinet or QsNet.

A cluster is composed of two node types: master and slave. Typically, your cluster will have one master node and several slave nodes.

The master node is the cluster's server. It is responsible for telling the slave nodes what to do. This server will typically run such daemons as dhcpd, nfs, pbs-server, and pbs-sched. Your master node will allow interactive sessions for users, and accept job executions.

The slave nodes listen for instructions (via ssh/rsh perhaps) from the master node. They should be dedicated to crunching results and therefore should not run any unnecessary services.

The rest of this documentation assumes the cluster configuration shown in the hosts file below. You should maintain on every node a hosts file with an entry for each node participating in the cluster.
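
A minimal example, assuming one master and two slave nodes on a 192.168.1.0/24 LAN (hostnames and addresses are illustrative):

# /etc/hosts
127.0.0.1       localhost

192.168.1.100   master.adelie   master

192.168.1.1     node01.adelie   node01
192.168.1.2     node02.adelie   node02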

To set up your cluster's dedicated LAN, edit your /etc/conf.d/net file on the master node.
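
For example (baselayout-1 syntax; the address matches the hosts file above):

# /etc/conf.d/net
iface_eth0="192.168.1.100 broadcast 192.168.1.255 netmask 255.255.255.0"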

Finally, set up a DHCP daemon on the master node to avoid having to maintain a network configuration on each slave node.
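
A minimal dhcpd.conf sketch (the configuration file location depends on your dhcp version):

# /etc/dhcp/dhcpd.conf
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.1 192.168.1.99;
    option domain-name "adelie";
    option routers 192.168.1.100;
}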

NFS/NIS
The Network File System (NFS) was developed to allow machines to mount a disk partition on a remote machine as if it were on a local hard drive. This allows for fast, seamless sharing of files across a network.

There are other systems that provide similar functionality to NFS which could be used in a cluster environment. The Andrew File System, originally developed at Carnegie Mellon University and later open-sourced by IBM, provides a file sharing mechanism with some additional security and performance features. The Coda File System is still in development, but is designed to work well with disconnected clients. Many of the features of the Andrew and Coda file systems are slated for inclusion in the next version of NFS (version 4). The advantage of NFS today is that it is mature, standard, well understood, and robustly supported across a variety of platforms.

Configure and install a kernel to support NFS v3 on all nodes:
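
# (option names as in 2.4-era kernels; enable server support on the
#  master node and client support on the slave nodes)
CONFIG_NFS_FS=y
CONFIG_NFS_V3=y
CONFIG_NFSD=y
CONFIG_NFSD_V3=y
CONFIG_SUNRPC=y
CONFIG_LOCKD=y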

On the master node, edit your /etc/hosts.allow file to allow connections from slave nodes. If your cluster LAN is on 192.168.1.0/24, your /etc/hosts.allow will look something like:
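
# /etc/hosts.allow
portmap:192.168.1.0/255.255.255.0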

Edit the /etc/exports file of the master node to export a work directory structure (/home is good for this).
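
# /etc/exports
/home/  *(rw)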

Add nfs to your master node's default runlevel:
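
# rc-update add nfs default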

To mount the nfs exported filesystem from the master, you also have to configure your slave nodes' /etc/fstab. Add a line like this one:
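
# /etc/fstab
master:/home/   /home   nfs     rw,exec,noauto,nouser,async     0 0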

You'll also need to set up your nodes so that they mount the nfs filesystem by issuing this command:
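
# mount /home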

RSH/SSH
SSH is a protocol for secure remote login and other secure network services over an insecure network. OpenSSH uses public key cryptography to provide secure authentication. The first step in configuring OpenSSH on the cluster is to generate the key pair: the public key, which is shared with remote systems, and the private key, which is kept on the local system.

For transparent cluster usage, private/public keys may be used. This process has two steps:


 * Generate public and private keys
 * Copy public key to slave nodes

For user based authentication, generate and copy as follows:
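
For example, with a passphrase-less DSA key (convenient on a cluster, at some cost in security; node names match the hosts file above):

# ssh-keygen -t dsa
# scp ~/.ssh/id_dsa.pub node01:.ssh/authorized_keys
# scp ~/.ssh/id_dsa.pub node02:.ssh/authorized_keys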

For host based authentication, you will also need to edit your /etc/ssh/sshd_config:
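
# /etc/ssh/sshd_config
HostbasedAuthentication yes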

And a few modifications to the /etc/ssh/ssh_config file:
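
# /etc/ssh/ssh_config
Host *
    HostbasedAuthentication yes
    EnableSSHKeysign yes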

If your applications require RSH communications, you will need to emerge net-misc/netkit-rsh and sys-apps/xinetd.
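
# emerge netkit-rsh xinetd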

Then configure the rsh daemon. Edit your /etc/xinetd.d/rsh file.
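
Something like the following (the logging options are illustrative):

# /etc/xinetd.d/rsh
service shell
{
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = root
        group           = tty
        server          = /usr/sbin/in.rshd
        log_on_success  = PID HOST USERID EXIT DURATION
        log_on_failure  = USERID ATTEMPT
        disable         = no
}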

Edit your /etc/hosts.allow to permit rsh connections:
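
# /etc/hosts.allow
in.rshd:node01.adelie
in.rshd:node02.adelie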

Or you can simply trust your cluster LAN:
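
# /etc/hosts.allow
in.rshd:192.168.1.0/255.255.255.0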

Finally, configure host authentication in /etc/hosts.equiv.
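
# /etc/hosts.equiv
master
node01
node02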

And, add xinetd to your default runlevel:
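
# rc-update add xinetd default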

NTP
The Network Time Protocol (NTP) is used to synchronize the time of a computer client or server to another server or reference time source, such as a radio or satellite receiver or modem. It provides accuracies typically within a millisecond on LANs and up to a few tens of milliseconds on WANs relative to Coordinated Universal Time (UTC) via a Global Positioning System (GPS) receiver, for example. Typical NTP configurations utilize multiple redundant servers and diverse network paths in order to achieve high accuracy and reliability.

Select an NTP server geographically close to you from the Public NTP Time Servers list, and configure your /etc/conf.d/ntp and /etc/ntp.conf files on the master node.

Edit your /etc/ntp.conf file on the master to set up an external synchronization source:
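
A minimal sketch (replace ntp1.example.org with the server you selected; the last line lets the cluster LAN synchronize against the master):

# /etc/ntp.conf
# external synchronization source
server ntp1.example.org
driftfile /var/lib/ntp/ntp.drift
# allow the cluster LAN to query this node
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap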

And on all your slave nodes, set up your synchronization source to be your master node.
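
# /etc/ntp.conf
# the master node is our only time source
server master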

Then add ntpd to the default runlevel of all your nodes:
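
# rc-update add ntpd default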

IPTABLES
To set up a firewall on your cluster, you will need iptables:
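
# emerge iptables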

Required kernel configuration:
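
# (option names as in 2.4-era kernels)
CONFIG_NETFILTER=y
CONFIG_IP_NF_CONNTRACK=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_MATCH_STATE=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y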

And the rules required for this firewall:
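
One possible policy, as a sketch: trust loopback and the cluster LAN, keep established connections, and drop everything else (addresses are illustrative):

# iptables -P INPUT DROP
# iptables -A INPUT -i lo -j ACCEPT
# iptables -A INPUT -s 192.168.1.0/24 -j ACCEPT
# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# /etc/init.d/iptables save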

Then add iptables to the default runlevel of all your nodes:
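
# rc-update add iptables default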

OpenPBS
The Portable Batch System (PBS) is a flexible batch queueing and workload management system originally developed for NASA. It operates on networked, multi-platform UNIX environments, including heterogeneous clusters of workstations, supercomputers, and massively parallel systems. Development of PBS is provided by Altair Grid Technologies.

Before you start using OpenPBS, some configuration is required. The files you will need to personalize for your system are (paths as typically installed by the openpbs package):

 * /etc/pbs_environment
 * /var/spool/PBS/server_name
 * /var/spool/PBS/server_priv/nodes
 * /var/spool/PBS/sched_priv/sched_config

Here is a sample queue definition, fed to the qmgr tool on the master node (the queue name and node limits are illustrative):
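
# Create and define queue upto4nodes
create queue upto4nodes
set queue upto4nodes queue_type = Execution
set queue upto4nodes Priority = 100
set queue upto4nodes resources_max.nodect = 4
set queue upto4nodes resources_min.nodect = 1
set queue upto4nodes enabled = True
set queue upto4nodes started = True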

To submit a task to OpenPBS, the qsub command is used with some optional parameters. In the example below, "-l" specifies the resources required, "-j oe" joins standard error to standard out, and "-m abe" will e-mail the user at beginning (b), end (e) and on abort (a) of the job. Here, a script (myscript.sh, a placeholder for your own job script) is submitted on 2 nodes:
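
$ qsub -l nodes=2 -j oe -m abe myscript.sh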

Normally jobs submitted to OpenPBS are in the form of scripts. Sometimes, you may want to try a task manually. To request an interactive shell from OpenPBS, use the "-I" parameter.
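
$ qsub -I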

To check the status of your jobs, use the qstat command:
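
$ qstat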

MPICH
Message passing is a paradigm used widely on certain classes of parallel machines, especially those with distributed memory. MPICH is a freely available, portable implementation of MPI, the Standard for message-passing libraries.

The mpich ebuild provided by Adelie Linux allows for two USE flags: doc and crypt. doc will cause documentation to be installed, while crypt will configure MPICH to use ssh instead of rsh.

You may need to export a mpich work directory to all your slave nodes in /etc/exports:
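
# /etc/exports
# assuming parallel jobs run out of /home, as exported earlier
/home/  *(rw)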

Most massively parallel processors (MPPs) provide a way to start a program on a requested number of processors; mpirun makes use of the appropriate command whenever possible. In contrast, workstation clusters require that each process in a parallel job be started individually, though programs to help start these processes exist. Because workstation clusters are not already organized as an MPP, additional information is required to make use of them. MPICH should be installed with a list of participating workstations in the file machines.LINUX in the directory /usr/share/mpich/. This file is used by mpirun to choose processors to run on.

Edit this file to reflect your cluster LAN configuration:
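
# /usr/share/mpich/machines.LINUX
# one host per line; use hostname:n for a node with n processors
node01.adelie
node02.adelie
master.adelie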

Use the script tstmachines in /usr/sbin to ensure that you can use all of the machines that you have listed. This script performs an rsh and a short directory listing; this tests both that you have access to the node and that a program in the current directory is visible on the remote node. If there are any problems, they will be listed. These problems must be fixed before proceeding.

The only argument to tstmachines is the name of the architecture; this is the same name as the extension on the machines file. For example, the following tests that a program in the current directory can be executed by all of the machines in the LINUX machines list.
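
# /usr/sbin/tstmachines LINUX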

The output from this command might look like:
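
Trying true on node01.adelie ...
Trying ls on node01.adelie ...
Trying user program on node01.adelie ...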

If tstmachines finds a problem, it will suggest possible reasons and solutions. In brief, there are three tests:


 * Can processes be started on remote machines? tstmachines attempts to run the shell command true on each machine in the machines file by using the remote shell command.
 * Is the current working directory available to all machines? This attempts to ls a file that tstmachines creates by running ls with the remote shell command.
 * Can user programs be run on remote systems? This checks that shared libraries and other components have been properly installed on all machines.

And finally, the required test for every development tool: compile and run a trivial program.
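
A sketch: build and run a minimal MPI "hello world" on two processors (the source below is illustrative; mpicc and mpirun ship with mpich):

$ cat > hello.c <<'EOF'
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's number */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
$ mpicc -o hello hello.c
$ mpirun -machinefile /usr/share/mpich/machines.LINUX -np 2 ./hello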

For further information on MPICH, consult the documentation at http://www-unix.mcs.anl.gov/mpi/mpich/docs/mpichman-chp4/mpichman-chp4.htm.