OpenAFS

This guide explains how to install an OpenAFS server and client on Gentoo Linux.

About this Document
This document guides you through all the steps necessary to install an OpenAFS server on Gentoo Linux. Parts of this document are taken from the AFS FAQ and IBM's AFS Quick Beginnings guide. There is no point in reinventing the wheel.

What is AFS?
AFS is a distributed filesystem that enables cooperating hosts (clients and servers) to efficiently share filesystem resources across both local area and wide area networks. Clients hold a cache of frequently used objects (files) to speed up access to them.

AFS is based on a distributed filesystem originally developed at the Information Technology Center at Carnegie Mellon University, called the Andrew File System. Andrew was the name of the Carnegie Mellon research project, in honour of its founders. Once Transarc was formed and AFS became a product, the Andrew was dropped to indicate that AFS had gone beyond the scope of the project and had become a mature, exportable filesystem product. Nevertheless, a number of existing cells were using /afs as the root of their filesystem, and at the time changing the root of a filesystem was far from a trivial undertaking. So, to spare the early AFS sites from having to rename their filesystems, AFS was kept both as the name of the filesystem and as the name of its root.

What is an AFS cell?
An AFS cell is a collection of administratively grouped servers that presents a single, cohesive filesystem. Typically, an AFS cell is a set of hosts that use the same domain name (for example gentoo.org). Users log in on AFS client workstations, which request information and files from the cell's servers on behalf of the users. Users don't know which server holds the file they are accessing. They don't even notice if a server is located in the next room, since any volume can be replicated or moved to another server without users being aware of it. The files are always accessible.

What are the benefits of using AFS?
The main strengths of AFS are: its caching facility (on the client side, typically 100 MB to 1 GB), its security features (based on Kerberos 4 and access control lists), its simple addressing (you have only one filesystem), its scalability (you can add servers to your cell as needed) and its communication protocols.

Where can I find more information?
Read the AFS FAQ.

The OpenAFS homepage is at www.openafs.org.

AFS was originally developed by Transarc, which is now owned by IBM. As of April 2005, it has been withdrawn from IBM's product catalogue.

How can I debug problems?
OpenAFS has extensive logging facilities. By default, however, it keeps its own logs instead of using your system's logging facilities. To make the server use the system logger instead, pass the appropriate option to every command.
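The option in question is presumably OpenAFS's -syslog switch, which the server commands accept; this is an assumption, so check the man pages of your version. For example:

```
# Assumption: -syslog makes the BOS Server (and the processes it
# supervises) log via syslogd instead of writing private log files.
bosserver -syslog
```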

Introduction
This section guides you through the process of upgrading an existing OpenAFS installation to OpenAFS version 1.4.0 or later (or 1.2.x starting from 1.2.13; the latter won't be covered specifically, as most people will want 1.4 for, among other things, linux-2.6 support, large file support and bug fixes).

If you have a fresh install of OpenAFS 1.4, you can safely skip this chapter. If you are upgrading from an earlier version, however, we strongly recommend that you follow the guidelines in the next sections. The transition script in the ebuild is designed to assist you in a quick upgrade and restart. Note that it will not (for safety reasons) delete configuration files and startup scripts in the old locations, nor will it automatically change your boot configuration to use the new scripts, etc. If you're not yet convinced: running an old OpenAFS kernel module together with the updated system binaries may make your kernel unstable. So read on for a clean and easy transition.

Differences from previous versions
Traditionally, OpenAFS used the same path conventions as IBM's Transarc labs did, right up until the code forked. Understandably, old AFS setups continue to use these legacy conventions. Newer installations, however, conform to the FHS by using standard locations (as in many Linux distributions). The following table is compiled from the configure script and the README that accompany the OpenAFS distribution tarball:

There are some other oddities, such as the binaries being placed in a different directory in Transarc mode, but this list doesn't pretend to be complete. Rather, it is meant to serve as a reference when troubleshooting the transition of these configuration files.

Also as a result of the path changes, the default disk cache location has changed.

Furthermore, the init script has been split into a client part and a server part. Where you previously had a single script, you now end up with two. Consequently, the configuration file has been split in two as well. Also, the options that turned the client or server on or off have been obsoleted.

Another change to the init script is that it no longer checks your disk cache setup. The old code required that a separate ext2 partition be mounted at the cache location. There were some problems with that:


 * Though it's a very logical setup, your cache doesn't need to be on a separate partition. As long as you make sure that the amount of space specified in the cache configuration really is available for disk cache usage, you're safe. So there is no real problem with having the cache on your root partition.
 * Some people use soft links to point to the real disk cache location. The init script didn't like this, because the cache location then didn't show up where the script looked for mounted partitions.
 * Many prefer ext3 over ext2 nowadays. Both filesystems are valid for use as a disk cache. Any other filesystem is unsupported (in other words: don't try reiserfs; you'll get a big warning and can expect failure afterwards).

Transition to the new paths
First of all, emerging a newer OpenAFS version should not overwrite any old configuration files. The script is designed to not change any files already present on the system. So even if you have a totally messed up configuration with a mix of old and new locations, the script should not cause further problems. Also, if a running OpenAFS server is detected, the installation will abort, preventing possible database corruption.

One caveat though: there have been ebuilds floating around the internet that partially disable the protection Gentoo puts on configuration directories. These ebuilds have never been distributed by Gentoo. You might want to check the relevant variable in the output of the following command:
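The command referred to is presumably emerge --info, and the variable Portage's CONFIG_PROTECT list (an assumption based on standard Gentoo behaviour):

```
# Show which paths Portage protects from being overwritten
emerge --info | grep CONFIG_PROTECT
```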

Though nothing in this ebuild touches the protected files, upgrading will cause the removal of your older OpenAFS installation. Files that belong to the older installation will be removed as well.

It should be clear to the experienced user that if he has tweaked his system by manually adding soft links, the new installation may run fine while still using the old configuration files. In that case, no real transition has taken place, and cleaning up the old installation will leave you with a broken OpenAFS configuration.

Now that you know what doesn't happen, you may want to know what does:


 * is copied to
 * is copied to
 * is copied to
 * is copied to, while replacing occurrences of  with  ,  with  and  (without the / as previously) with
 * is copied to
 * The configuration file is copied to , as all known old options were destined for client usage only.

The upgrade itself
So you haven't got an OpenAFS server setup? Or maybe you do, but the previous sections have informed you about what is going to happen, and you're still ready for it?

Let's go ahead with it then!

If you do have a server running, you want to shut it down now.
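With the legacy Transarc-style layout, client and server were typically controlled by a single init script; assuming it was called afs (check what your old installation actually registered in /etc/init.d), stopping it might look like:

```
# Script name is an assumption; use the script your old
# installation registered in /etc/init.d
/etc/init.d/afs stop
```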

And then the upgrade itself.
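A sketch of the upgrade itself, assuming the package lives in the usual net-fs category:

```
emerge --update net-fs/openafs
```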

Restarting OpenAFS
If you had an OpenAFS server running, you will have been forced to shut it down before the upgrade. Now is the time to bring it back up.

You probably want to keep the downtime to a minimum, so restart your OpenAFS server right away.

You can check whether it's running properly with the following command:
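Assuming the new FHS-style init scripts are named openafs-server and openafs-client (the actual names depend on your ebuild), a quick check might look like:

```
/etc/init.d/openafs-server status

# Or ask the BOS Server directly (requires server credentials):
bos status localhost -localauth
```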

Before starting the OpenAFS client again, please take the time to check your cache settings, which are determined by the cache configuration file. To restart your OpenAFS client installation, type the following:
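Assuming the client init script is named openafs-client:

```
/etc/init.d/openafs-client start
```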

Cleaning up afterwards
Before cleaning up, please make really sure that everything runs smoothly and that you have restarted after the upgrade (otherwise, you may still be running your old installation).

The following directories may be safely removed from the system:



The following files are also unnecessary:



In case you've previously used ebuilds =openafs-1.2.13 or =openafs-1.3.85, you may also have some other unnecessary files:



Init Script changes
Now most people would have their systems configured to automatically start the OpenAFS client and server on startup. Those who don't can safely skip this section. If you had your system configured to start them automatically, you will need to re-enable this, because the names of the init scripts have changed.

If you had the old init script in your default runlevel, you should remove it and add the new client and server scripts instead.
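Assuming the old script was called afs and the new ones openafs-client and openafs-server (script names are assumptions; check /etc/init.d), the runlevel change might look like:

```
rc-update del afs default
rc-update add openafs-client default
rc-update add openafs-server default
```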

Troubleshooting: what if the automatic upgrade fails
Don't panic. You shouldn't have lost any data or configuration files. So let's analyze the situation. Please file a bug at bugs.gentoo.org in any case, preferably with as much information as possible.

If you're having problems starting the client, this should help you diagnose the problem:


 * Run  . The client normally sends error messages there.
 * Check . It should be of the form: /afs:{path to disk cache}:{number of blocks for disk cache}. Normally, your disk cache will be located at.
 * Check the output of  . You will want to see a line beginning with the word openafs.
 * will tell you whether afsd is running or not
 * should reveal whether /afs has been mounted.

If you're having problems starting the server, then these hints may be useful:


 * tells you whether the overseer is running or not. If you have more than one overseer running, then something has gone wrong. In that case, you should try a graceful OpenAFS server shutdown with , check the result with   , kill all remaining overseer processes and then finally check whether any server processes are still running (   to get a list of them). Afterwards, do   to reset the status of the server and   to try launching it again.
 * If you're using OpenAFS' own logging system (which is the default setting), check out . If you're using the syslog service, go check out its logs for any useful information.

Getting AFS Documentation
You can get the original IBM AFS documentation. It is very well written, and you really want to read it if it's up to you to administer an AFS server.

You also have the option of using the documentation delivered with OpenAFS. It is installed when you have the appropriate USE flag enabled while emerging OpenAFS. At the time of writing, this documentation was a work in progress. It may, however, document newer OpenAFS features that aren't described in the original IBM AFS documentation.

Building the Client
After successful compilation you're ready to go.

A simple global-browsing client installation
If you're not part of a specific OpenAFS cell you want to access, and you just want to try browsing globally available OpenAFS shares, then you can just install OpenAFS, leave the configuration untouched, and start the client.

Accessing a specific OpenAFS cell
If you need to access a specific cell, say your university's or company's own cell, then some adjustments to your configuration have to be made.

Firstly, you need to update CellServDB with the database servers for your cell. This information is normally provided by your administrator.

Secondly, in order to be able to log on to the OpenAFS cell, you need to specify its name in ThisCell.

Adjusting CellServDB and ThisCell

CellServDB tells your client which server(s) it needs to contact for a specific cell. ThisCell should be quite obvious. Normally you use a name which is unique for your organisation. Your (official) domain might be a good choice.

For a quick start, you can now start the client and authenticate yourself to start using your access to the cell. For automatic logons to your cell, consult the appropriate section below.

Adjusting the cache
You can house your cache on an existing filesystem (if it's ext2/3), or you may want to have a separate partition for it. The default cache location can be changed in the cache configuration file. A standard size for your cache is 200 MB, but more won't hurt.

Starting AFS on startup
The following command will create the appropriate links to start your afs client on system startup.
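Assuming the client init script is named openafs-client:

```
rc-update add openafs-client default
```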

Building the Server
If you haven't already done so, the following command will install all necessary binaries for setting up an AFS Server and Client.
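Assuming the package is net-fs/openafs:

```
emerge net-fs/openafs
```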

Starting AFS Server
You need to run the  command to initialize the Basic OverSeer (BOS) Server, which monitors and controls other AFS server processes on its server machine. Think of it as init for the system. Include the  flag to disable authorization checking, since you haven't added the admin user yet.
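A sketch of the first start; bosserver is the BOS Server binary, and -noauth is OpenAFS's standard flag for disabling authorization checking:

```
# Start the BOS Server without authorization checking.
# Only safe during initial cell setup, on a trusted network.
bosserver -noauth
```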

Verify that the BOS Server created its initial configuration files.

Defining Cell Name and Membership for Server Process
Now assign your cell's name.

Run the  command to set the cell name:
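A sketch, assuming the server's hostname is afs1.example.com and the cell is to be called example.com (both are placeholders):

```
bos setcellname afs1.example.com example.com -noauth
```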

Starting the Database Server Process
Next use the  command to create entries for the four database server processes in the  file. The four processes run on database server machines only.
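The four processes are the Authentication Server (kaserver), Backup Server (buserver), Protection Server (ptserver) and Volume Location Server (vlserver). Assuming the binaries live under /usr/libexec/openafs (an assumption; check where your ebuild installed them), the commands might look like:

```
bos create afs1.example.com kaserver simple /usr/libexec/openafs/kaserver -noauth
bos create afs1.example.com buserver simple /usr/libexec/openafs/buserver -noauth
bos create afs1.example.com ptserver simple /usr/libexec/openafs/ptserver -noauth
bos create afs1.example.com vlserver simple /usr/libexec/openafs/vlserver -noauth
```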

You can verify that all servers are running with the  command:
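For example:

```
bos status afs1.example.com -noauth
```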

Initializing Cell Security
Now we'll initialize the cell's security mechanisms. We'll begin by creating the following two initial entries in the Authentication Database: the main administrative account, called admin by convention, and an entry for the AFS server processes, called afs. No user logs in under the identity afs, but the Authentication Server's Ticket Granting Service (TGS) module uses the account to encrypt the server tickets that it grants to AFS clients. This sounds pretty much like Kerberos :)

Enter  interactive mode

Run the  command to add the admin user to the Authentication Database.

Issue the  command to define the AFS server encryption key in the server key file.

Issue the  command to create a Protection Database entry for the admin user.

Issue the  command to make the admin user a member of the system:administrators group, and the   command to verify the new membership
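Put together, the security setup described above might look like the following sketch (hostname, cell name and key version number are placeholders; you will be prompted for passwords):

```
# Interactive kas session
kas -cell example.com -noauth
ka> create afs              # entry for the AFS server processes
ka> create admin            # the administrative account
ka> examine afs             # note the key version number (kvno)
ka> setfields admin -flags admin
ka> quit

# Define the server encryption key (use the kvno noted above)
bos addkey afs1.example.com -kvno 0 -noauth

# Protection Database entry for admin
pts createuser -name admin -noauth

# Add admin to system:administrators and verify the membership
pts adduser admin system:administrators -noauth
pts membership admin -noauth
```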

Properly (re-)starting the AFS server
At this moment, proper authentication is possible, and the OpenAFS server can be started in a normal fashion. Note that authentication also requires a running OpenAFS client (setting it up is described in the previous chapter).

Starting the File Server, Volume Server and Salvager
Start the  process, which consists of the File Server, Volume Server and Salvager (fileserver, volserver and salvager processes).
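Assuming the same hostname and binary locations as before, creating the fs instance might look like:

```
bos create afs1.example.com fs fs \
    /usr/libexec/openafs/fileserver \
    /usr/libexec/openafs/volserver \
    /usr/libexec/openafs/salvager -noauth
```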

Verify that all processes are running:

Your next action depends on whether you have ever run AFS file server machines in the cell.

If you are installing the first AFS Server ever in the cell, create the first AFS volume, root.afs
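Assuming your first server partition is mounted at /vicepa:

```
vos create afs1.example.com /vicepa root.afs -noauth
```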

If there are existing AFS file server machines and volumes in the cell issue the  and   commands to synchronize the VLDB (Volume Location Database) with the actual state of volumes on the local machine. This will copy all necessary data to your new server.
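For example:

```
vos syncvldb afs1.example.com -verbose
vos syncserv afs1.example.com -verbose
```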

If the command fails with the message "partition /vicepa does not exist on the server", ensure that the partition is mounted before starting the OpenAFS servers, or mount the directory and restart the processes.

Configuring the Top Level of the AFS filespace
First you need to set some ACLs, so that any user can look up /afs.

Then you need to create the cell's root volume and mount it, both read-only and read/write.
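A sketch of the whole top-level setup, assuming the cell is called example.com and you hold admin tokens (names are placeholders):

```
# Let any user list the top-level directory
fs setacl /afs system:anyuser rl

# Create the cell's root volume and mount it twice
vos create afs1.example.com /vicepa root.cell
fs mkmount /afs/example.com root.cell          # read-only path
fs mkmount /afs/.example.com root.cell -rw     # read/write path
fs setacl /afs/example.com system:anyuser rl
```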

Finally you're done!!! You should now have a working AFS file server on your local network. Time to get a big cup of coffee and print out the AFS documentation!!!

Disclaimer
OpenAFS is an extensive technology. Please read the AFS documentation for more information. We only list a few administrative tasks in this chapter.

Configuring PAM to Acquire an AFS Token on Login
To use AFS, you need to authenticate against the KA Server if you use an AFS implementation of Kerberos 4, or against a Kerberos 5 KDC if you use MIT, Heimdal or Shishi Kerberos 5. However, in order to log in to a machine you will also need a user account; this can be local, or in NIS, LDAP (OpenLDAP) or a Hesiod database. PAM allows Gentoo to tie authentication against AFS to logging in to the user account.

You will need to update /etc/pam.d/system-auth, which is used by the other configurations. "use_first_pass" indicates that it will be checked first against the user login, and "ignore_root" stops the local superuser being checked, so as to allow login if AFS or the network fails.

/etc/pam.d/system-auth
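A sketch of the relevant line, assuming the pam_afs module shipped with OpenAFS 1.x (module name and stack position are assumptions; keep your existing pam_unix lines in place):

```
auth    sufficient    pam_afs.so use_first_pass ignore_root
```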

In order for su to keep the real user's token and to prevent local users from gaining AFS access, change /etc/pam.d/su as follows:

/etc/pam.d/su

Acknowledgements
We would like to thank the following authors and editors for their contributions to this guide:


 * Stefaan De Roeck
 * Holger Brueckner
 * Benny Chuang
 * Tiemo Kieft
 * Steven McCoy
 * Shyam Mani