OpenAFS

This guide shows you how to install an OpenAFS server and client on Gentoo Linux.

Overview

About this Document

This document provides you with all necessary steps to install an OpenAFS server on Gentoo Linux. Parts of this document are taken from the AFS FAQ and IBM's Quick Beginnings guide on AFS. Well, never reinvent the wheel. :)

What is AFS?

AFS is a distributed filesystem that enables co-operating hosts (clients and servers) to efficiently share filesystem resources across both local area and wide area networks. Clients hold a cache of frequently used objects (files) to get quicker access to them.

AFS is based on a distributed file system originally developed at the Information Technology Center at Carnegie-Mellon University that was called the "Andrew File System". "Andrew" was the name of the research project at CMU - honouring the founders of the University. Once Transarc was formed and AFS became a product, the "Andrew" was dropped to indicate that AFS had gone beyond the Andrew research project and had become a supported, product quality filesystem. However, there were a number of existing cells that rooted their filesystem as /afs. At the time, changing the root of the filesystem was a non-trivial undertaking. So, to save the early AFS sites from having to rename their filesystem, AFS remained as the name and filesystem root.

What is an AFS cell?

An AFS cell is a collection of servers grouped together administratively and presenting a single, cohesive filesystem. Typically, an AFS cell is a set of hosts that use the same Internet domain name (for example, gentoo.org). Users log into AFS client workstations, which request information and files from the cell's servers on behalf of the users. Users do not need to know on which server a file they are accessing is located. They won't even notice if a server is moved to another room, since every volume can be replicated and moved to another server without any user noticing. The files are always accessible. Well, it's like NFS on steroids :)

What are the benefits of using AFS?

The main strengths of AFS are its:

  • caching facility (on the client side, typically 100MB to 1GB)
  • security features (Kerberos 4 based, access control lists)
  • simplicity of addressing (you just have one filesystem)
  • scalability (add further servers to your cell as needed)
  • communications protocol

Where can I get more information?

Read the AFS FAQ.

The OpenAFS main page is at www.openafs.org.

AFS was originally developed by Transarc, which is now owned by IBM. Since April 2005, it has been withdrawn from IBM's product catalogue.

How Can I Debug Problems?

OpenAFS has great logging facilities. However, by default it logs straight into its own logs instead of through the system logging facilities you have on your system. To have the servers log through your system logger, use the -syslog option for all bos commands.
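
For example, a minimal sketch (assuming your OpenAFS build supports the -syslog flag on the server binaries, which recent releases do) is to start the overseer so that it logs through syslog rather than its own log files:

root # bosserver -syslog &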

Upgrading from previous versions

Introduction

This section aims to help you through the process of upgrading an existing OpenAFS installation to OpenAFS version 1.4.0 or higher (or to 1.2.x starting from 1.2.13; the latter will not be handled specifically, as most people will want 1.4 for, among other things, linux-2.6 support, large file support and bug fixes).

If you're dealing with a clean install of a 1.4 version of OpenAFS, then you can safely skip this chapter. However, if you're upgrading from a previous version, we strongly urge you to follow the guidelines in the next sections. The transition script in the ebuild is designed to assist you in quickly upgrading and restarting. Please note that, for safety reasons, it will not delete configuration files and startup scripts in their old places, nor automatically change your boot configuration to use the new scripts. If you need further convincing: using an old OpenAFS kernel module together with the updated system binaries may very well cause your kernel to freak out. So, let's read on for a clean and easy transition, shall we?


Note
This chapter has been written bearing many different system configurations in mind. Still, it is possible that, due to peculiar tweaks a user has made, his or her specific situation may not be described here. A user with enough self-confidence to tweak his system should be experienced enough to apply the given remarks where appropriate. Vice versa, a user who has done little to his system but install the previous ebuild can skip most of the warnings further on.


Differences to previous versions

Traditionally, OpenAFS has used the same path conventions that IBM Transarc Labs used before the code was forked. Understandably, old AFS setups continue using these legacy path conventions. More recent setups conform to the FHS by using standard locations (as seen in many Linux distributions). The following table is a compilation of the configure script and the README accompanying the OpenAFS distribution tarballs:

Directory      Purpose                        Transarc Mode             Default Mode                     Translation to Gentoo
viceetcdir     Client configuration           /usr/vice/etc             $(sysconfdir)/openafs            /etc/openafs
unnamed        Client binaries                unspecified               $(bindir)                        /usr/bin
afsconfdir     Server configuration           /usr/afs/etc              $(sysconfdir)/openafs/server     /etc/openafs/server
afssrvdir      Internal server binaries       /usr/afs/bin (servers)    $(libexecdir)/openafs            /usr/libexec/openafs
afslocaldir    Server state                   /usr/afs/local            $(localstatedir)/openafs         /var/lib/openafs
afsdbdir       Auth/serverlist/... databases  /usr/afs/db               $(localstatedir)/openafs/db      /var/lib/openafs/db
afslogdir      Log files                      /usr/afs/logs             $(localstatedir)/openafs/logs    /var/lib/openafs/logs
afsbosconfig   Overseer config                $(afslocaldir)/BosConfig  $(afsconfdir)/BosConfig          /etc/openafs/BosConfig

There are some other oddities, like binaries being put in /usr/vice/etc in Transarc mode, but this list is not intended to be comprehensive. It is rather meant to serve as a reference to those troubleshooting config file transition.

Also as a result of the path changes, the default disk cache location has been changed from /usr/vice/cache to /var/cache/openafs .

Furthermore, the init-script has been split into a client and a server part. You used to have /etc/init.d/afs , but now you'll end up with both /etc/init.d/openafs-client and /etc/init.d/openafs-server . Consequently, the configuration file /etc/conf.d/afs has been split into /etc/conf.d/openafs-client and /etc/conf.d/openafs-server . Also, options in /etc/conf.d/afs to turn either client or server on or off have been obsoleted.

Another change to the init script is that it doesn't check your disk cache setup anymore. The old code required that a separate ext2 partition be mounted at /usr/vice/cache . There were some problems with that:

  • Though it's a very logical setup, your cache doesn't need to be on a separate partition. As long as you make sure that the amount of space specified in /etc/openafs/cacheinfo really is available for disk cache usage, you're safe. So there is no real problem with having the cache on your root partition.
  • Some people use soft-links to point to the real disk cache location. The init script didn't like this, because then this cache location didn't turn up in /proc/mounts .
  • Many prefer ext3 over ext2 nowadays. Both filesystems are valid for usage as a disk cache. Any other filesystem is unsupported (like: don't try reiserfs, you'll get a huge warning, expect failure afterwards).

Transition to the new paths

First of all, emerging a newer OpenAFS version should not overwrite any old configuration files. The script is designed to not change any files already present on the system. So even if you have a totally messed up configuration with a mix of old and new locations, the script should not cause further problems. Also, if a running OpenAFS server is detected, the installation will abort, preventing possible database corruption.

One caveat though -- there have been ebuilds floating around the internet that partially disable the protection that Gentoo puts on /etc . These ebuilds have never been distributed by Gentoo. You might want to check the CONFIG_PROTECT_MASK variable in the output of the following command:

root # emerge info | grep "CONFIG_PROTECT_MASK"
CONFIG_PROTECT_MASK="/etc/gconf /etc/terminfo /etc/texmf/web2c /etc/env.d"

Though nothing in this ebuild would touch the files in /etc/afs , upgrading will cause the removal of your older OpenAFS installation, and files in directories listed in CONFIG_PROTECT_MASK that belong to the older installation will be removed as well.

It should be clear to the experienced user that if he has tweaked his system by manually adding soft links (e.g. from /usr/afs/etc to /etc/openafs ), the new installation may run fine while still using the old configuration files. In that case, no real transition has taken place, and cleaning up the old installation will result in a broken OpenAFS configuration.

Now that you know what doesn't happen, you may want to know what does:

  • /usr/afs/etc is copied to /etc/openafs/server
  • /usr/vice/etc is copied to /etc/openafs
  • /usr/afs/local is copied to /var/lib/openafs
  • /usr/afs/local/BosConfig is copied to /etc/openafs/BosConfig, while replacing occurrences of /usr/afs/bin/ with /usr/libexec/openafs, /usr/afs/etc with /etc/openafs/server, and /usr/afs/bin (this time without the trailing slash) with /usr/bin
  • /usr/afs/db is copied to /var/lib/openafs/db
  • The configuration file /etc/conf.d/afs is copied to /etc/conf.d/openafs-client , as all known old options were destined for client usage only.

The upgrade itself

So you haven't got an OpenAFS server setup yet? Or maybe you do, but the previous sections have informed you about what is going to happen and you're still ready for it?

Let's go ahead with it then!

If you do have a server running, you want to shut it down now.

root # /etc/init.d/afs stop

And then the upgrade itself.

root # emerge --ask openafs

Restarting OpenAFS

If you had an OpenAFS client running, you have not been forced to shut it down for the upgrade. Now is the time to do that.

root # /etc/init.d/afs stop

You probably want to keep the downtime to a minimum, so restart your OpenAFS server right away.

root # /etc/init.d/openafs-server start

You can check whether it's running properly with the following command:

root # /usr/bin/bos status localhost -localauth

Before starting the OpenAFS client again, please take time to check your cache settings. They are determined by /etc/openafs/cacheinfo . To restart your OpenAFS client installation, please type the following:

root # /etc/init.d/openafs-client start

Cleaning up afterwards

Before cleaning up, please make really sure that everything runs smoothly and that you have restarted after the upgrade (otherwise, you may still be running your old installation).

Important
Please make sure you're not using /usr/vice/cache for disk cache if you are deleting /usr/vice !!

The following directories may be safely removed from the system:

  • /etc/afs
  • /usr/vice
  • /usr/afs
  • /usr/afsws

The following files are also unnecessary:

  • /etc/init.d/afs
  • /etc/conf.d/afs
root # tar czf /root/oldafs-backup.tgz /etc/afs /usr/vice /usr/afs /usr/afsws
root # rm -R /etc/afs /usr/vice /usr/afs /usr/afsws
root # rm /etc/init.d/afs /etc/conf.d/afs

In case you've previously used ebuilds =openafs-1.2.13 or =openafs-1.3.85, you may also have some other unnecessary files:

  • /etc/init.d/afs-client
  • /etc/init.d/afs-server
  • /etc/conf.d/afs-client
  • /etc/conf.d/afs-server

Init Script changes

Now most people would have their systems configured to automatically start the OpenAFS client and server on startup. Those who don't can safely skip this section. If you had your system configured to start them automatically, you will need to re-enable this, because the names of the init scripts have changed.

root # rc-update del afs default
root # rc-update add openafs-client default
root # rc-update add openafs-server default

If you had =openafs-1.2.13 or =openafs-1.3.85, you should remove afs-client and afs-server from the default runlevel instead of afs.
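
In that case, the corresponding commands are:

root # rc-update del afs-client default
root # rc-update del afs-server default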

Troubleshooting: what if the automatic upgrade fails

Don't panic. You shouldn't have lost any data or configuration files. So let's analyze the situation. Please file a bug at bugs.gentoo.org in any case, preferably with as much information as possible.

If you're having problems starting the client, this should help you diagnose the problem:

  • Run dmesg . The client normally sends error messages there.
  • Check /etc/openafs/cacheinfo . It should be of the form: /afs:{path to disk cache}:{number of blocks for disk cache}. Normally, your disk cache will be located at /var/cache/openafs .
  • Check the output of lsmod . You will want to see a line beginning with the word openafs.
  • pgrep afsd will tell you whether afsd is running or not
  • cat /proc/mounts should reveal whether /afs has been mounted.

If you're having problems starting the server, then these hints may be useful:

  • pgrep bosserver tells you whether the overseer is running or not. If you have more than one overseer running, then something has gone wrong. In that case, you should try a graceful OpenAFS server shutdown with bos shutdown localhost -localauth -wait , check the result with bos status localhost -localauth , kill all remaining overseer processes and then finally check whether any server processes are still running ( ls /usr/libexec/openafs gives you a list of them). Afterwards, run /etc/init.d/openafs-server zap to reset the status of the server and /etc/init.d/openafs-server start to try launching it again; the command sequence after this list illustrates these steps.
  • If you're using OpenAFS' own logging system (which is the default setting), check out /var/lib/openafs/logs/* . If you're using the syslog service, go check out its logs for any useful information.
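
Putting the recovery steps from the first item above together, a graceful server reset might look like this (a sketch only; adapt it to what you actually find running):

root # bos shutdown localhost -localauth -wait
root # bos status localhost -localauth
root # pgrep bosserver
root # killall bosserver
root # /etc/init.d/openafs-server zap
root # /etc/init.d/openafs-server start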

Documentation

Getting AFS Documentation

You can get the original IBM AFS Documentation. It is very well written and you really want to read it if it is up to you to administer an AFS server.

root # emerge --ask app-doc/afsdoc

You also have the option of using the documentation delivered with OpenAFS. It is installed when you have the doc USE flag enabled while emerging OpenAFS, and it can be found in /usr/share/doc/openafs-*/ . At the time of writing, this documentation was a work in progress. It may however document newer features in OpenAFS that aren't described in the original IBM AFS Documentation.
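
For example, assuming /etc/portage/package.use is a plain file on your system (it may also be a directory holding several files), you could enable the flag for OpenAFS only and re-emerge it:

root # echo "net-fs/openafs doc" >> /etc/portage/package.use
root # emerge --ask net-fs/openafs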

Client Installation

Building the Client

root # emerge --ask net-fs/openafs

After successful compilation you're ready to go.

A simple global-browsing client installation

If you're not part of a specific OpenAFS-cell you want to access, and you just want to try browsing globally available OpenAFS-shares, then you can just install OpenAFS, not touch the configuration at all, and start /etc/init.d/openafs-client .
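
In other words, a minimal sequence for anonymous global browsing might look like this (the final command simply checks that /afs is populated):

root # emerge --ask net-fs/openafs
root # /etc/init.d/openafs-client start
root # ls /afs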

Accessing a specific OpenAFS cell

If you need to access a specific cell, say your university's or company's own cell, then some adjustments to your configuration have to be made.

Firstly, you need to update /etc/openafs/CellServDB with the database servers for your cell. This information is normally provided by your administrator.

Secondly, in order to be able to log onto the OpenAFS cell, you need to specify its name in /etc/openafs/ThisCell .

Code: Adjusting CellServDB and ThisCell

CellServDB:
>netlabs        #Cell name
10.0.0.1        #storage
  
ThisCell:
netlabs


Warning
Only use spaces inside the CellServDB file. The client will most likely fail if you use TABs.

CellServDB tells your client which server(s) it needs to contact for a specific cell. ThisCell should be quite obvious. Normally you use a name which is unique for your organisation. Your (official) domain might be a good choice.

For a quick start, you can now start /etc/init.d/openafs-client and use klog to authenticate yourself and start using your access to the cell. For automatic logons to your cell, consult the appropriate section below.
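
For example (substitute your own AFS username; klog will prompt for your AFS password, and you can of course run it as a regular user as well):

root # /etc/init.d/openafs-client start
root # klog YOUR_AFS_USERNAME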

Adjusting the cache

Note
Unfortunately the AFS client needs an ext2/3 filesystem for its cache to run correctly. There are some issues when using other filesystems (e.g. using reiserfs is not a good idea).

You can house your cache on an existing filesystem (if it's ext2/3), or you may want to have a separate partition for that. The default location of the cache is /var/cache/openafs , but you can change it by editing /etc/openafs/cacheinfo . A standard size for your cache is 200MB, but more won't hurt.
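
As a sketch, a cacheinfo file for a 200MB cache in the default location would look like this (the three colon-separated fields are the AFS mount point, the cache directory, and the cache size in kilobyte blocks):

Code: /etc/openafs/cacheinfo

/afs:/var/cache/openafs:200000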

Starting AFS on startup

The following command will create the appropriate links to start your afs client on system startup.

Warning
You should always have a running AFS server in your domain when trying to start the AFS client. If your AFS server is down, your system won't finish booting until the client reaches its timeout (and that timeout is quite long).
root # rc-update add openafs-client default

Server Installation

Building the Server

Note
All commands should be written in one line!! In this document they are sometimes wrapped to two lines to make them easier to read.

If you haven't already done so, the following command will install all necessary binaries for setting up an AFS Server and Client.

root # emerge --ask net-fs/openafs

Starting AFS Server

You need to run the bosserver command to initialize the Basic OverSeer (BOS) Server, which monitors and controls other AFS server processes on its server machine. Think of it as init for the system. Include the -noauth flag to disable authorization checking, since you haven't added the admin user yet.

Warning
Disabling authorization checking gravely compromises cell security. You must complete all subsequent steps in one uninterrupted pass and must not leave the machine unattended until you restart the BOS Server with authorization checking enabled. Well, this is what the AFS documentation says. :)
root # bosserver -noauth &

Verify that the BOS Server created /etc/openafs/server/CellServDB and /etc/openafs/server/ThisCell

root # ls -al /etc/openafs/server/
-rw-r--r--    1 root     root           41 Jun  4 22:21 CellServDB
-rw-r--r--    1 root     root            7 Jun  4 22:21 ThisCell

Defining Cell Name and Membership for Server Process

Now assign your cell's name.

Important
There are some restrictions on the name format. Two of the most important restrictions are that the name cannot include uppercase letters or more than 64 characters. Remember that your cell name will show up under /afs , so you might want to choose a short one.
Note
In the following and every instruction in this guide, for the SERVER_NAME argument substitute the fully-qualified hostname (such as afs.gentoo.org ) of the machine you are installing. For the CELL_NAME argument substitute your cell's complete name (such as gentoo ).

Run the bos setcellname command to set the cell name:

root # bos setcellname SERVER_NAME CELL_NAME -noauth

Starting the Database Server Process

Next use the bos create command to create entries for the four database server processes in the /etc/openafs/BosConfig file. The four processes run on database server machines only.

  • kaserver - The Authentication Server maintains the Authentication Database. This can be replaced by a Kerberos 5 daemon. If anybody wants to try that, feel free to update this document :)
  • buserver - The Backup Server maintains the Backup Database.
  • ptserver - The Protection Server maintains the Protection Database.
  • vlserver - The Volume Location Server maintains the Volume Location Database (VLDB). Very important :)
root # bos create SERVER_NAME kaserver simple /usr/libexec/openafs/kaserver -cell CELL_NAME -noauth
root # bos create SERVER_NAME buserver simple /usr/libexec/openafs/buserver -cell CELL_NAME -noauth
root # bos create SERVER_NAME ptserver simple /usr/libexec/openafs/ptserver -cell CELL_NAME -noauth
root # bos create SERVER_NAME vlserver simple /usr/libexec/openafs/vlserver -cell CELL_NAME -noauth

You can verify that all servers are running with the bos status command:

root # bos status SERVER_NAME -noauth
Instance kaserver, currently running normally.
Instance buserver, currently running normally.
Instance ptserver, currently running normally.
Instance vlserver, currently running normally.

Initializing Cell Security

Now we'll initialize the cell's security mechanisms. We'll begin by creating the following two initial entries in the Authentication Database: the main administrative account, called admin by convention, and an entry for the AFS server processes, called afs . No user logs in under the identity afs , but the Authentication Server's Ticket Granting Service (TGS) module uses the account to encrypt the server tickets that it grants to AFS clients. This sounds pretty much like Kerberos :)

Enter kas interactive mode

root # kas -cell CELL_NAME -noauth
ka> create afs
initial_password:
Verifying, please re-enter initial_password:
ka> create admin
initial_password:
Verifying, please re-enter initial_password:
ka> examine afs
User data for afs
key (0) cksum is 2651715259, last cpw: Mon Jun  4 20:49:30 2001
password will never expire.
An unlimited number of unsuccessful authentications is permitted.
entry never expires.  Max ticket lifetime 100.00 hours.
last mod on Mon Jun  4 20:49:30 2001 by <none>
permit password reuse
ka> setfields admin -flags admin
ka> examine admin
User data for admin (ADMIN)
key (0) cksum is 2651715259, last cpw: Mon Jun  4 20:49:59 2001
password will never expire.
An unlimited number of unsuccessful authentications is permitted.
entry never expires.  Max ticket lifetime 25.00 hours.
last mod on Mon Jun  4 20:51:10 2001 by <none>
permit password reuse

Run the bos adduser command to add the admin user to /etc/openafs/server/UserList .

root # bos adduser SERVER_NAME admin -cell CELL_NAME -noauth

Issue the bos addkey command to define the AFS Server encryption key in /etc/openafs/server/KeyFile

Note
If asked for the input key, give the password you entered when creating the AFS entry with kas
root # bos addkey SERVER_NAME -kvno 0 -cell CELL_NAME -noauth
input key:
Retype input key:

Issue the pts createuser command to create a Protection Database entry for the admin user.

Note
By default, the Protection Server assigns AFS UID 1 to the admin user, because it is the first user entry you are creating. If the local password file ( /etc/passwd or equivalent) already has an entry for admin that assigns a different UID, use the -id argument to create matching UIDs.
root # pts createuser -name admin -cell CELL_NAME [-id AFS_UID] -noauth

Issue the pts adduser command to make the admin user a member of the system:administrators group, and the pts membership command to verify the new membership

root # pts adduser admin system:administrators -cell CELL_NAME -noauth
root # pts membership admin -cell CELL_NAME -noauth
Groups admin (id: 1) is a member of:
system:administrators

Properly (re-)starting the AFS server

At this moment, proper authentication is possible, and the OpenAFS server can be started in a normal fashion. Note that authentication also requires a running OpenAFS client (setting it up is described in the previous chapter).

root # bos shutdown SERVER_NAME -wait -noauth
root # killall bosserver
root # /etc/init.d/openafs-server start
root # /etc/init.d/openafs-client start
root # rc-update add openafs-server default
root # klog admin

Starting the File Server, Volume Server and Salvager

Start the fs process, which consists of the File Server, Volume Server and Salvager (fileserver, volserver and salvager processes).

root # bos create SERVER_NAME fs fs /usr/libexec/openafs/fileserver /usr/libexec/openafs/volserver /usr/libexec/openafs/salvager -cell CELL_NAME -noauth

Verify that all processes are running:

root # bos status SERVER_NAME -long -noauth
Instance kaserver, (type is simple) currently running normally.
Process last started at Mon Jun  4 21:07:17 2001 (2 proc starts)
Last exit at Mon Jun  4 21:07:17 2001
Command 1 is '/usr/libexec/openafs/kaserver'
  
Instance buserver, (type is simple) currently running normally.
Process last started at Mon Jun  4 21:07:17 2001 (2 proc starts)
Last exit at Mon Jun  4 21:07:17 2001
Command 1 is '/usr/libexec/openafs/buserver'
  
Instance ptserver, (type is simple) currently running normally.
Process last started at Mon Jun  4 21:07:17 2001 (2 proc starts)
Last exit at Mon Jun  4 21:07:17 2001
Command 1 is '/usr/libexec/openafs/ptserver'
  
Instance vlserver, (type is simple) currently running normally.
Process last started at Mon Jun  4 21:07:17 2001 (2 proc starts)
Last exit at Mon Jun  4 21:07:17 2001
Command 1 is '/usr/libexec/openafs/vlserver'
  
Instance fs, (type is fs) currently running normally.
Auxiliary status is: file server running.
Process last started at Mon Jun  4 21:09:30 2001 (2 proc starts)
Command 1 is '/usr/libexec/openafs/fileserver'
Command 2 is '/usr/libexec/openafs/volserver'
Command 3 is '/usr/libexec/openafs/salvager'

Your next action depends on whether you have ever run AFS file server machines in the cell.

If you are installing the first AFS Server ever in the cell, create the first AFS volume, root.afs

Note
For the partition name argument, substitute the name of one of the machine's AFS Server partitions. Any filesystem mounted under a directory called /vicepx , where x is in the range of a-z, will be considered and used as an AFS Server partition. Any unix filesystem will do (as opposed to the client's cache, which can only be ext2/3). Tip: the server checks for each /vicepx mount point whether a filesystem is mounted there. If not, the server will not attempt to use it. This behaviour can be overridden by putting a file named AlwaysAttach in this directory.
root # vos create SERVER_NAME PARTITION_NAME root.afs -cell CELL_NAME -noauth

If there are existing AFS file server machines and volumes in the cell, issue the vos syncvldb and vos syncserv commands to synchronize the VLDB (Volume Location Database) with the actual state of volumes on the local machine. This will copy all necessary data to your new server.

If the command fails with the message "partition /vicepa does not exist on the server", ensure that the partition is mounted before running OpenAFS servers, or mount the directory and restart the processes using bos restart SERVER_NAME -all -cell CELL_NAME -noauth .
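
As an illustration only (the device name /dev/sdb1 is purely hypothetical), preparing and mounting the first server partition might look like this:

root # mkdir /vicepa
root # mount /dev/sdb1 /vicepa

If you prefer to use a plain directory on an existing filesystem instead of a dedicated partition, skip the mount and create the AlwaysAttach file described in the note above ( touch /vicepa/AlwaysAttach ).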

root # vos syncvldb SERVER_NAME -cell CELL_NAME -verbose -noauth
root # vos syncserv SERVER_NAME -cell CELL_NAME -verbose -noauth

Starting the Server Portion of the Update Server

root # bos create SERVER_NAME upserver simple "/usr/libexec/openafs/upserver -crypt /etc/openafs/server -clear /usr/libexec/openafs" -cell CELL_NAME -noauth

Configuring the Top Level of the AFS filespace

First you need to set some ACLs, so that any user can look up /afs .

Note
The default OpenAFS client configuration has dynroot enabled. This option turns /afs into a virtual directory composed of the contents of your /etc/openafs/CellServDB file. As such, the following command will not work, because it requires a real AFS directory. You can temporarily switch dynroot off by setting ENABLE_DYNROOT to no in /etc/conf.d/openafs-client . Don't forget to issue a client restart after changing parameters.
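
If you need to disable dynroot temporarily, the entry in /etc/conf.d/openafs-client might look like the following sketch (the exact variable name and quoting can differ between ebuild versions, so check the comments in the file itself):

Code: /etc/conf.d/openafs-client

ENABLE_DYNROOT="no"
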
root # fs setacl /afs system:anyuser rl

Then you need to create the root volume, mount it readonly on /afs/<cell name> and read/write on /afs/.<cell name> .

root # vos create SERVER_NAME PARTITION_NAME root.cell
root # fs mkmount /afs/CELL_NAME root.cell
root # fs setacl /afs/CELL_NAME system:anyuser rl
root # fs mkmount /afs/.CELL_NAME root.cell -rw
root # vos create SERVER_NAME PARTITION_NAME VOLUME_NAME
root # fs mkmount /afs/CELL_NAME/MOUNT_POINT VOLUME_NAME
root # fs mkmount /afs/CELL_NAME/.MOUNT_POINT VOLUME_NAME -rw
root # fs setquota /afs/CELL_NAME/.MOUNT_POINT -max QUOTUM

Finally you're done!!! You should now have a working AFS file server on your local network. Time to get a big cup of coffee and print out the AFS documentation!!!

Note
For the AFS server to function properly, it is very important that all system clocks are synchronized. This is best accomplished by installing an NTP server on one machine (e.g. the AFS server) and synchronizing all client clocks with an NTP client. This can also be done by the AFS client.
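
A minimal way to set this up on Gentoo might look like the following (the package and service names are the usual ones, but verify them on your system):

root # emerge --ask net-misc/ntp
root # rc-update add ntpd default
root # /etc/init.d/ntpd start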

Basic Administration

Disclaimer

OpenAFS is an extensive technology. Please read the AFS documentation for more information. We only list a few administrative tasks in this chapter.

Configuring PAM to Acquire an AFS Token on Login

To use AFS you need to authenticate against the KA server if using AFS's Kerberos 4 implementation, or against a Kerberos 5 KDC if using MIT, Heimdal, or Shishi Kerberos 5. However, in order to log into a machine you will also need a user account; this can be local in /etc/passwd , NIS, LDAP (OpenLDAP), or a Hesiod database. PAM allows Gentoo to tie AFS authentication to the user account login.

You will need to update /etc/pam.d/system-auth , which is used by the other configurations. "use_first_pass" indicates that the password will first be checked against the user login, and "ignore_root" stops the local superuser from being checked, so as to allow root to log in even if AFS or the network fails.

Code: /etc/pam.d/system-auth

auth       required     pam_env.so
auth       sufficient   pam_unix.so likeauth nullok
auth       sufficient   pam_afs.so.1 use_first_pass ignore_root
auth       required     pam_deny.so
  
account    required     pam_unix.so
  
password   required     pam_cracklib.so retry=3
password   sufficient   pam_unix.so nullok md5 shadow use_authtok
password   required     pam_deny.so
  
session    required     pam_limits.so
session    required     pam_unix.so

In order for sudo to keep the real user's token and to prevent local users gaining AFS access change /etc/pam.d/su as follows:

Code: /etc/pam.d/su

# Here, users with uid > 100 are considered to belong to AFS and users with
# uid <= 100 are ignored by pam_afs.
auth       sufficient   pam_afs.so.1 ignore_uid 100
  
auth       sufficient   pam_rootok.so
  
# If you want to restrict users being allowed to su even more,
# create /etc/security/suauth.allow (or to that matter) that is only
# writable by root, and add users that are allowed to su to that
# file, one per line.
#auth       required     pam_listfile.so item=ruser \
#       sense=allow onerr=fail file=/etc/security/suauth.allow
  
# Uncomment this to allow users in the wheel group to su without
# entering a passwd.
#auth       sufficient   pam_wheel.so use_uid trust
  
# Alternatively to above, you can implement a list of users that do
# not need to supply a passwd with a list.
#auth       sufficient   pam_listfile.so item=ruser \
#       sense=allow onerr=fail file=/etc/security/suauth.nopass
  
# Comment this to allow any user, even those not in the 'wheel'
# group to su
auth       required     pam_wheel.so use_uid
  
auth       required     pam_stack.so service=system-auth
  
account    required     pam_stack.so service=system-auth
  
password   required     pam_stack.so service=system-auth
  
session    required     pam_stack.so service=system-auth
session    optional     pam_xauth.so
  
# Here we prevent the real user id's token from being dropped
session    optional     pam_afs.so.1 no_unlog

Acknowledgements

We would like to thank the following authors and editors for their contributions to this guide:


  • Stefaan De Roeck
  • Holger Brueckner
  • Benny Chuang
  • Tiemo Kieft
  • Steven McCoy
  • Shyam Mani