OpenAFS


This article shows how to install an OpenAFS client and server on Gentoo Linux.

Overview

About this document

This document guides you through all the necessary steps to install an OpenAFS server on Gentoo Linux. Some of the material in this document is taken from the AFS FAQ and IBM's Quick Beginnings guide on AFS. There is no point in reinventing the wheel. :)

What is AFS?

AFS is a distributed filesystem that runs on cooperating hosts (clients and servers) to provide efficient access to filesystem resources across local and wide area networks. Clients cache frequently used objects (files) to speed up access to them.

AFS is based on a distributed file system originally developed at the Information Technology Center at Carnegie Mellon University (CMU), called the Andrew File System. Andrew, the name of the research project at CMU, honoured the founders of the university. When the newly formed Transarc company started promoting AFS, the Andrew name was dropped to indicate that AFS had gone beyond the research stage, was supported, and was ready for commercial use. However, a number of existing cells already had the /afs directory hard-coded, and at the time changing the root directory of the filesystem was a non-trivial task. So, to spare the early AFS sites from having to rename everything, the name and the root directory were kept.

What is an AFS cell?

An AFS cell is a group of servers under common administration that presents a single, cohesive filesystem. An AFS cell typically corresponds to a set of hosts under one domain name (for example gentoo.org). Users log into AFS client workstations, which request information and files from the cell's servers on their behalf. Users do not know which server holds a requested file; they would not even notice if a server were moved to a different room, since any data volume can be replicated and moved to another server without them noticing. The files are always available. It's like NFS on steroids :)

What are the benefits of using AFS?

The main strengths of AFS are its caching facilities (on the client side, usually 100 MB to 1 GB), its security features (based on Kerberos 5 and access control lists), its simple addressing (you have only one filesystem), its scalability (you can add servers to a cell), and its communication protocol.

Where can I get more information?

Read the AFS FAQ.

The OpenAFS project homepage is www.openafs.org.

AFS was originally created by Transarc, which was later acquired by IBM. In April 2005 IBM withdrew AFS from its product catalogue.

How can I debug problems?

OpenAFS has great logging facilities. By default, however, it logs straight into its own log files, bypassing the system logging facilities. To log events via syslog instead, set the -syslog option for all bos commands.
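
For example (a sketch; ptserver is one of the database server processes defined later in this guide), -syslog can be appended to a process's command line when it is created with bos create:

root #bos create localhost ptserver simple "/usr/libexec/openafs/ptserver -syslog" -cell CELL_NAME -localauth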

Upgrading from previous versions

Introduction

This section helps you upgrade an existing OpenAFS installation to version 1.4.0 or newer (or to a 1.2.x version starting from 1.2.13; that case is not covered separately, since most users will want version 1.4 for its Linux 2.6 kernel support, large file support and/or bug fixes).

If you're dealing with a clean install of a 1.4 version of OpenAFS, you can safely skip this chapter. However, if you're upgrading from a previous version, we strongly urge you to follow the guidelines in the next sections. The transition script in the ebuild is designed to help you upgrade and restart quickly. Please note that, for safety reasons, it will not delete configuration files and startup scripts in the old locations, nor automatically change your boot configuration to use the new scripts, etc. If you need further convincing: using an old OpenAFS kernel module together with the updated system binaries may very well cause your kernel to freak out. So, let's read on for a clean and easy transition, shall we?


Note
This chapter has been written with many different system configurations in mind. Still, it is possible that, due to peculiar tweaks a user has made, their specific situation is not described here. A user with enough self-confidence to tweak their system should be experienced enough to apply the remarks given here where appropriate. Vice versa, a user who has done little to their system but install the previous ebuild can skip most of the warnings further on.


Differences from previous versions

Traditionally, OpenAFS has used the same path conventions that IBM's Transarc Labs used before the code was forked. Understandably, old AFS setups continue to use these legacy path conventions. More recent setups conform to the FHS by using standard locations (as seen in many Linux distributions). The following table is compiled from the configure script and the README accompanying the OpenAFS distribution tarballs:

Directory Purpose Transarc mode Default mode Gentoo location
viceetcdir Client configuration /usr/vice/etc $(sysconfdir)/openafs /etc/openafs
unnamed Client binaries unspecified $(bindir) /usr/bin
afsconfdir Server configuration /usr/afs/etc $(sysconfdir)/openafs/server /etc/openafs/server
afssrvdir Internal server binaries /usr/afs/bin (servers) $(libexecdir)/openafs /usr/libexec/openafs
afslocaldir Server state /usr/afs/local $(localstatedir)/openafs /var/lib/openafs
afsdbdir Auth/serverlist/... databases /usr/afs/db $(localstatedir)/openafs/db /var/lib/openafs/db
afslogdir Log files /usr/afs/logs $(localstatedir)/openafs/logs /var/lib/openafs/logs
afsbosconfig Overseer config $(afslocaldir)/BosConfig $(afsconfdir)/BosConfig /etc/openafs/BosConfig

There are some other oddities, like binaries being put in /usr/vice/etc in Transarc mode, but this list is not intended to be comprehensive. Rather, it is meant as a reference for those troubleshooting the config file transition.

Also as a result of the path changes, the default disk cache location has changed from /usr/vice/cache to /var/cache/openafs . Please note, however, that /var/cache/openafs is not created by the ebuild; you will need to create it yourself.

Furthermore, the init-script has been split into a client and a server part. You used to have /etc/init.d/afs , but now you'll end up with both /etc/init.d/openafs-client and /etc/init.d/openafs-server . Consequently, the configuration file /etc/conf.d/afs has been split into /etc/conf.d/openafs-client and /etc/conf.d/openafs-server . Also, options in /etc/conf.d/afs to turn either client or server on or off have been obsoleted.

Another change to the init script is that it doesn't check your disk cache setup anymore. The old code required that a separate ext2 partition be mounted at /usr/vice/cache . There were some problems with that:

  • Though it's a very logical setup, your cache doesn't need to be on a separate partition. As long as you make sure that the amount of space specified in /etc/openafs/cacheinfo really is available for disk cache usage, you're safe. So there is no real problem with having the cache on your root partition.
  • Some people use soft-links to point to the real disk cache location. The init script didn't like this, because then this cache location didn't turn up in /proc/mounts .
  • Many prefer ext3 over ext2 nowadays. Both filesystems are valid for use as a disk cache. Any other filesystem is unsupported (e.g. don't try reiserfs; you'll get a huge warning, and you can expect failure afterwards).

Transition to the new paths

First of all, emerging a newer OpenAFS version should not overwrite any old configuration files. The script is designed to not change any files already present on the system. So even if you have a totally messed up configuration with a mix of old and new locations, the script should not cause further problems. Also, if a running OpenAFS server is detected, the installation will abort, preventing possible database corruption.

One caveat though -- there have been ebuilds floating around the internet that partially disable the protection that Gentoo puts on /etc . These ebuilds have never been distributed by Gentoo. You might want to check the CONFIG_PROTECT_MASK variable in the output of the following command:

root #emerge info | grep "CONFIG_PROTECT_MASK"
CONFIG_PROTECT_MASK="/etc/gconf /etc/terminfo /etc/texmf/web2c /etc/env.d"

Though nothing in this ebuild would touch the files in /etc/afs , upgrading will cause the removal of your older OpenAFS installation. If /etc/afs appears in CONFIG_PROTECT_MASK, configuration files belonging to the older installation will be removed as well.

It should be clear to the experienced user that if they have tweaked their system by manually adding soft links (e.g. from /usr/afs/etc to /etc/openafs ), the new installation may run fine while still using the old configuration files. In that case no real transition has taken place, and cleaning up the old installation will result in a broken OpenAFS config.

Now that you know what doesn't happen, you may want to know what does:

  • /usr/afs/etc is copied to /etc/openafs/server
  • /usr/vice/etc is copied to /etc/openafs
  • /usr/afs/local is copied to /var/lib/openafs
  • /usr/afs/local/BosConfig is copied to /etc/openafs/BosConfig , while replacing occurrences of /usr/afs/bin/ with /usr/libexec/openafs , /usr/afs/etc with /etc/openafs/server and /usr/afs/bin (this time without the trailing slash) with /usr/bin
  • /usr/afs/db is copied to /var/lib/openafs/db
  • The configuration file /etc/conf.d/afs is copied to /etc/conf.d/openafs-client , as all known old options were destined for client usage only.

The upgrade itself

So you haven't got an OpenAFS server setup? Or maybe you do, and the previous sections have informed you about what is going to happen, and you're still ready for it?

Let's do it then!

If your server is running, shut it down.

root #/etc/init.d/afs stop

And then perform the upgrade itself.

root #emerge --ask openafs

Restarting OpenAFS

If you had an OpenAFS client running, you haven't been required to shut it down until now. Now is the time to do so.

root #/etc/init.d/afs stop

If you want to keep downtime to a minimum, you can restart your OpenAFS server right away.

root #/etc/init.d/openafs-server start

You can check whether it is running properly with the following command:

root #/usr/bin/bos status localhost -localauth

Before starting the OpenAFS client, take some time to review your cache settings. They are defined in the file /etc/openafs/cacheinfo . To restart the OpenAFS client, type the following:

root #/etc/init.d/openafs-client start

Cleaning up after the upgrade

Before cleaning up, please make really sure that everything runs smoothly and that you have restarted after the upgrade (otherwise, you may still be running your old installation).

Important
Please make sure you're not using /usr/vice/cache for disk cache if you are deleting /usr/vice !!

The following directories may be safely removed from the system:

  • /etc/afs
  • /usr/vice
  • /usr/afs
  • /usr/afsws

The following files are also unnecessary:

  • /etc/init.d/afs
  • /etc/conf.d/afs
root #tar czf /root/oldafs-backup.tgz /etc/afs /usr/vice /usr/afs /usr/afsws
root #rm -R /etc/afs /usr/vice /usr/afs /usr/afsws
root #rm /etc/init.d/afs /etc/conf.d/afs

In case you've previously used ebuilds =openafs-1.2.13 or =openafs-1.3.85, you may also have some other unnecessary files:

  • /etc/init.d/afs-client
  • /etc/init.d/afs-server
  • /etc/conf.d/afs-client
  • /etc/conf.d/afs-server

Init Script changes

Now most people would have their systems configured to automatically start the OpenAFS client and server on startup. Those who don't can safely skip this section. If you had your system configured to start them automatically, you will need to re-enable this, because the names of the init scripts have changed.

root #rc-update del afs default
root #rc-update add openafs-client default
root #rc-update add openafs-server default

If you had =openafs-1.2.13 or =openafs-1.3.85 , you should remove afs-client and afs-server from the default runlevel, instead of afs .

Troubleshooting: what if the automatic upgrade fails

Don't panic. You haven't lost any data or configuration files. So let's analyze the situation. In any case, please file a bug at bugs.gentoo.org, preferably with as much information as possible.

If you're having problems starting the client, the following should help you diagnose the problem (a consolidated command sketch follows this list):

  • Run dmesg . The client normally sends error messages there.
  • Check /etc/openafs/cacheinfo . It should be of the form: /afs:{path to disk cache}:{number of blocks for disk cache}. Normally, your disk cache will be located at /var/cache/openafs .
  • Check the output of lsmod . You will want to see a line beginning with the word openafs.
  • pgrep afsd will tell you whether afsd is running or not
  • cat /proc/mounts should reveal whether /afs has been mounted.
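
The checks above, expressed as a quick command sequence (a sketch):

root #dmesg | tail
root #cat /etc/openafs/cacheinfo
root #lsmod | grep openafs
root #pgrep afsd
root #grep afs /proc/mounts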

If you're having problems starting the server, then these hints may be useful:

  • pgrep bosserver tells you whether the overseer is running or not. If you have more than one overseer running, something has gone wrong. In that case, try a graceful OpenAFS server shutdown with bos shutdown localhost -localauth -wait , check the result with bos status localhost -localauth , kill all remaining overseer processes and then finally check whether any server processes are still running ( ls /usr/libexec/openafs gives you a list of them). Afterwards, run rc-service openafs-server zap to reset the status of the server and rc-service openafs-server start to try launching it again (a consolidated sketch of this sequence follows this list).
  • If you're using OpenAFS' own logging system (which is the default setting), check out /var/lib/openafs/logs/* . If you're using the syslog service, go check out its logs for any useful information.
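
Put together, such a recovery might look like this (a sketch; pkill stands in for the "kill all remaining overseer processes" step):

root #bos shutdown localhost -localauth -wait
root #bos status localhost -localauth
root #pkill bosserver
root #rc-service openafs-server zap
root #rc-service openafs-server start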

Documentation

Getting AFS Documentation

You can get the original IBM AFS documentation. It is very well written, and you will really want to read it if you are the one administering an AFS server.

root #emerge --ask app-doc/afsdoc

You also have the option of using the documentation delivered with OpenAFS. It is installed when you have the USE flag doc enabled while emerging OpenAFS. It can be found in /usr/share/doc/openafs-*/ . At the time of writing, this documentation was a work in progress. It may however document newer features in OpenAFS that aren't described in the original IBM AFS Documentation.

Client Installation

Building the Client

root #emerge --ask net-fs/openafs

After a successful compilation you're ready to go.

A simple global client installation

If you are not part of a specific OpenAFS cell you want to access, and you just want to browse globally available OpenAFS resources, you can simply install OpenAFS, leave its configuration untouched, and start /etc/init.d/openafs-client.

Accessing a specific OpenAFS cell

If you need to access a specific cell, say your university's or company's own cell, then some adjustments to your configuration have to be made.

Firstly, you need to update /etc/openafs/CellServDB with the database servers for your cell. This information is normally provided by your administrator.

Secondly, in order to be able to log onto the OpenAFS cell, you need to specify its name in /etc/openafs/ThisCell .

CODE Adjusting CellServDB and ThisCell
CellServDB:
>netlabs        #Cell name
10.0.0.1        #storage
  
ThisCell:
netlabs


Warning
Only use spaces inside the CellServDB file. The client will most likely fail if you use TABs.

CellServDB tells your client which server(s) it needs to contact for a specific cell. ThisCell should be quite obvious. Normally you use a name which is unique for your organisation. Your (official) domain might be a good choice.

For a quick start, you can now start openafs-client with rc-service and use kinit; aklog to authenticate yourself and start using your access to the cell. For automatic logons to your cell, consult the appropriate section below.
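
A minimal session might look like this (a sketch; the user alice and realm EXAMPLE.COM are hypothetical placeholders):

user $kinit alice
Password for alice@EXAMPLE.COM: ********
user $aklog
user $tokens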

Adjusting the cache

Note
Unfortunately the AFS client needs an ext2/3 filesystem for its cache to run correctly. There are some issues when using other filesystems (e.g. using reiserfs is not a good idea).

You can house your cache on an existing filesystem (if it's ext2/3), or you may want to have a separate partition for that. The default location of the cache is /var/cache/openafs , but you can change that by editing /etc/openafs/cacheinfo . A standard size for your cache is 200MB, but more won't hurt.
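
For instance, a cacheinfo file for a 200 MB cache at the default location would look like this (the fields are the AFS mount point, the cache directory, and the cache size in 1 KB blocks):

FILE /etc/openafs/cacheinfo
/afs:/var/cache/openafs:200000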

Starting AFS on startup

The following command will create the appropriate links to start your AFS client on system startup.

Warning
Unless afsd is started with the -dynroot option, you should always have a running AFS server in your domain when trying to start the AFS client. If your AFS server is down, your system won't finish booting until some timeout is reached (and this is quite a long time).
root #rc-update add openafs-client default

Server Installation

Installing the Kerberos Server

OpenAFS requires Kerberos 5 for authentication. The following shows how to install an MIT Kerberos server. Alternatively, the Heimdal Kerberos implementation can be used.

Important
Kerberos requires the clocks of the Kerberos server and its clients to be synchronized. Make sure you have the ntpd server set up.

Install the MIT Kerberos server binaries with the following command:

root #emerge --ask mit-krb5

Edit the /etc/krb5.conf and /etc/kdc.conf configuration files. Replace the EXAMPLE.COM realm name with your realm name, and update the example hostnames with your actual hostnames.
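
A minimal sketch of the realm configuration in /etc/krb5.conf (EXAMPLE.COM and kerberos.example.com are placeholders to substitute):

FILE /etc/krb5.conf
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kerberos.example.com
        admin_server = kerberos.example.com
    }

[domain_realm]
    .example.com = EXAMPLE.COM
    example.com = EXAMPLE.COM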

Note
By convention, your Kerberos realm name should match your internet domain name, except that the Kerberos realm name is in uppercase letters.

Create the Kerberos database like so:

root #mkdir /etc/krb5kdc
root #kdb5_util create -s

Building the Server

Note
All commands should be written on one line!! In this document they are sometimes wrapped to two lines to make them easier to read.

If you haven't already done so, the following command will install all necessary binaries for setting up an AFS Server and Client.

root #emerge --ask net-fs/openafs

Keying the Server

As of OpenAFS version 1.6.5, the OpenAFS servers support strong crypto (AES, etc.) for the service key, and will read the Kerberos keytab file directly. Create the Kerberos service key for OpenAFS and export it to a keytab for the OpenAFS server processes, before starting the OpenAFS services.

root #kadmin.local -q "addprinc -randkey afs/<cellname>"
root #kadmin.local -q "ktadd -k /etc/openafs/server/rxkad.keytab afs/<cellname>"
Important
It is critical to keep the rxkad.keytab file confidential. The security of the files in your AFS cell depends on the service key it contains.

Starting the AFS Server

You need to run the bosserver command to initialize the Basic OverSeer (BOS) Server, which monitors and controls other AFS server processes on its server machine. Think of it as init for the system.

Note
As of OpenAFS 1.6.0, it is no longer necessary to include the -noauth flag to disable authentication. This makes the setup more secure, since there is not a window in which the servers are running with authentication disabled. This also has the nice side effect of greatly simplifying the server setup procedure.

Start the OpenAFS bosserver.

root #/etc/init.d/openafs-server start

Ensure the OpenAFS servers start on reboot:

root #rc-update add openafs-server default

Verify that the BOS Server created /etc/openafs/server/CellServDB and /etc/openafs/server/ThisCell:

root #ls -al /etc/openafs/server/
-rw-r--r--    1 root     root           41 Jun  4 22:21 CellServDB
-rw-r--r--    1 root     root            7 Jun  4 22:21 ThisCell

Defining Cell Name for Server Processes

Now assign your cell's name.

Important
There are some restrictions on the name format. Two of the most important restrictions are that the name cannot include uppercase letters or more than 64 characters. Remember that your cell name will show up under /afs , so you might want to choose a short one. If your AFS service is to be accessible over the internet, you should use a registered internet domain name for your cell's name. This avoids conflicts in the global AFS namespace.
Note
In this and every following instruction in this guide, substitute the fully-qualified hostname (such as afs.gentoo.org ) of the machine you are installing for the SERVER_NAME argument. For the CELL_NAME argument, substitute your cell's complete name (such as gentoo ).

Run the bos setcellname command to set the cell name:

root #bos setcellname localhost CELL_NAME -localauth

Starting the Database Server Process

Next use the bos create command to create entries for the three database server processes in the /etc/openafs/BosConfig file. The three processes run on database server machines only.

Process Description
buserver The Backup Server maintains the Backup Database
ptserver The Protection Server maintains the Protection Database
vlserver The Volume Location Server maintains the Volume Location Database (VLDB). Very important :)
Note
OpenAFS includes a Kerberos 4 server, called kaserver. The kaserver is obsolete and should not be used for new installations.
root #bos create localhost buserver simple /usr/libexec/openafs/buserver -cell CELL_NAME -localauth
root #bos create localhost ptserver simple /usr/libexec/openafs/ptserver -cell CELL_NAME -localauth
root #bos create localhost vlserver simple /usr/libexec/openafs/vlserver -cell CELL_NAME -localauth

You can verify that all servers are running with the bos status command:

root #bos status localhost -localauth
Instance buserver, currently running normally.
Instance ptserver, currently running normally.
Instance vlserver, currently running normally.

Starting the first File Server, Volume Server and Salvager

Start the fs process, which consists of the File Server, Volume Server and Salvager (fileserver, volserver and salvager processes).

root #bos create localhost fs fs /usr/libexec/openafs/fileserver /usr/libexec/openafs/volserver /usr/libexec/openafs/salvager -localauth

Verify that all processes are running:

root #bos status localhost -long -localauth
  
Instance buserver, (type is simple) currently running normally.
Process last started at Mon Jun  4 21:07:17 2001 (2 proc starts)
Last exit at Mon Jun  4 21:07:17 2001
Command 1 is '/usr/libexec/openafs/buserver'
  
Instance ptserver, (type is simple) currently running normally.
Process last started at Mon Jun  4 21:07:17 2001 (2 proc starts)
Last exit at Mon Jun  4 21:07:17 2001
Command 1 is '/usr/libexec/openafs/ptserver'
  
Instance vlserver, (type is simple) currently running normally.
Process last started at Mon Jun  4 21:07:17 2001 (2 proc starts)
Last exit at Mon Jun  4 21:07:17 2001
Command 1 is '/usr/libexec/openafs/vlserver'
  
Instance fs, (type is fs) currently running normally.
Auxiliary status is: file server running.
Process last started at Mon Jun  4 21:09:30 2001 (2 proc starts)
Command 1 is '/usr/libexec/openafs/fileserver'
Command 2 is '/usr/libexec/openafs/volserver'
Command 3 is '/usr/libexec/openafs/salvager'

Your next action depends on whether you have ever run AFS file server machines in the cell.

If you are installing the first AFS Server ever in the cell, create the first AFS volume, root.afs

Note
For the partition name argument, substitute the name of one of the machine's AFS Server partitions. Any filesystem mounted under a directory called /vicepx , where x is in the range of a-z, will be considered and used as an AFS Server partition. Any unix filesystem will do (as opposed to the client's cache, which can only be ext2/3). Tip: the server checks for each /vicepx mount point whether a filesystem is mounted there. If not, the server will not attempt to use it. This behaviour can be overridden by putting a file named AlwaysAttach in this directory.
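
For example, to force a plain directory on the root filesystem to be used as the server partition /vicepa (a minimal sketch using the AlwaysAttach override described above):

root #mkdir /vicepa
root #touch /vicepa/AlwaysAttach

With a server partition in place, create the volume: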
root #vos create localhost PARTITION_NAME root.afs -localauth

If there are existing AFS file server machines and volumes in the cell, issue the vos syncvldb and vos syncserv commands to synchronize the VLDB (Volume Location Database) with the actual state of volumes on the local machine. This will copy all necessary data to your new server.

If the command fails with the message "partition /vicepa does not exist on the server", ensure that the partition is mounted before running OpenAFS servers, or mount the directory and restart the processes using bos restart localhost -all -cell CELL_NAME -localauth .

root #vos syncvldb localhost -verbose -localauth
root #vos syncserv localhost -verbose -localauth

Starting the Server Portion of the Update Server

root #bos create localhost upserver simple "/usr/libexec/openafs/upserver -crypt /etc/openafs/server -clear /usr/libexec/openafs" -localauth

Creating the first Administrative Account

An administrative account is needed to complete the cell setup and to perform ongoing administration. The first account must be created directly on the servers. Additional accounts may then be created without direct ssh access to the servers.

Note
In the following descriptions and commands, substitute all instances of USERNAME with your actual user name.

Four tasks need to be done to create the first administrative account.

  • a Kerberos principal, by convention, in the form of USERNAME/admin
  • an AFS user, by convention, in the form of USERNAME.admin
  • membership in the built-in AFS system::administrators group
  • membership in the OpenAFS superuser list
Note
Any name may be used for the administrator principal, for example, "admin" or "afsadmin". If you create an admin principal that does not follow the USERNAME/admin pattern, be sure to update the Kerberos KDC access control list in the kadm5.acl configuration file.
Important
The Kerberos principal contains a slash "/" separator, but unfortunately AFS uses a dot "." separator. Be sure to mind the difference.

Create the Kerberos principal. Run the following command on the Kerberos server, as root:

root #kadmin.local -q "addprinc USERNAME/admin"

Create the AFS admin user. Run this command on the OpenAFS database server, as root:

root #pts createuser USERNAME.admin -localauth

Add the AFS admin user to the built-in admin group. Run this command on the OpenAFS database server, as root:

root #pts adduser USERNAME.admin system:administrators -localauth

Add the AFS admin user to the superuser list. Run this command on each OpenAFS server, as root:

root #bos adduser localhost USERNAME.admin -localauth
Note
If you run into issues later regarding insufficient permission, and your AFS cell name differs from your Kerberos realm name, this problem can be remedied by putting your realm name in the /etc/openafs/server/krb.conf configuration file.
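
That file simply lists the realm name. A sketch, assuming the realm EXAMPLE.COM:

FILE /etc/openafs/server/krb.conf
EXAMPLE.COM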

Configuring the Top Level of the AFS filespace

At this point the server configuration is complete. You will need a running AFS client to set up the top level directories in AFS and grant access rights to them. This client does not need to be installed on the OpenAFS server. You will need to obtain your administrative credentials. Root access is not required for the commands in this section.

First, obtain your administrative credentials:

user $kinit USERNAME/admin
Password for USERNAME/admin@REALM: ********
user $aklog
user $tokens
Tokens held by the Cache Manager:
 
User's (AFS ID 1) tokens for afs@mycellname.com [Expires Oct 21 20:26]
   --End of list--

Next you need to set some ACLs, so that any user can look up /afs .

Note
The default OpenAFS client configuration has dynroot enabled. This option turns /afs into a virtual directory composed of the contents of your /etc/openafs/CellServDB file. Fortunately, dynroot provides a way to access volumes by name using the "magic" /afs/.:mount/ directory. This obviates the need to disable dynroot and restart the client.
user $fs setacl /afs/.:mount/CELL_NAME:root.afs/. system:anyuser rl

Then you need to create the root volume, mount it read-only at /afs/<cell name> and read-write at /afs/.<cell name> .

user $vos create SERVER_NAME PARTITION_NAME root.cell
user $fs mkmount /afs/.:mount/CELL_NAME:root.afs/CELL_NAME root.cell
user $fs setacl /afs/.:mount/CELL_NAME:root.afs/CELL_NAME system:anyuser rl
user $fs mkmount /afs/.:mount/CELL_NAME:root.afs/.CELL_NAME root.cell -rw


At this point, you can create volumes for your new AFS site and add them to the filespace. Users and groups should be created and directory ACLs set up to allow users to create files and directories. To create and mount a volume:

user $vos create SERVER_NAME PARTITION_NAME VOLUME_NAME
user $fs mkmount /afs/CELL_NAME/MOUNT_POINT VOLUME_NAME
user $fs mkmount /afs/CELL_NAME/.MOUNT_POINT VOLUME_NAME -rw
user $fs setquota /afs/CELL_NAME/.MOUNT_POINT -max QUOTUM
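
As a concrete (hypothetical) illustration: assuming the server afs.gentoo.org with a /vicepa partition in the cell gentoo, and an existing /afs/gentoo/home directory, creating a home volume for user alice with a quota of roughly 1 GB (the quota is given in 1 KB blocks) might look like:

user $vos create afs.gentoo.org a home.alice
user $fs mkmount /afs/gentoo/home/alice home.alice
user $fs setquota /afs/gentoo/home/alice -max 1000000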

Finally you're done!!! You should now have a working AFS file server on your local network. Time to get a big cup of coffee and print out the AFS documentation!!!

Note
For the AFS servers to function properly, it is very important that all system clocks are synchronized. This is best accomplished by installing an NTP server on one machine (e.g. the AFS server) and synchronizing all client clocks with an NTP client. This can also be done by the AFS client.

Basic Administration

Disclaimer

OpenAFS is an extensive technology. Please read the AFS documentation for more information. We only list a few administrative tasks in this chapter.

Configuring PAM to Acquire an AFS Token on Login

To use AFS you need to authenticate against the Kerberos 5 KDC (MIT, Heimdal, ShiShi Kerberos 5, or Microsoft Active Directory). However, in order to log in to a machine you will also need a user account; this can be local in /etc/passwd , NIS, LDAP (OpenLDAP), or a Hesiod database. PAM allows Gentoo to tie AFS authentication to the user account login.

Note
This section is out of date. See Enabling AFS Login on Linux Systems

You will need to update /etc/pam.d/system-auth , which is used by the other configurations. "use_first_pass" indicates that it will be checked first against the user login, and "ignore_root" stops the local superuser from being checked, so as to allow login if AFS or the network fails.

FILE /etc/pam.d/system-auth
auth       required     pam_env.so
auth       sufficient   pam_unix.so likeauth nullok
auth       sufficient   pam_afs.so.1 use_first_pass ignore_root
auth       required     pam_deny.so
  
account    required     pam_unix.so
  
password   required     pam_cracklib.so retry=3
password   sufficient   pam_unix.so nullok md5 shadow use_authtok
password   required     pam_deny.so
  
session    required     pam_limits.so
session    required     pam_unix.so

In order for su to keep the real user's token and to prevent local users from gaining AFS access, change /etc/pam.d/su as follows:

FILE /etc/pam.d/su
# Here, users with uid > 100 are considered to belong to AFS and users with
# uid <= 100 are ignored by pam_afs.
auth       sufficient   pam_afs.so.1 ignore_uid 100
  
auth       sufficient   pam_rootok.so
  
# If you want to restrict users being allowed to su even more,
# create /etc/security/suauth.allow (or whatever you choose) that is only
# writable by root, and add users that are allowed to su to that
# file, one per line.
#auth       required     pam_listfile.so item=ruser \
#       sense=allow onerr=fail file=/etc/security/suauth.allow
  
# Uncomment this to allow users in the wheel group to su without
# entering a passwd.
#auth       sufficient   pam_wheel.so use_uid trust
  
# Alternatively to above, you can implement a list of users that do
# not need to supply a passwd with a list.
#auth       sufficient   pam_listfile.so item=ruser \
#       sense=allow onerr=fail file=/etc/security/suauth.nopass
  
# Comment this to allow any user, even those not in the 'wheel'
# group to su
auth       required     pam_wheel.so use_uid
  
auth       required     pam_stack.so service=system-auth
  
account    required     pam_stack.so service=system-auth
  
password   required     pam_stack.so service=system-auth
  
session    required     pam_stack.so service=system-auth
session    optional     pam_xauth.so
  
# Here we prevent the real user id's token from being dropped
session    optional     pam_afs.so.1 no_unlog

This page is based on a document formerly found on our main website gentoo.org.
The following people contributed to the original document: Stefaan De Roeck, Holger Brueckner, Benny Chuang, Tiemo Kieft, Steven McCoy, Shyam Mani
They are listed here because wiki history does not allow for any external attribution. If you edit the wiki article, please do not add yourself here; your contributions are recorded on each article's associated history page.