CHAPTER 4: Installing Additional Client Machines

    Summary of Procedures
    4.1  Loading Client Files onto the Local Disk
         4.1.1  Loading Files for a Machine of an Existing System Type
         4.1.2  Loading Files for a Machine of a New System Type
                4.1.2.1  Loading Files Using a Local Tape Drive
                4.1.2.2  Loading Files from a Remote Machine
    4.2  Incorporating AFS Modifications into the Kernel
         4.2.1  Using the Kernel Extension Facility on AIX Systems
         4.2.2  Incorporating AFS into the Kernel on Digital UNIX Systems
         4.2.3  Incorporating AFS into the Kernel on HP-UX Systems
         4.2.4  Incorporating AFS into the Kernel on IRIX Systems
         4.2.5  Incorporating AFS into the Kernel on NCR UNIX Systems
         4.2.6  Incorporating AFS into the Kernel on Solaris Systems
         4.2.7  Incorporating AFS into the Kernel on SunOS Systems
         4.2.8  Incorporating AFS into the Kernel on Ultrix Systems
    4.3  Defining the Machine's Cell Membership and Creating CellServDB
    4.4  Setting Up the Cache
         4.4.1  Setting Up a Disk Cache
         4.4.2  Setting Up a Memory Cache
    4.5  Creating /afs and Starting the Cache Manager
    4.6  Setting Up Volumes and Loading Binaries into AFS
         4.6.1  Linking /usr/afsws on an Existing System Type
         4.6.2  Creating Binary Volumes for a New System Type
    4.7  Enabling AFS login
         4.7.1  Enabling AFS login on AIX 3.2 Systems
         4.7.2  Enabling AFS login on AIX 4.1 Systems
         4.7.3  Enabling AFS login on IRIX Systems
         4.7.4  Enabling AFS login on Other System Types
    4.8  Altering File System Clean-Up Scripts on Sun Systems


 4. INSTALLING ADDITIONAL CLIENT MACHINES

This chapter describes how to install any AFS client machine after the first AFS
machine (which should already have been installed following the instructions in
Chapter 2).

Some parts of client installation differ, depending on whether or not the new
client is of the same AFS system type (uses the same AFS binaries) as a
previously installed client machine.  Determine if the client you are installing
is the same system type as an existing client, and follow the appropriate
procedures.

Summary of Procedures

     1. Load files that must reside on the local disk.

     2. Incorporate AFS into the machine's kernel, either dynamically or by
        replacing /vmunix or its equivalent.

     3. Define the machine's cell membership.

     4. Define cache location and size.

     5. Create the /usr/vice/etc/CellServDB file to determine which cells
        the client can contact.

     6. Create the /afs directory and start the Cache Manager.

     7. Set up volumes (necessary only for clients of a new system type).

     8. Load client binaries into AFS (necessary only for clients of a new
        system type).

     9. Create a link from the local /usr/afsws to the AFS directory housing
        the binaries.

    10. Replace the standard login binary with a version that both
        authenticates with AFS and logs into the local UNIX file system.

 4.1. LOADING CLIENT FILES ONTO THE LOCAL DISK

The first step in installing a client is to load AFS client binaries onto the
local disk, namely the afsd binary and the files needed to incorporate AFS
using a dynamic kernel loader (dkload or its equivalent).  These files must
reside on the local disk of every client machine.

It is assumed that you have previously installed at least one machine as an AFS
client (such as the first machine in your cell).  How you load files on this
machine depends on whether it is the same system type as an existing client, or
a new type.

If the machine being installed is the same system type as an existing client,
the files you need to load onto the local disk should already reside in the AFS
directory /afs/cellname/sysname/usr/afsws.  In this case, you copy the files
over to the new client using a network transfer program.  See Section 4.1.1.
(Use of this directory name assumes you followed the naming recommendations in
Sections 2.31 and 4.6 when installing previous machines.  If you substituted
different directory names, use them throughout this section.)

If the machine being installed is a new system type, the necessary client files
do not reside in AFS.  You must load them from the AFS Binary Distribution Tape,
following the instructions appropriate for your machine configuration.

There are two ways to approach loading the client binaries for a new system
type:

 - load the files onto the local disk.  See Section 4.1.2 and then proceed
through this chapter in order.

 - first create and mount volumes to house the binaries for the new system type
and load the appropriate files into the volumes, working on an existing client
machine (of any type).  This effectively transforms this system type into an
existing one.  To do this, skip now to Section 4.6.2 and execute the
instructions there.  When finished, return to this section and follow the
instructions for installing a machine of an existing system type.

 4.1.1. LOADING FILES FOR A MACHINE OF AN EXISTING SYSTEM TYPE

The following instructions assume that the necessary client files for this
system type already reside in /afs/cellname/sysname/usr/afsws (as recommended in
Sections 2.31 and 4.6).  They copy the afsd and dynamic kernel loader files
found in the root.client/usr/vice/etc directory (and its subdirectories, if any)
into /usr/vice/etc on the local disk.  These files must reside on the local disk
of every client machine.

Step 1: Create /usr/vice/etc on the local disk.

	-------------------------
	# mkdir /usr/vice     
	# mkdir /usr/vice/etc 
	-------------------------

Step 2: Using ftp, NFS, or another network file transfer program, access
the files stored in

/afs/cellname/sysname/usr/afsws/root.client/usr/vice/etc

(and its subdirectories, if any) via an existing client machine of any system
type, and copy them into /usr/vice/etc on the client machine being installed.
Substitute the system type name of the machine being installed for sysname.
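
For example, if an existing client machine in your cell makes /afs available
via NFS (which requires it to be an NFS/AFS translator machine), one possible
transfer is to mount /afs from that machine and copy with cp; the host name
"existing-client" here is hypothetical:

	---------------------------------------------------------------------------
	# mount existing-client:/afs /mnt
	# cp -rp /mnt/<cellname>/<sysname>/usr/afsws/root.client/usr/vice/etc/* \
	    /usr/vice/etc
	# umount /mnt
	---------------------------------------------------------------------------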

 4.1.2. LOADING FILES FOR A MACHINE OF A NEW SYSTEM TYPE

If you are installing an AFS client machine of a new system type, you must load
the needed client files onto the local disk.  Follow the instructions
appropriate for your machine configuration.

 4.1.2.1. LOADING FILES USING A LOCAL TAPE DRIVE

The fourth tar set on the AFS Binary Distribution Tape contains the
/usr/vice/etc directory, consisting of afsd and the files needed to incorporate
AFS using the dynamic kernel loader for this system type (dkload or equivalent).
The following steps load it into /usr/vice/etc.

Step 1: Create the /usr/vice/etc directory.

	-------------------------
	# mkdir /usr/vice     
	# mkdir /usr/vice/etc 
	-------------------------

Step 2: Mount the Binary Distribution Tape and load fourth tar set into
/usr/vice/etc.

-------------------------------------------------------------------------------
On AIX systems: Before reading the tape, verify that block size is set to 0
(meaning variable block size); if necessary, use SMIT to set block size to 0.
Also, substitute tctl for mt.

On HP-UX systems: Substitute mt -t for mt -f.                                 

On all system types: For <device>, substitute the name of the tape device for 
your system that does not rewind after each operation.                        

	# cd /usr/vice/etc
	# mt -f /dev/<device> rewind
	# mt -f /dev/<device> fsf 3
	# tar xvf /dev/<device>
-------------------------------------------------------------------------------

 4.1.2.2. LOADING FILES FROM A REMOTE MACHINE

If the local machine does not have a tape drive, you must load the files found
in the Binary Distribution Tape's fifth tar set into the /usr/afsws directory on
a remote machine that has a tape drive.  You then copy the appropriate files to
the local machine's disk using ftp, NFS, or another file transfer method.

Step 1: Working on the remote machine, create the /usr/afsws directory.

	----------------------
	# mkdir /usr/afsws 
	----------------------

Step 2: Mount the AFS Binary Distribution Tape on the remote machine and
load the fifth tar set into /usr/afsws.

---------------------------------------------------------------------------------
On AIX systems: Before reading the tape, verify that block size is set to 0
(meaning variable block size); if necessary, use SMIT to set block size to 0.
Also, substitute tctl for mt.

On HP-UX systems: Substitute mt -t for mt -f.                                 

On all system types: For <device>, substitute the name of the tape device for
your system that does not rewind after each operation.

	# cd /usr/afsws
	# mt -f /dev/<device> rewind
	# mt -f /dev/<device> fsf 4
	# tar xvf /dev/<device>
---------------------------------------------------------------------------------

Step 3: Working on the local machine, remove any old version of /usr/vice
that may exist and create the /usr/vice/etc directory.

	-------------------------
	# rmdir /usr/vice     
	# mkdir /usr/vice     
	# mkdir /usr/vice/etc 
	-------------------------

Step 4: Using the appropriate (local or remote) file transfer program,
copy the files from /usr/afsws/root.client/usr/vice/etc (and its subdirectories,
if any) to /usr/vice/etc on the local machine.
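
For example, one possible transfer uses rcp from the local machine; the remote
machine name "tapehost" is hypothetical, and the remote machine must permit
rcp access from this machine (the quotes make the remote shell expand the
wildcard):

	------------------------------------------------------------------
	# rcp -r "tapehost:/usr/afsws/root.client/usr/vice/etc/*" \
	    /usr/vice/etc
	------------------------------------------------------------------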

 4.2. INCORPORATING AFS MODIFICATIONS INTO THE KERNEL

Every AFS client machine's kernel must incorporate the modifications that make
up the Cache Manager.  There are two ways to add AFS into the kernel: dynamic
loading or kernel building.

If dynamic loading is possible for your system type, it is the recommended
method, as it is significantly quicker and easier than kernel building.  A
dynamic loader adds the AFS modifications to the memory version of the kernel
created at each reboot, without altering the disk version (/vmunix or its
equivalent).

Sections 4.2.1 through 4.2.8 explain how to incorporate AFS into the kernel on
each system type.  Follow the instructions appropriate for your system type.
When you have finished, proceed to Section 4.3 (page 4-35).

	For AIX, see Section 4.2.1 on page 4-8.

	For Digital UNIX, see Section 4.2.2 on page 4-11.

	For HP-UX, see Section 4.2.3 on page 4-13.

	For IRIX, see Section 4.2.4 on page 4-16.

	For NCR UNIX, see Section 4.2.5 on page 4-22.

	For Solaris, see Section 4.2.6 on page 4-24.

	For SunOS, see Section 4.2.7 on page 4-27.

	For Ultrix, see Section 4.2.8 on page 4-32.

 4.2.1. USING THE KERNEL EXTENSION FACILITY ON AIX SYSTEMS

The AIX kernel extension facility is a dynamic kernel loader provided by IBM
Corporation for AIX.  Transarc's dkload program is not available for this system
type, nor is it possible to add AFS during a kernel build.

For this machine to remain an AFS machine, the kernel extension facility must
run each time the machine reboots.  You can invoke the facility automatically in
the machine's initialization files, as explained in Step 3 below.

To invoke the kernel extension facility:

Step 1: Verify that

 - the /usr/vice/etc/dkload directory on the local disk contains: afs.ext,
cfgafs, cfgexport, export.ext, and export.ext.nonfs.

 - NFS is already in the kernel, if you wish NFS to run on this machine; it must
be running for the machine to function as an NFS/AFS translator machine.  For
systems running AIX 3.2.2 or lower, this requires that you have loaded nfs.ext;
for version 3.2.3 and later, NFS loads automatically.

Step 2: Invoke cfgexport and cfgafs.

If this machine is to act as an NFS/AFS translator machine, you must make a
substitution in this step.  For details, consult the section entitled "Setting
Up an NFS/AFS Translator Machine" in the NFS/AFS Translator Supplement to the
AFS System Administrator's Guide.

----------------------------------------------------------------
	# cd /usr/vice/etc/dkload                                    

If the machine's kernel does not support NFS server functionality,
issue the following commands. The machine cannot function as an
NFS/AFS translator: 

	# ./cfgexport -a export.ext.nonfs                            
	# ./cfgafs -a afs.ext                                        

If the machine's kernel supports NFS server functionality, issue
the following commands. If the machine is to act as an NFS/AFS
translator machine, you must make the substitution specified in
the NFS/AFS Translator Supplement.

	# ./cfgexport -a export.ext                                  
	# ./cfgafs -a afs.ext
----------------------------------------------------------------

Step 3: IBM delivers several function-specific initialization files for
AIX systems, rather than the single file used on some other systems.  If you
want the kernel extension facility to run each time the machine reboots, verify
that it is invoked in the appropriate place in these initialization files.  An
easy way to add the needed commands is to copy the contents of
/usr/vice/etc/dkload/rc.dkload, which appear in Section 5.11.

The following list summarizes the order in which the commands must appear in
initialization files for the machine to function properly (you will add some of
the commands in later sections). 

 - NFS commands, if appropriate (for example, if the machine will act as an
NFS/AFS translator). For AIX version 3.2.2 or lower, commands loading the NFS
kernel extensions (nfs.ext) should appear here; with AIX version 3.2.3 and
higher, NFS is already loaded into the kernel. Then invoke nfsd if the machine
is to be an NFS server.  Note particularly that you should not invoke nfsd at
the same place in the initialization files as the commands that load nfs.ext;
nfsd must follow the call to afsd.

 - the contents of rc.dkload, to invoke the kernel extension facility.  If the
machine will act as an NFS/AFS translator machine, be sure to make the same
substitution as you made when you issued the cfgexport and cfgafs commands in
the previous step.

 - afsd (you will be instructed to add this command in Section 2.27)

Step 4: Proceed to Section 4.3 (page 4-35).

 4.2.2. INCORPORATING AFS INTO THE KERNEL ON DIGITAL UNIX SYSTEMS

On Digital UNIX systems, you must build AFS modifications into a new kernel
(dynamic loading is not possible).  If you already built a kernel during the
installation of a previous Digital UNIX machine, you can skip the first step
below.  If this is the first Digital UNIX machine installed in your cell,
perform all of the steps.

For the sake of consistency with other system types (on which both loading and
building are possible), the complete instructions for kernel building appear in
Chapter 5.

For this machine to remain an AFS machine, its initialization script must be
invoked each time it reboots.  Step 5 below explains how to install the script.

Step 1: If this is the first Digital UNIX machine installed in your cell,
follow the instructions in Section 5.2 (page 5-7) to build AFS modifications
into a new Digital UNIX kernel.

Step 2: Move the existing kernel on the local machine to a safe location.

	-------------------------------
	# mv  /vmunix  /vmunix_save 
	-------------------------------

Step 3: Use a copying program (either cp or a remote program such as ftp
or NFS) to copy the AFS-modified kernel to the appropriate location.

Step 4: Reboot the machine to start using the new kernel.

	------------------
	# shutdown -r now
	------------------

Step 5: Copy the afs.rc initialization script from /usr/vice/etc/dkload
to the initialization files directory (standardly, /sbin/init.d), make sure it
is executable, and link it to the two locations where Digital UNIX expects to
find it.

	---------------------------------------------
	# cd  /sbin/init.d                        
	# cp  /usr/vice/etc/dkload/afs.rc  afs    
	# chmod  555  afs                         
	# ln -s ../init.d/afs  /sbin/rc3.d/S99afs 
	# ln -s ../init.d/afs  /sbin/rc0.d/K66afs 
	---------------------------------------------

Step 6: Proceed to Section 4.3 (page 4-35).

 4.2.3. INCORPORATING AFS INTO THE KERNEL ON HP-UX SYSTEMS

To load AFS into the kernel on HP-UX systems, choose one of the following
methods:

 - dynamic loading using Transarc's dkload program. Proceed to Section 4.2.3.1.

 - installing a kernel that incorporates AFS changes.  Unless this is the first
HP-UX machine installed in your cell, a modified kernel may already exist, built
during the installation of a previous machine.  Proceed to Section 4.2.3.2.

 4.2.3.1. USING DKLOAD ON HP-UX SYSTEMS

The dkload program is the dynamic kernel loader provided by Transarc for HP-UX
systems. For this machine to remain an AFS machine, dkload must run each time
the machine reboots.  You can invoke dkload automatically in the machine's
initialization file (/etc/rc or equivalent), as explained in Step 3.

The files containing the AFS kernel modifications are libafs.a and
libafs.nonfs.a (the latter is appropriate if this machine's kernel does not
include support for NFS server functionality).

To invoke dkload:

Step 1: Verify that

 - there is at least one spare megabyte of space in /tmp for temporary files
created as dkload runs

 - the following are in /bin on the local disk (not as symbolic links): as, ld,
and nm

 - the /usr/vice/etc/dkload directory on the local disk contains:
dkload (the binary), libafs.a, libafs.nonfs.a, libcommon.a, and kalloc.o

Step 2: Invoke dkload.

------------------------------------------------------------------------------
If the machine's kernel does not include support for NFS server functionality,
you must substitute libafs.nonfs.a for libafs.a.  Either use the mv command to
replace libafs.a with libafs.nonfs.a in the /usr/vice/etc/dkload directory
(before issuing these commands), or make the substitution on the command line.

	# cd /usr/vice/etc/dkload
	# ./dkload libafs.a                                                            
------------------------------------------------------------------------------
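
For example, the command-line substitution for a kernel that lacks NFS server
support looks like this:

	------------------------------
	# ./dkload libafs.nonfs.a
	------------------------------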

Step 3: Modify the machine's initialization file (/etc/rc or equivalent)
to invoke dkload by copying the contents of /usr/vice/etc/dkload/rc.dkload (the
contents appear in full in Section 5.10).  Place the commands after the commands
that mount the file systems.  If the machine's kernel does not include support
for NFS server functionality, remember to substitute libafs.nonfs.a for
libafs.a.

Step 4: Proceed to Section 4.3 (page 4-35).

 4.2.3.2. BUILDING AFS INTO THE KERNEL ON HP-UX SYSTEMS

For the sake of consistency with other system types, the complete instructions
for kernel building appear in Chapter 5.

Step 1: If AFS modifications were not built into a kernel during the
installation of a previous HP-UX machine in your cell, follow the kernel
building instructions in Section 5.4 (page 5-16) for HP 700 systems or in
Section 5.5 (page 5-20) for HP 800 systems.

Step 2: Move the existing kernel on the local machine to a safe location.

	-----------------------------
	# mv  /hp-ux  /hp-ux_save 
	-----------------------------

Step 3: Use a copying program (either cp or a remote program such as ftp
or NFS) to copy the AFS-modified kernel to /hp-ux.  A standard location for the
AFS-modified kernel is /etc/conf/hp-ux for HP Series 700 systems and
/etc/conf/<conf_name>/hp-ux for HP Series 800 systems.

Step 4: Reboot the machine to start using the new kernel.

	---------------
	# shutdown -r
	---------------

Step 5: Proceed to Section 4.3 (page 4-35).

 4.2.4. INCORPORATING AFS INTO THE KERNEL ON IRIX SYSTEMS

To load AFS into the kernel on IRIX systems, choose one of the following
methods:

 - dynamic loading using Silicon Graphics' ml program. Proceed to Section
4.2.4.1.

 - installing a kernel that incorporates AFS changes.  Unless this is the first
IRIX machine installed in your cell, a modified kernel may already exist, built
during the installation of a previous machine.  Proceed to Section 4.2.4.2.

In either case, you must install the IRIX initialization script as detailed in
Section 4.2.4.3.

 4.2.4.1. USING ML ON IRIX SYSTEMS

The ml program is the dynamic kernel loader provided by Silicon Graphics, Inc.
for IRIX systems. For this machine to remain an AFS machine, either ml must run
each time the machine reboots or a prebuilt kernel with AFS modifications must
be used.  To ensure this, you must install the IRIX initialization script as
detailed in Section 4.2.4.3.

On sgi_53 machines, before running ml you must run the afs_rtsymtab.pl script
located in the /usr/vice/etc/sgiload directory.  As distributed by Silicon
Graphics, the IRIX 5.3 kernel does not expose certain kernel symbols in the way
that ml requires for loading AFS. The afs_rtsymtab.pl script alters the
/var/sysgen/master.d/rtsymtab file, which contains a list of kernel symbols, in
the manner required by AFS.  Running autoconfig incorporates the amended list
into the kernel, and rebooting loads the new kernel.  You need to run the script
only once per sgi_53 machine, not each time ml runs.

To invoke ml:

Step 1: Verify that the /usr/vice/etc/sgiload directory on the local disk
contains: afs, afs.rc, afs.sm, and afsd, in addition to the library files listed
in the next step.

Step 2: On sgi_53 machines only, run the afs_rtsymtab.pl script, issue
the autoconfig command, and reboot the machine.

	--------------------------------------------
	# /usr/vice/etc/sgiload/afs_rtsymtab.pl -run
	# autoconfig -v
	# shutdown -i6
	--------------------------------------------


Step 3: Issue the ml command, replacing <library file> with the name of
the appropriate library file.  Choose the file according to processor type
(R3000 or R4000), NFS support or no NFS support, and single processor (SP)
versus multiprocessor (MP).

If you do not know which processor your machine has, issue IRIX's
hinv command and check the line in the output that begins "CPU."
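
A sample check (the CPU line shown is illustrative only; details vary by
machine and IRIX version):

	------------------------------------------------------
	# hinv | grep CPU
	CPU: MIPS R4400 Processor Chip Revision: 5.0
	------------------------------------------------------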

-------------------------------------------------------------------------------
In each case below, read "without NFS support" to mean that the kernel does 
not include support for NFS server functionality.                           

 - libafs.MP.R3000.o for R3000 multiprocessor with NFS support            
 - libafs.MP.R3000.nonfs.o for R3000 multiprocessor without NFS support   
 - libafs.MP.R4000.o for R4000 multiprocessor with NFS support            
 - libafs.MP.R4000.nonfs.o for R4000 multiprocessor without NFS support   
 - libafs.SP.R3000.o for R3000 single processor with NFS support          
 - libafs.SP.R3000.nonfs.o for R3000 single processor without NFS support 
 - libafs.SP.R4000.o for R4000 single processor with NFS support          
 - libafs.SP.R4000.nonfs.o for R4000 single processor without NFS support 

   # ml  ld  -v  -j  /usr/vice/etc/sgiload/<library file>  -p  afs_  -a  afs 
-------------------------------------------------------------------------------

Step 4: Proceed to Section 4.2.4.3 to install the initialization script
provided by Transarc for IRIX systems; it automatically invokes ml at reboot, if
appropriate.

 4.2.4.2. BUILDING AFS INTO THE KERNEL ON IRIX SYSTEMS

For the sake of consistency with other system types, the complete instructions
for kernel building appear in Chapter 5.

Step 1: If AFS modifications were not built into a kernel during the
installation of a previous IRIX machine in your cell, follow the kernel building
instructions in Section 5.6 (page 5-23).

Step 2: Copy the existing kernel on the local machine to a safe location.
Note that /unix will be overwritten by /unix.install the next time the machine
is rebooted.

	---------------------------
	# cp  /unix  /unix_save 
	---------------------------

Step 3: Reboot the machine to start using the new kernel.

	---------------
	# shutdown -i6
	---------------

 4.2.4.3. INSTALLING THE INITIALIZATION SCRIPT ON IRIX SYSTEMS

On System V-based machines such as IRIX, you must install the initialization
script and ensure that it is invoked properly at reboot, whether you have built
AFS into the kernel or used a dynamic loader such as ml.  The script includes
automatic tests for whether the machine has the R3000 or R4000 processor, NFS
support or no NFS support, and single processor (SP) or multiprocessor (MP).

The chkconfig commands you issue in the second step tell IRIX whether or not it
should run the afsml script, which invokes ml, and that it should run the
afsclient script, which invokes afsd.

Step 1: Verify that the local /usr/vice/etc/sgiload directory contains
afs.rc.

Step 2: Copy the afs.rc initialization script from /usr/vice/etc/sgiload
to the IRIX initialization files directory (standardly, /etc/init.d), make sure
it is executable, link it to the two locations where IRIX expects to find it,
and issue the appropriate chkconfig commands.

------------------------------------------------------------------------
Note the removal of the .rc extension as you copy the initialization 
file to the /etc/init.d directory.                                   

	# cd  /etc/init.d                                                    
	# cp  /usr/vice/etc/sgiload/afs.rc  afs                              
	# chmod  555  afs                                                    
	# ln -s ../init.d/afs  /etc/rc0.d/K35afs                             
	# ln -s ../init.d/afs  /etc/rc2.d/S35afs                             
	# cd /etc/config                                                     

If you are using ml:                                                 

	# /etc/chkconfig  -f  afsml  on                                      

If you are using an AFS-modified kernel:                             

	# /etc/chkconfig  -f  afsml  off                                     

In either case, enable the afsclient script, which invokes afsd:     

	# /etc/chkconfig  -f  afsclient  on                                  
------------------------------------------------------------------------

Step 3: Proceed to Section 4.3 (page 4-35).


 4.2.5. INCORPORATING AFS INTO THE KERNEL ON NCR UNIX SYSTEMS

On NCR UNIX systems, you must build AFS modifications into a new kernel
(dynamic loading is not possible).  If you already built a kernel during the
installation of a previous NCR UNIX machine, you can skip the first step below.
If this is the first NCR UNIX machine installed in your cell, perform all of
the steps.

For the sake of consistency with other system types (on which both loading and
building are possible), the complete instructions for kernel building appear in
Chapter 5.

For this machine to remain an AFS machine, its initialization script must be
invoked each time it reboots.  Step 5 below explains how to install the script.

Step 1: If AFS modifications were not built into a kernel during the
installation of a previous NCR UNIX machine in your cell, follow the
instructions in Section 5.7 (page 5-26) to build AFS modifications into a new
NCR UNIX kernel.

Step 2: Move the existing kernel on the local machine to a safe location.

	---------------------
	# mv /unix /unix.save
	---------------------

Step 3: Use a copying program (either cp or a remote program such as ftp or NFS)
to copy the AFS-modified kernel to the appropriate location.

Step 4: Reboot the machine to start using the new kernel.

	------------------
	# shutdown -i6
	------------------

Step 5: Copy the initialization script that Transarc provides for NCR UNIX
systems as /usr/vice/etc/modload/afs.rc to the /etc/init.d directory, make sure
it is executable, and link it to the two locations where NCR UNIX expects to
find it.

	---------------------------------------
	# cd /etc/init.d
	# cp /usr/vice/etc/modload/afs.rc afs
	# chmod 555 afs
	# ln -s ../init.d/afs /etc/rc3.d/S14afs
	# ln -s ../init.d/afs /etc/rc2.d/K66afs
	---------------------------------------

 4.2.6. INCORPORATING AFS INTO THE KERNEL ON SOLARIS SYSTEMS

The modload program is the dynamic kernel loader provided by Sun Microsystems
for Solaris systems. Transarc's dkload program is not available for this system
type, nor is it possible to add AFS during a kernel build.

For this machine to remain an AFS machine, modload must run each time the
machine reboots.  You can invoke the facility automatically in the machine's
initialization files, as explained in Step 6.

To invoke modload:

Step 1: Verify that

 - the modload binary is available on the local disk (standard location is
/usr/sbin)

 - the /usr/vice/etc/modload directory on the local disk contains libafs.o and
libafs.nonfs.o

Step 2: Create the file /kernel/fs/afs as a copy of the appropriate AFS
library file.

------------------------------------------------------------
	# cd /usr/vice/etc/modload                               

If the machine's kernel supports NFS server functionality:
	
	# cp libafs.o  /kernel/fs/afs                            

If the machine's kernel does not support NFS server functionality:

	# cp libafs.nonfs.o  /kernel/fs/afs                      

------------------------------------------------------------

Step 3: Create an entry for AFS in the /etc/name_to_sysnum file to allow
the kernel to make AFS system calls.

------------------------------------------------------------------------------
In the file /etc/name_to_sysnum, create an "afs" entry in slot 105 (the slot
just before the "nfs" entry) so that the file looks like:

rexit           1
fork            2
...
sigpending      99
setcontext      100
statvfs         103
fstatvfs        104
afs             105
nfs             106
-------------------------------------------------------------------------------

Step 4: If you are running a Solaris 2.4 system, reboot the machine.

	------------------------
	# /usr/sbin/shutdown -i6
	------------------------

Step 5: Invoke modload.

	---------------------------------------
	# /usr/sbin/modload  /kernel/fs/afs 
	---------------------------------------

If you wish to verify that AFS loaded correctly, use the modinfo
command.

	-------------------------------
	# /usr/sbin/modinfo | egrep afs
	-------------------------------

The appearance of two lines that mention afs in the output indicates that AFS
loaded successfully, as in the following example (the exact values of the
numbers in the first five columns are not relevant):

69 fc71f000 4bc15 105   1  afs (afs syscall interface)
69 fc71f000 4bc15  15   1  afs (afs file system)

Step 6: Copy the initialization script that Transarc provides for Solaris
systems as /usr/vice/etc/modload/afs.rc to the /etc/init.d directory, make sure
it is executable, and link it to the two locations where Solaris expects to find
it.

	---------------------------------------
	# cd  /etc/init.d                        
	# cp  /usr/vice/etc/modload/afs.rc  afs     
	# chmod  555  afs                        
	# ln -s ../init.d/afs  /etc/rc3.d/S14afs 
	# ln -s ../init.d/afs  /etc/rc2.d/K66afs 
	----------------------------------------

Step 7: Proceed to Section 4.3 (page 4-35).

 4.2.7. INCORPORATING AFS INTO THE KERNEL ON SUNOS SYSTEMS

To load AFS into the kernel on SunOS systems, choose one of the following
methods:

 - dynamic loading using Transarc's dkload program (proceed to Section 4.2.7.1)

 - dynamic loading using Sun's modload program (proceed to Section 4.2.7.2, page
4-29)

 - installing a kernel that incorporates AFS changes.  Unless this is the first
SunOS machine installed in your cell, a modified kernel may already exist, built
during the installation of a previous machine.  Proceed to Section 4.2.7.3, page
4-31.

 4.2.7.1. USING DKLOAD ON SUNOS SYSTEMS

The dkload program is the dynamic kernel loader provided by Transarc for SunOS
systems.  For this machine to remain an AFS machine, dkload must run each time
the machine reboots.  You can invoke dkload automatically in the machine's
initialization file (/etc/rc or equivalent), as explained in Step 3.

The files containing the AFS kernel modifications are libafs.a and
libafs.nonfs.a (the latter is appropriate if this machine's kernel does not
include support for NFS server functionality).

To invoke dkload:

Step 1: Verify that

 - there is at least one spare megabyte of space in /tmp for temporary files
created as dkload runs

 - the following are in /bin on the local disk (not as symbolic links): as, ld,
and nm.

 - the /usr/vice/etc/dkload directory on the local disk contains: dkload (the
binary), libafs.a, libafs.nonfs.a, libcommon.a, and kalloc.o

Step 2: Invoke dkload after running ranlib.

-------------------------------------------------------------------------------
If the machine's kernel does not include support for NFS server functionality,
you must substitute libafs.nonfs.a for libafs.a.  Either use the mv command to
replace libafs.a with libafs.nonfs.a in the /usr/vice/etc/dkload directory
(before issuing these commands), or make the substitution on the command line.

	# cd /usr/vice/etc/dkload	
	# ranlib libafs.a
	# ranlib libcommon.a
	# ./dkload libafs.a
-------------------------------------------------------------------------------

Step 3: Modify the machine's initialization file (/etc/rc or equivalent)
to invoke dkload by copying the contents of /usr/vice/etc/dkload/rc.dkload (the
contents appear in full in Section 5.10).  Place the commands after the commands
that mount the file systems.  If the machine's kernel does not include support
for NFS server functionality, remember to substitute libafs.nonfs.a for
libafs.a.

Step 4: Proceed to Section 4.3 (page 4-35).

 4.2.7.2. USING MODLOAD ON SUNOS SYSTEMS

The modload program is the dynamic kernel loader provided by Sun Microsystems
for SunOS systems. For this machine to remain an AFS machine, modload must run
each time the machine reboots.  You can invoke modload automatically in the
machine's initialization file (/etc/rc or equivalent), as explained in Step 3.

To invoke modload:

Step 1: Verify that

 - the /usr/vice/etc/modload directory on the local disk contains libafs.o and
libafs.nonfs.o

 - the modload binary is available on the local disk (standard location is
/usr/etc)

Step 2: Invoke modload.

--------------------------------------------------------------------------------
	# cd /usr/vice/etc/modload

If the machine's kernel supports NFS server functionality:

	# /usr/etc/modload ./libafs.o

If the machine's kernel does not include support for NFS server functionality:

	# /usr/etc/modload ./libafs.nonfs.o
--------------------------------------------------------------------------------

Step 3: Modify the machine's initialization file (/etc/rc or equivalent)
to invoke modload, by copying in the contents of
/usr/vice/etc/modload/rc.modload (the contents appear in full in section 5.12).
Place the commands after the commands that mount the file systems.  If the
machine's kernel does not include support for NFS server functionality, remember
to substitute libafs.nonfs.o for libafs.o.

Step 4: Proceed to Section 4.3 (page 4-35).

 4.2.7.3. BUILDING AFS INTO THE KERNEL ON SUNOS SYSTEMS

For the sake of consistency with other system types, the complete instructions
for kernel building appear in Chapter 5.

Step 1: If AFS modifications were not built into a kernel during the
installation of a previous SunOS machine in your cell, follow the kernel
building instructions in Section 5.8 (page 5-28).

Step 2: Move the existing kernel on the local machine to a safe location.

	-------------------------------
	# mv  /vmunix  /vmunix_save 
	-------------------------------

Step 3: Use a copying program (either cp or a remote program such as ftp
or NFS) to copy the AFS-modified kernel to the appropriate location.

Step 4: Reboot the machine to start using the new kernel.

	-----------------
	# shutdown -r now
	-----------------

Step 5: Proceed to Section 4.3 (page 4-35).

 4.2.8. INCORPORATING AFS INTO THE KERNEL ON ULTRIX SYSTEMS

To load AFS into the kernel on Ultrix systems, choose one of the following
methods:

 - dynamic loading using Transarc's dkload program (proceed to Section 4.2.8.1)

 - installing a kernel that incorporates AFS changes.  Unless this is the first
Ultrix machine installed in your cell, a modified kernel may already exist,
built during the installation of a previous machine. You must have an Ultrix
source license to build a new kernel.  Proceed to Section 4.2.8.2, page 4-34.

 4.2.8.1. USING DKLOAD ON ULTRIX SYSTEMS

The dkload program is the dynamic kernel loader provided by Transarc for Ultrix
systems.  For this machine to remain an AFS machine, dkload must run each time
the machine reboots.  You can invoke dkload automatically in the machine's
initialization file (/etc/rc or equivalent), as explained in Step 3.

The files containing the AFS kernel modifications are libafs.a and
libafs.nonfs.a (the latter is appropriate if this machine's kernel does not
include support for NFS server functionality).

To invoke dkload:

Step 1: Verify that

 - there is at least one spare megabyte of space in /tmp for temporary files
created as dkload runs

 - the following are in /bin on the local disk (not as symbolic links): as, ld,
and nm.

 - the /usr/vice/etc/dkload directory on the local disk contains: dkload (the
binary), libafs.a, libafs.nonfs.a, libcommon.a, and kalloc.o

Step 2: Invoke dkload after running ranlib.

------------------------------------------------------------------------------
If the machine's kernel does not include support for NFS server functionality,
you must substitute libafs.nonfs.a for libafs.a.  Either use the mv command to
replace libafs.a with libafs.nonfs.a in the /usr/vice/etc/dkload directory
(before issuing these commands), or make the substitution on the command line.

	# cd /usr/vice/etc/dkload
	# ranlib libafs.a
	# ranlib libcommon.a
	# ./dkload libafs.a
------------------------------------------------------------------------------

Step 3: Modify the machine's initialization file (/etc/rc or equivalent)
to invoke dkload by copying the contents of /usr/vice/etc/dkload/rc.dkload (the
contents appear in full in Section 5.10).  Place the commands after the commands
that mount the file systems.  If the machine's kernel does not include support
for NFS server functionality, remember to substitute libafs.nonfs.a for
libafs.a.

Step 4: Proceed to Section 4.3 (page 4-35).

 4.2.8.2. BUILDING AFS INTO THE KERNEL ON ULTRIX SYSTEMS

For the sake of consistency with other system types, the complete instructions
for kernel building appear in Chapter 5.

Step 1: If AFS modifications were not built into a kernel during the
installation of a previous Ultrix machine in your cell, follow the kernel
building instructions in Section 5.9 (page 5-35).

Step 2: Move the existing kernel on the local machine to a safe location.

	-------------------------------
	# mv  /vmunix  /vmunix_save 
	-------------------------------

Step 3: Use a copying program (either cp or a remote program such as ftp
or NFS) to copy the AFS-modified kernel to the appropriate location.

Step 4: Reboot the machine to start using the new kernel.

	-----------------
	# shutdown -r now
	-----------------

 4.3. DEFINING THE MACHINE'S CELL MEMBERSHIP AND CREATING CELLSERVDB

Every client machine must have the /usr/vice/etc/ThisCell file on its local disk
to define which cell the machine belongs to.  Among other functions, this file
determines:

 - the cell in which users authenticate when they log into this machine

 - which cell's file server processes this machine's AFS command interpreters
contact by default

Every client machine's /usr/vice/etc/CellServDB file lists the database server
machines in each cell that the local Cache Manager can contact.  If a cell is
not listed in this file, or its list of database server machines is wrong, then
users working on this machine will be unable to access that cell's file tree.
Your cell's client version of CellServDB should already exist; it was created
during the installation of your cell's first machine (Section 2.25).

Remember that the Cache Manager consults /usr/vice/etc/CellServDB only at
reboot, when it copies the information into the kernel.  For the Cache Manager
to perform properly, the CellServDB file must be accurate at all times. Refer to
the AFS System Administrator's Guide for instructions on updating this file,
with or without rebooting.

Step 1: Place the cell name into ThisCell.  Type the quotes on the command
line, but not the angle brackets; do not include any spaces inside the quotes:

	--------------------------------------------------
	# echo "<cellname>" >  /usr/vice/etc/ThisCell 
	--------------------------------------------------
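
For example, for the hypothetical cell name example.com:

	------------------------------------------------
	# echo "example.com" > /usr/vice/etc/ThisCell
	------------------------------------------------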

Step 2: Create /usr/vice/etc/CellServDB on this machine.

---------------------------------------------------------------------------------
If your cell maintains a central source copy of CellServDB (the standard name is 
/afs/cellname/common/etc/CellServDB), use a network file transfer program such   
as ftp or NFS to copy the file to /usr/vice/etc/CellServDB on this machine.      

Otherwise, use a network file transfer program to copy the file over from an     
existing AFS client machine's /usr/vice/etc/CellServDB.                          
---------------------------------------------------------------------------------
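
For reference, each cell's entry in CellServDB begins with a ">" line naming
the cell, followed by one line per database server machine giving its IP
address; the text after "#" is conventionally the cell's organization or the
machine's host name.  A sketch of one entry, with hypothetical values:

	------------------------------------------------
	>example.com            #Example Corporation
	192.0.2.10              #db1.example.com
	192.0.2.11              #db2.example.com
	------------------------------------------------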

 4.4. SETTING UP THE CACHE

Every AFS client must have a cache in which to store local copies of files
brought over from file server machines.  The Cache Manager can cache either on
disk or in machine memory.

For both types of caching, afsd consults the /usr/vice/etc/cacheinfo file as it
initializes the Cache Manager and cache to learn the defaults for cache size and
where to mount AFS locally. For disk caches, it also consults the file to learn
cache location.  You must create this file for both types of caching.

The file has three fields:

1. The first field specifies where to mount AFS on the local disk.  The standard
choice is /afs.

2. The second field defines the local disk directory to be used for caching, in
the case of a disk cache.  The standard choice is /usr/vice/cache, but you could
specify a different directory to take advantage of more space on other
partitions.  Something must appear in this field even if the machine uses memory
caching.

3. The third field defines cache size as a number of kilobyte (1024 byte)
blocks.  Make it as large as possible, but do not make the cache larger than 90%
to 95% of the space available on the partition housing /usr/vice/cache or in
memory: the cache implementation itself requires a small amount of space.  For
AIX systems using a disk cache, cache size cannot exceed 85% of the disk
capacity reported by the df command.  The difference arises because AIX df
reports actual disk capacity and usage, whereas most other versions "hide"
about 10% of disk capacity to allow for overuse.

Violating this restriction on cache size can cause errors or worsened
performance.
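
For example, to see how much space the partition housing the cache directory
provides before choosing a size (df output format varies by system type):

	--------------------------
	# df /usr/vice
	--------------------------

If df reports 50000 available kilobyte blocks, a cache size of about 45000
blocks (90%) is the safe maximum on most system types.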

Transarc recommends using an AFS cache size up to 1 GB.  Although it is possible
to have an AFS cache size as large as the size of the underlying file system,
Transarc does not recommend caches this large for routine use.

Disk caches smaller than 5 megabytes do not generally perform well, and you may
find the performance of caches smaller than 10 megabytes unsatisfactory,
particularly on system types that have large binary files.  Deciding on a
suitable upper limit is more difficult.  The point at which enlarging the cache
does not really improve performance depends on the number of users on the
machine, the size of the files they are accessing, and other factors.  A cache
larger than 40 megabytes is probably unnecessary on a machine serving only a few
users accessing files that are not huge.  Machines serving multiple users may
perform better with a cache of at least 60 to 70 megabytes.

Memory caches smaller than 1 megabyte are nonfunctional, and most users find the
performance of caches smaller than 5 megabytes to be unsatisfactory.  Again,
this depends on the number of users working on the machine and the number of
processes running.  Machines running only a few processes may be able to use a
smaller memory cache.

The afsd program also sets other cache configuration parameters as it
initializes, and starts up several daemons that improve performance.  The AFS
Command Reference Manual description of afsd details these parameters and
daemons, and explains how to use afsd's arguments to override the default
settings for parameters if desired.  As discussed in Section 4.5, AFS also
provides initialization scripts that set certain afsd parameters appropriately
for machines of different sizes.

 4.4.1. SETTING UP A DISK CACHE

This section explains how to configure a disk cache.

Step 1: Create the cache directory.  This example instruction shows the
standard location, /usr/vice/cache.

	--------------------------------------------
	# mkdir /usr/vice/cache                  
	--------------------------------------------

Step 2: Create the cacheinfo file to define the boot-time defaults
discussed above.  This example instruction shows the standard mount location,
/afs, and the standard cache location, /usr/vice/cache.

	---------------------------------------------------------------------
	# echo "/afs:/usr/vice/cache:<#blocks>" > /usr/vice/etc/cacheinfo 
	---------------------------------------------------------------------

For example, to devote 10000 one-kilobyte blocks to the cache directory on this
machine, type:

	# echo "/afs:/usr/vice/cache:10000" > /usr/vice/etc/cacheinfo

 4.4.2. SETTING UP A MEMORY CACHE

This section explains how to configure a memory cache.

Step 1: Create the cacheinfo file to define the boot-time defaults
discussed above.  This example instruction shows the standard mount location,
/afs, and the standard cache location, /usr/vice/cache.  The location specified
is irrelevant for a memory cache, but a value must be provided.

	---------------------------------------------------------------------
	# echo "/afs:/usr/vice/cache:<#blocks>" > /usr/vice/etc/cacheinfo 
	---------------------------------------------------------------------

For example, to devote 10000 kilobytes of memory to caching on this client
machine, type:

# echo "/afs:/usr/vice/cache:10000" > /usr/vice/etc/cacheinfo

 4.5. CREATING /AFS AND STARTING THE CACHE MANAGER

As mentioned previously, the Cache Manager mounts AFS at the local /afs
directory.  In this section you create that directory and then run afsd to
initialize the Cache Manager.

You should also add afsd to the machine's initialization file (/etc/rc or its
equivalent), so that it runs automatically at each reboot.  If afsd does not run
at each reboot, the Cache Manager will not exist on this machine, and it will
not function as an AFS client.

The afsd program sets several cache configuration parameters as it initializes,
and starts up daemons that improve performance.  As described completely in the
AFS Command Reference Manual, you can use the afsd command's arguments to alter
these parameters and/or the number of daemons.  Depending on the machine's cache
size, its amount of RAM, and how many people work on it, you may be able to
improve its performance as a client by overriding default values.

AFS also provides a simpler alternative to setting afsd's arguments
individually.  You can set groups of parameters based on the size (small,
medium, and large) of the client machine.  These groups are defined in scripts,
the names of which depend upon the client machine's system type.  For system
types other than Digital UNIX, IRIX, NCR UNIX and Solaris, the parameter
settings are specified in three initialization scripts distributed in
/usr/vice/etc/dkload.  The scripts are appropriate only for machines with a disk
cache.  Both the AFS Command Reference Manual description of afsd and Chapter 13
of the AFS System Administrator's Guide discuss these scripts in more detail.
The scripts are:

 - rc.afsd.small, which configures the Cache Manager appropriately for a "small"
machine with a single user, about 8 megabytes of RAM and a 20-megabyte cache.
It sets -stat to 300, -dcache to 100, -daemons to 2, and -volumes to 50.

 - rc.afsd.medium, which configures the Cache Manager appropriately for a
"medium" machine with 2 to 6 users, about 16 megabytes of RAM and a 40-megabyte
cache.  It sets -stat to 2000, -dcache to 800, -daemons to 3, and -volumes to
70.

 - rc.afsd.large, which configures the Cache Manager appropriately for a
"large" machine with 5 to 10 users, about 32 megabytes of RAM and a
100-megabyte cache.  It sets -stat to 2800, -dcache to 2400, -daemons to 5,
and -volumes to 128.
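
For illustration, running rc.afsd.medium is roughly equivalent to invoking
afsd with those parameters set explicitly:

	---------------------------------------------------------------------
	# /usr/vice/etc/afsd -stat 2000 -dcache 800 -daemons 3 -volumes 70
	---------------------------------------------------------------------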

For Digital UNIX, IRIX, NCR UNIX and Solaris systems, the parameter settings are
defined in the initialization script that you installed as the final part of
incorporating AFS into the machine's kernel.  The script defines LARGE, MEDIUM,
and SMALL values for the OPTIONS variable, which is then included on the afsd
command line in the script.  The script is distributed with OPTIONS set to
$MEDIUM, but you may change this as desired.

Step 1: Create the /afs directory.  If it already exists, verify that the
directory is empty.

	-------------
	# mkdir /afs                  
	-------------

Step 2: Invoke afsd.

With a disk cache, starting up afsd for the first time on a machine can take up
to ten minutes, because the Cache Manager has to create all of the structures
needed for caching (V files).  Starting up afsd at future reboots does not take
nearly this long, since the structures already exist.

For a memory cache, use the -memcache flag to indicate that the cache should be
in memory rather than on disk.  With a memory cache, memory structures must be
allocated at each reboot, but the process is equally quick each time.

Because of the potentially long start up, you may wish to put the following
commands in the background.  Even if you do, afsd must initialize completely
before you continue to the next step.  Console messages will trace the progress
of the initialization and indicate when it is complete.

For a disk cache:
------------------

Invoke afsd.                                                                    

On system types other than Digital UNIX, IRIX, NCR UNIX and Solaris, you can
substitute one of the configuration scripts (such as rc.afsd.medium) for afsd in
the following command if desired, but you still must type the -verbose flag.

On Digital UNIX, IRIX, NCR UNIX, and Solaris systems, you must type on the
command line any additional configuration parameters you wish to set, since
the three configuration scripts are not available.

	# /usr/vice/etc/afsd -verbose &
	--------------------------------

For a memory cache:
-------------------

Invoke afsd with the -memcache flag.  You may specify values for additional
parameters if desired.

	# /usr/vice/etc/afsd -memcache -verbose &
	-----------------------------------------

Step 3: Invoke afsd in the machine's initialization file.

For a disk cache:
------------------

On system types other than Digital UNIX, IRIX, NCR UNIX, or Solaris, add the
following command to the initialization file (/etc/rc or equivalent) after the
commands that invoke a dynamic kernel loader but before any NFS daemons (nfsd)
are started.

You may specify additional configuration parameters or substitute for afsd one
of the configuration scripts (such as rc.afsd.medium) that are described in the
introduction to this section.

	/usr/vice/etc/afsd > /dev/console                                             

On Digital UNIX, IRIX, NCR UNIX and Solaris systems, verify that the OPTIONS
variable in the initialization script is set to the appropriate value; as
distributed, it is $MEDIUM.

For a memory cache:
------------------- 

On system types other than Digital UNIX, IRIX, NCR UNIX, or Solaris, add the
following command to the initialization file (/etc/rc or equivalent) after the
commands that invoke a dynamic kernel loader. You may specify additional
configuration parameters, but remember that the "large," "medium," and "small"
scripts cannot be used with a memory cache.

	/usr/vice/etc/afsd -memcache > /dev/console                                        

On Digital UNIX, IRIX, NCR UNIX, and Solaris systems, verify that the OPTIONS
variable in the initialization script is not set to any of $LARGE, $MEDIUM, or
$SMALL; these values cannot be used with a memory cache.

For system types other than Digital UNIX, IRIX, NCR UNIX and Solaris, the
following should now appear in the machine's initialization file(s) in the
indicated order. (Digital UNIX, IRIX, NCR UNIX and Solaris systems use an
"init.d" initialization file that is organized differently.)

 - NFS commands, if appropriate (for example, if the machine will act as an
NFS/AFS translator). For AIX version 3.2.2 or lower, commands loading the NFS
kernel extensions (nfs.ext) should appear here; with AIX version 3.2.3 and
higher, NFS is already loaded into the kernel. Then invoke nfsd if the machine
is to be an NFS server.

 - dynamic kernel loader command(s), unless AFS was built into the kernel

 - afsd
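
A minimal sketch of that portion of /etc/rc for a machine that uses dkload and
also runs as an NFS server (names and flags vary by system type; note that
nfsd follows afsd, as required above):

	---------------------------------------------------------
	(cd /usr/vice/etc/dkload; ./dkload libafs.a)
	/usr/vice/etc/afsd > /dev/console
	nfsd 8 &
	---------------------------------------------------------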

 4.6. SETTING UP VOLUMES AND LOADING BINARIES INTO AFS

In this section, you link /usr/afsws on the local disk to the directory in AFS
that houses AFS binaries for this system type.  The standard name for the AFS
directory is /afs/cellname/sysname/usr/afsws.

If this machine is an existing system type, you can simply create a link from
its /usr/afsws directory to the appropriate AFS directory, which should already
exist.  Follow the instructions in Section 4.6.1.

If this machine is a new system type (there are no file server or client
machines of this type in your cell), you must first create and mount volumes to
store its AFS binaries, and then create the link from /usr/afsws to the new
directory.  Follow the instructions in Section 4.6.2.

It is also recommended that you store UNIX system binaries (such as /bin, /etc,
and /lib) under /afs/cellname/sysname.  Sections 2.31 and 2.32 provide
guidelines; this section does not include explicit instructions.

 4.6.1. LINKING /USR/AFSWS ON AN EXISTING SYSTEM TYPE

If this client machine is an existing system type, then its client binaries
should already reside in an AFS directory.

Step 1: Create a symbolic link from /usr/afsws (a local directory) to
/afs/cellname/@sys/usr/afsws.  You could also substitute the machine's Transarc
system name for @sys (make the link to /afs/cellname/sysname/usr/afsws). The
advantage of using @sys is that it automatically adjusts in case you upgrade
this machine to a different system type.

	----------------------------------------------------
	# ln  -s  /afs/<cellname>/@sys/usr/afsws  /usr/afsws 
	----------------------------------------------------

You should include /usr/afsws/bin and /usr/afsws/etc in the PATH variable for
each user account so that users can issue commands from the AFS suites (such as
fs).
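
For example, the following additions to a user's shell startup files put the
directories in the search path.

In the Bourne shell startup file (.profile):

	------------------------------------------------------
	PATH=$PATH:/usr/afsws/bin:/usr/afsws/etc; export PATH
	------------------------------------------------------

In the C shell startup file (.cshrc):

	------------------------------------------------------
	set path = ($path /usr/afsws/bin /usr/afsws/etc)
	------------------------------------------------------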


 4.6.2. CREATING BINARY VOLUMES FOR A NEW SYSTEM TYPE

If this client machine is a new system type, you must create and mount volumes
for its binaries before you can link the local /usr/afsws to an AFS directory.

You must work on a machine that is already an AFS client or file server machine,
because the commands used to create and mount volumes (from the vos and fs
suites) are not yet available on this machine (it is a new system type and you
have not yet extracted the needed command suites from the Binary Distribution).

The following procedure creates and mounts volumes to contain the AFS client
binaries for this system type, loads the binaries from the Binary Distribution,
and links the local /usr/afsws to the appropriate AFS directory.  The remote
machine you use should have a tape drive attached to simplify the extraction
from tape in Step 6.

Step 1: On an existing AFS client or file server machine with a tape
drive.  Authenticate as the admin account created in Section 2.15.

----------------------------------------
On a file server machine:    

	# /usr/afs/bin/klog admin    
	Password:  

On a client machine:         

	# /usr/afsws/bin/klog admin  
	Password:  
----------------------------------------

Step 2: On the existing AFS machine.  Create volumes to store AFS files
and binaries for the new system type. The following example instructions create
three volumes called "sysname," "sysname.usr," and "sysname.usr.afsws," enough
to permit loading AFS client binaries into /afs/cellname/sysname/usr/afsws.  You
may also wish to create volumes for UNIX and other system binaries, as outlined
in the table in Section 2.32.

	----------------------------------------------------------------
	# vos create <machine name> <partition name> <sysname>
	# vos create <machine name> <partition name> <sysname>.usr
	# vos create <machine name> <partition name> <sysname>.usr.afsws
	----------------------------------------------------------------
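
As a purely hypothetical example, if the new system type were sun4m_54 and you
placed the volumes on partition /vicepa of the file server machine fs1.abc.com,
the commands would look like the following; substitute your own machine,
partition, and Transarc system name.

	----------------------------------------------------------------
	# vos create fs1.abc.com /vicepa sun4m_54
	# vos create fs1.abc.com /vicepa sun4m_54.usr
	# vos create fs1.abc.com /vicepa sun4m_54.usr.afsws
	----------------------------------------------------------------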

Step 3: On the existing AFS machine.  Mount the newly created volumes at
the indicated places in the AFS file tree.  Because root.cell is now
replicated, you must create the mount points in its read/write version, by
preceding cellname with a period as shown.  Then issue the vos release command
to release new replicas of root.cell, and the fs checkvolumes command to force
the local Cache Manager to access them.

	-------------------------------------------------------------------------
	# fs  mkmount  /afs/.<cellname>/<sysname>  <sysname>
	# fs  mkmount  /afs/.<cellname>/<sysname>/usr  <sysname>.usr
	# fs  mkmount  /afs/.<cellname>/<sysname>/usr/afsws  <sysname>.usr.afsws
	# vos  release  root.cell
	# fs  checkvolumes
	-------------------------------------------------------------------------
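
If you wish to verify a mount point, the fs lsmount command reports the volume
to which a directory mount point refers, for example:

	-------------------------------------------------
	# fs  lsmount  /afs/.<cellname>/<sysname>
	-------------------------------------------------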

Step 4: On the existing AFS machine.  Set the ACL on the newly created
mount points to grant the READ and LOOKUP rights to system:anyuser.

	-------------------------------------------------------------------
	# cd  /afs/.<cellname>/<sysname>
	# fs  setacl  -dir  .  ./usr  ./usr/afsws  -acl  system:anyuser  rl
	-------------------------------------------------------------------
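
You can verify the result with the fs listacl command; each directory's output
should include the entry system:anyuser rl, along with the entry granting all
rights to system:administrators.

	-------------------------------------------------
	# fs  listacl  .  ./usr  ./usr/afsws
	-------------------------------------------------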

Step 5: On the existing AFS machine.  Set the quota on
/afs/cellname/sysname/usr/afsws according to the following chart.  The values
include a safety margin.

Operating system         Quota in kilobyte blocks

AIX                      30000
Digital UNIX             40000
HP-UX                    35000
IRIX                     60000
NCR UNIX                 40000
Solaris                  35000
SunOS                    25000
Ultrix                   45000

----------------------------------------------------------------------------
# /usr/afs/bin/fs setquota /afs/.<cellname>/<sysname>/usr/afsws  <quota> 
----------------------------------------------------------------------------
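
To confirm the new quota, issue the fs listquota command, which also reports
current usage:

	----------------------------------------------------------------------------
	# /usr/afs/bin/fs listquota /afs/.<cellname>/<sysname>/usr/afsws
	----------------------------------------------------------------------------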

Step 6: On the existing AFS machine.  Mount the Binary Distribution Tape
and load the fifth tar set into /afs/cellname/sysname/usr/afsws.

The appropriate subdirectories are created automatically and have their ACL set
to match that on /afs/cellname/sysname/usr/afsws (which at this point grants all
rights to system:administrators and READ and LOOKUP rights to system:anyuser).

------------------------------------------------------------------------------
On AIX systems: Before reading the tape, verify that block size is set to 0   
(meaning variable block size); if necessary, use SMIT to set block size to 0. 
Also, substitute tctl for mt.                                                 

On HP-UX systems: Substitute mt -t for mt -f.                                 

On all system types: For <device>, substitute the name of the tape device for 
your system that does not rewind after each operation.                        

	# cd /afs/<cellname>/<sysname>/usr/afsws
	# mt -f /dev/<device> rewind
	# mt -f /dev/<device> fsf 4
	# tar xvf /dev/<device>
------------------------------------------------------------------------------
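
After tar completes, you can spot-check the load by listing the directory.  The
exact contents depend on the distribution, but you should expect to see
subdirectories such as bin (which must remain readable by system:anyuser, as
noted in Step 7):

	------------------------------------------
	# ls /afs/<cellname>/<sysname>/usr/afsws
	------------------------------------------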

Step 7: You may make AFS software available to users only in accordance
with the terms of your AFS License agreement.  To prevent access by unauthorized
users, you should change the ACL on some of the subdirectories of
/afs/cellname/sysname/usr/afsws, granting the READ and LOOKUP rights to
system:authuser instead of system:anyuser.  This way, only users who are
authenticated in your cell can access AFS binaries.  The ACL on the bin
subdirectory must continue to grant the READ and LOOKUP rights to
system:anyuser, because unauthenticated users must be able to access the klog
binary stored there.

To be sure that unauthorized users are not accessing AFS software, you should
periodically check that the ACL on these directories is set properly.

------------------------------------------------------------------------------
To limit access to AFS binaries to users authenticated in your cell, issue the 
following commands.  The ACL on the bin subdirectory must continue to grant    
the READ and LOOKUP rights to system:anyuser.                                  

	# cd  /afs/.<cellname>/<sysname>/usr/afsws
	# fs  setacl  -dir  ./*  -acl  system:authuser rl
	# fs  setacl  -dir  bin  -acl  system:anyuser rl
------------------------------------------------------------------------------
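
For the periodic check recommended above, listing the ACL on all of the
subdirectories at once makes it easy to spot an entry that has drifted; a
sketch:

	----------------------------------------------------
	# fs  listacl  /afs/<cellname>/<sysname>/usr/afsws/*
	----------------------------------------------------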

Step 8: On the new client machine.  Create a symbolic link from
/usr/afsws (a local directory) to /afs/cellname/@sys/usr/afsws.  You could also
substitute the machine's Transarc system name for @sys (make the link to
/afs/cellname/sysname/usr/afsws). The advantage of using @sys is that it
automatically adjusts in case you upgrade this machine to a different system
type.

	----------------------------------------------------
	# ln  -s  /afs/<cellname>/@sys/usr/afsws  /usr/afsws
	----------------------------------------------------

You should include /usr/afsws/bin and /usr/afsws/etc in the PATH variable for
each user account so that users can issue commands from the AFS suites (such as
fs).

 4.7. ENABLING AFS LOGIN

Transarc provides a version of login that both authenticates the issuer with AFS
and logs him or her in to the local UNIX file system.  It is strongly
recommended that you replace standard login with the AFS-authenticating version
so that your cell's users automatically receive PAG-based tokens when they log
in.  If you do not replace the standard login, then your users must use the
two-step login procedure (log in to the local UNIX file system followed by pagsh
and klog to authenticate with AFS).  For more details, see the Section titled
"Login and Authentication in AFS" in Chapter 2 of the AFS System Administrator's
Guide.

Note: AIX 4.1 does not require that you replace the login program with the
Transarc version.  Instead, you can configure the AIX 4.1 login program so that
it calls the AFS authentication program, allowing users to authenticate with AFS
and log in to AIX in the same step.

If you are using Kerberos authentication rather than AFS's protocols, you must
install AFS's login.krb instead of regular AFS login.  Contact AFS Product
Support for further details.

You can tell you are running AFS login if the following banner appears after you
provide your password:

AFS 3.4  login

To enable AFS login, follow the instructions below that are appropriate for your
system type:

 - For AIX 3.2 systems, see Section 4.7.1.

 - For AIX 4.1 systems, see Section 4.7.2.

 - For IRIX systems, see Section 4.7.3.

 - For all other system types, see Section 4.7.4.

 4.7.1. ENABLING AFS LOGIN ON AIX 3.2 SYSTEMS

Follow the instructions in this section to replace login on AIX 3.2 systems.

For this system type, Transarc supplies both login.noafs, which is invoked when
AFS is not running on the machine, and login.afs, which is invoked when AFS is
running.  If you followed the instructions for loading the AFS rs_aix32
binaries into an AFS directory and creating a local disk link to it, these
files reside in /usr/afsws/bin.  Note that standard AIX login is normally
installed as /usr/sbin/login, with links to /etc/tsm, /etc/getty, and
/bin/login.  You will install the replacement AFS binaries into the /bin
directory.

Step 1: Replace the link to standard login in /bin with login.noafs.

	------------------------------------------------
	# mv  /bin/login  /bin/login.orig            
	# cp  /usr/afsws/bin/login.noafs  /bin/login 
	------------------------------------------------

Step 2: Replace the links from /etc/getty and /etc/tsm to standard login
with links to /bin/login.

	---------------------------------
	# mv  /etc/getty  /etc/getty.orig 
	# mv  /etc/tsm  /etc/tsm.orig     
	# ln -s  /bin/login  /etc/getty   
	# ln -s  /bin/login  /etc/tsm     
	---------------------------------

Step 3: Install login.afs into /bin and create /etc/afsok as a symbolic
link to it.

	--------------------------------------------------
	# cp  /usr/afsws/bin/login.afs  /bin/login.afs 
	# ln -s  /bin/login.afs  /etc/afsok            
	--------------------------------------------------
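
To confirm the replacement, list the affected files; /etc/getty and /etc/tsm
should now be symbolic links to /bin/login, and /etc/afsok a symbolic link to
/bin/login.afs:

	--------------------------------------------------------
	# ls  -l  /bin/login*  /etc/getty  /etc/tsm  /etc/afsok
	--------------------------------------------------------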

 4.7.2. ENABLING AFS LOGIN ON AIX 4.1 SYSTEMS

Follow the instructions in this section to configure login on AIX 4.1 systems.
Before beginning, verify that the afs_dynamic_auth program has been
installed in the local /usr/vice/etc directory.

Step 1: Set the registry variable in the /etc/security/user file to DCE
on the local client machine.  Note that you must set this variable to DCE (not
AFS).

	--------------
	registry = DCE 
	--------------

Step 2: Set the registry variable for the user root to files in the same
file (/etc/security/user) on the local client machine.  This allows the user
root to authenticate using the local password files on the local machine.

	------------------------
	root:
		registry = files
	------------------------

Step 3: Set the SYSTEM variable in the same file (/etc/security/user).
The setting depends upon whether the machine is an AFS client only or both an
AFS and a DCE client.

------------------------------------------------------------------------------
If the machine is an AFS client only, set SYSTEM to be:

	SYSTEM = "AFS OR AFS [UNAVAIL] AND compat [SUCCESS]"

If the machine is both an AFS and a DCE client, set SYSTEM to be:

	SYSTEM = "DCE OR DCE [UNAVAIL] OR AFS OR AFS [UNAVAIL] AND compat [SUCCESS]"
------------------------------------------------------------------------------
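
Taken together, the edits from Steps 1 through 3 might leave the relevant
stanzas of /etc/security/user looking like the following sketch for an
AFS-only client.  This assumes the registry attribute from Step 1 belongs in
the default stanza; the other attributes in your existing file are unchanged.

	------------------------------------------------------------
	default:
		registry = DCE
		SYSTEM = "AFS OR AFS [UNAVAIL] AND compat [SUCCESS]"

	root:
		registry = files
	------------------------------------------------------------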

Step 4: Define DCE in the /etc/security/login.cfg file on the local
client machine.  In this definition and the following one for AFS, the program
attribute specifies the path of the program to be invoked.

	------------------------------------------------
	DCE:
		program = /usr/vice/etc/afs_dynamic_auth
	------------------------------------------------

Step 5: Define the AFS authentication program in the
/etc/security/login.cfg file on the local client machine as follows:

	------------------------------------------------
	AFS:
		program = /usr/vice/etc/afs_dynamic_auth
	------------------------------------------------

 4.7.3. ENABLING AFS LOGIN ON IRIX SYSTEMS

For IRIX systems, you do not need to replace the login binary.  Silicon
Graphics, Inc. has modified IRIX login to operate the same as AFS login when the
machine's kernel includes AFS.  However, you do need to verify that the local
/usr/vice/etc directory contains the two libraries provided with AFS and
required by IRIX login, afsauthlib.so and afskauthlib.so.

-----------------------------------------------------------
	# ls  /usr/vice/etc

Output should include afsauthlib.so and afskauthlib.so.
-----------------------------------------------------------

 4.7.4. ENABLING AFS LOGIN ON OTHER SYSTEM TYPES

For system types other than AIX and IRIX, the replacement AFS login binary
resides in /usr/afsws/bin, if you followed the instructions for loading the AFS
binaries into an AFS directory and creating a local disk link to it. Install the
AFS login as /bin/login.

Step 1: Replace standard login with AFS login.

	------------------------------------------
	# mv  /bin/login  /bin/login.orig      
	# cp  /usr/afsws/bin/login  /bin/login 
	------------------------------------------
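
Because login must retain the privileges of the standard binary, it is prudent
to compare the ownership and mode of the new /bin/login with the saved
original, and to adjust them if they differ:

	------------------------------------------
	# ls  -l  /bin/login  /bin/login.orig
	------------------------------------------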

 4.8. ALTERING FILE SYSTEM CLEAN-UP SCRIPTS ON SUN SYSTEMS

Many SunOS and Solaris systems are distributed with a crontab file that contains
a command for removing unneeded files from the file system (it usually begins
with the find(1) command).  The standard location for the file on SunOS systems
is /usr/spool/cron/crontabs/root, and on Solaris systems is
/usr/lib/fs/nfs/nfsfind.

Once this machine is an AFS client, you must modify the pathname specification
in this cron command to exclude /afs.  Otherwise, the command will traverse the
entire portion of the AFS tree accessible from this machine, which includes
every cell whose database server machines appear in the machine's kernel list
(derived from /usr/vice/etc/CellServDB).  The traversal could take many hours.

Use care in altering the pathname specification, so that you do not
accidentally exclude directories that you wish to be searched.  The following
may be suitable alterations, but they are suggestions only; you must verify
that they are appropriate for your system.

The first possible alteration requires that you list all file system directories
to be searched.

On SunOS systems, use:

find / /usr /<other partitions> -xdev remainder of existing command

On Solaris systems, add the -local flag to the existing command in
/usr/lib/fs/nfs/nfsfind, so that it looks like:

find $dir -local -name .nfs\* -mtime +7 -mount -exec rm -f {} \;

Another possibility for either system type excludes any directories whose names
begin with "a" or a non-alphabetic character.

find /[A-Zb-z]*  remainder of existing command

Note that you should not use the following, because it still searches under
/afs, looking for a subdirectory of type "4.2".

find / -fstype 4.2     /* do not use */