CHAPTER 3: INSTALLING ADDITIONAL SERVERS
. . . . . . . . . . Assumptions
3.1 . . . . Installing an Additional File Server Machine
3.1.1 . . . . . . Loading Files to the Local Disk
3.1.2 . . . . . . Performing System-Specific Tasks
3.1.2.1 . . . . . . . Getting Started on AIX Systems
3.1.2.2 . . . . . . . Getting Started on Digital UNIX Systems
3.1.2.3 . . . . . . . Getting Started on HP-UX Systems
3.1.2.4 . . . . . . . Getting Started on IRIX Systems
3.1.2.5 . . . . . . . Getting Started on NCR UNIX Systems
3.1.2.6 . . . . . . . Getting Started on Solaris Systems
3.1.2.7 . . . . . . . Getting Started on SunOS Systems
3.1.2.8 . . . . . . . Getting Started on ULTRIX Systems
3.1.3 . . . . . . Initialize Server Programs
3.1.4 . . . . . . Completing the Installation
3.2 . . . . Installing Database Server Functionality on an Existing Server Machine
3.2.1 . . . . . . Procedure Overview
3.2.2 . . . . . . Instructions
3.3 . . . . Removing Database Server Functionality
3.3.1 . . . . . . Procedure Overview
3.3.2 . . . . . . Instructions


 3. INSTALLING ADDITIONAL SERVERS

This chapter explains how to

 - install an additional file server machine of either the same AFS system type 
(one that uses the same AFS binaries) as an existing file server machine, or of
a new system type.  See Section 3.1.

 - add database server functionality to an existing file server machine.  See
Section 3.2.

 - remove database server functionality from a machine.  See Section 3.3.

 Assumptions

These instructions assume that:

 - you have already installed your cell's first file server machine by following
the instructions in Chapter 2

 - you are typing the instructions at the console of the machine you are
installing

 - you are logged in to the local UNIX file system as "root"

 - you have already installed a standard UNIX kernel on the machine being
installed

 3.1. INSTALLING AN ADDITIONAL FILE SERVER MACHINE

The procedure for installing a new file server machine is similar to installing
the first file server machine in your cell.  There are a few parts of the
installation that differ depending on whether the machine is the same AFS system
type as an existing file server machine, or is the first file server machine of
its system type in your cell.  The differences mostly concern the source for the
needed binaries and files, and what portions of the Update Server you install:

 - on a new system type, you must load files and binaries from the Binary
Distribution Tape (either onto the local disk or onto a remote machine).  You
install the server portion of the Update Server, making this machine the binary
distribution machine for its system type.

 - on an existing system type, you can copy over files and binaries from a
previously installed file server machine, rather than from tape.  You install
the client portion of the Update Server to accept updates of binaries, because a
previously installed machine of this type was installed as the binary
distribution machine.

These instructions are brief; refer to the corresponding steps in Chapter 2 for
more detailed information.  If you are copying files from a remote file server
machine, the instructions assume they reside in /usr/afsws on that machine.

To install a new file server machine, you will

    1. copy needed binaries and files onto this machine's local disk

    2. replace the standard kernel with one that includes AFS modifications
       (either using a dynamic kernel loader or with a previously built
       kernel)

    3. configure partitions to house AFS volumes

    4. replace the standard fsck with the AFS version of fsck

    5. start the Basic OverSeer (BOS) Server

    6. start client portion(s) of the Update Server, and the server portion
       on a machine of a new system type

    7. start the controller for the NTP Daemon

    8. start the process that binds together the File Server, Volume Server,
       and Salvager

    9. continue to Section 3.2 if this machine will be a database server machine

  3.1.1. Loading Files to the Local Disk

Begin by creating directories for storing AFS binaries and files, and copying
over the files from an existing file server machine.

In deciding which directories to create and files to copy, consider how you will
incorporate AFS modifications into the machine's kernel, a decision you also
made when you installed the first file server machine:

 - on AIX systems, your only choice is to use IBM's dynamic loader, the kernel
extension facility

 - on Digital UNIX systems, your only choice is to build AFS modifications into
a new kernel, but you may have done this already for a previously installed DEC
OSF/1 machine

 - on HP-UX systems, you can either use Transarc's dynamic loader, dkload, or
build AFS modifications into a new kernel (which may already exist from the
previous installation of an HP-UX machine)

 - on NCR UNIX systems, your only choice is to build AFS modifications into a
new kernel, but you may have done this already for a previously installed NCR
UNIX machine

 - on IRIX systems, you can either use SGI's dynamic loader, ml, or build AFS
modifications into a new kernel (which may already exist from the previous
installation of an IRIX machine)

 - on Solaris systems, your only choice is to use Sun's dynamic loader, modload

 - on SunOS systems, you can either dynamically load using Transarc's dkload or
Sun's modload, or can build AFS modifications into a new kernel (which may
already exist from the previous installation of a SunOS machine)

 - on Ultrix systems, you can either use Transarc's dynamic loader, dkload, or
build AFS modifications into a new kernel (which may already exist from the
previous installation of an Ultrix machine)
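The choices above can be summarized in a small shell function.  This is purely
illustrative; the system type names shown are hypothetical examples of
Transarc's naming convention, not a complete or authoritative list.

```shell
# Illustrative only: map an AFS system type name to the kernel method
# described above.  The type names are hypothetical examples, not a
# complete or authoritative list.
kernel_method() {
    case "$1" in
        rs_aix*)    echo "kernel extension facility" ;;              # AIX
        alpha_osf*) echo "build AFS into a new kernel" ;;            # Digital UNIX
        hp*)        echo "dkload, or build a new kernel" ;;          # HP-UX
        sgi_*)      echo "ml, or build a new kernel" ;;              # IRIX
        ncr_*)      echo "build AFS into a new kernel" ;;            # NCR UNIX
        sun4*_5*)   echo "modload" ;;                                # Solaris
        sun4*)      echo "dkload, modload, or build a new kernel" ;; # SunOS
        vax_*)      echo "dkload, or build a new kernel" ;;          # Ultrix
        *)          echo "unknown system type" ;;
    esac
}

kernel_method sgi_53
kernel_method sun4m_53
```

Note that the Solaris pattern must be tested before the general SunOS pattern,
since every Solaris type name would also match it.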

Step 1: Create the following directories as appropriate.

------------------------------------------------------------------------------
	# mkdir /usr/afs
	# mkdir /usr/afs/bin
	# mkdir /usr/afsws
	# mkdir /usr/vice/etc
	# mkdir /usr/vice/etc/sgiload (if you will use ml on an IRIX system)
	# mkdir /usr/vice/etc/modload (if you will use modload on a Solaris or
	        SunOS system)                                    
	# mkdir /usr/vice/etc/dkload (if you will use a dynamic kernel loader on
	        AIX, HP-UX, or Ultrix systems, or dkload on SunOS systems)
-------------------------------------------------------------------------------

Step 2: Copy the contents of /usr/afs/etc from an existing file server
machine to the local disk.

Using ftp, NFS, or another file transfer method, copy everything from the system
control machine's /usr/afs/etc directory to the local /usr/afs/etc. If you use
the international edition of AFS, copy from any existing file server machine
(you don't have a system control machine).
--------------------------------------------------------------------------------

Step 3: Copy file server and client binaries and files to the local disk.

On a machine of an existing system type, you can choose to load from the Binary
Distribution Tape, as shown in the instructions below for machines of a new
system type, but it is probably simpler to copy files from an existing file
server machine, as shown here:

On an existing system type, use ftp, NFS or another network transfer program to
copy over the following from an existing machine of the same system type.

 - Copy everything from /usr/afs/bin.
 - Copy everything (client files, dynamic kernel loader files and initialization
   scripts) from /usr/vice/etc and its subdirectories, if any.

If you will build a kernel and have not previously done so for this system type,
perform the appropriate part of the following instructions for machines of a new
system type.

 - On a machine of a new system type, you must extract the specified tar sets
from the Binary Distribution tape, either directly onto the local disk (if the
machine is attached to a tape drive), or into a remote machine's /usr/afsws
directory.

On a new system type attached to a tape drive, load the contents of the
specified tar sets into the indicated directories (detailed instructions appear
in Section 2.3.1):

 - Load the third tar set into /usr/afs.  The bin subdirectory is created      
automatically; it contains file server process binaries.                        

 - Load the fourth tar set into /usr/vice/etc.  The directories for dynamic
loader files and initialization scripts (dkload, modload or sgiload) are created
automatically as appropriate.

 - If building AFS into a new kernel for this system type, load the second tar 
set into /usr/afs/sys.                                                          

	OR

If the new system type is not attached to a tape drive, you must load files from
the AFS Binary Distribution into /usr/afsws on a remote machine, so that you can
copy them from there onto the local disk using a network transfer program.
Detailed instructions for loading the fifth tar set into /usr/afsws appear in
Section 1.4.2.2.

After you have loaded the files onto the remote machine, use ftp, NFS or another
network transfer program to copy over the following from the remote machine:

 - Copy the file server process binaries and configuration files from
/usr/afsws/root.server/usr/afs/bin into the local /usr/afs/bin directory.

 - Copy the client files, dynamic loader files and initialization scripts from
/usr/afsws/root.client/usr/vice/etc and its subdirectories, if any, into the
local /usr/vice/etc directory.

 - If you will build AFS into a new kernel for this system type, copy the
contents of /usr/afsws/root.client/bin into the local /usr/afs/sys directory.
--------------------------------------------------------------------------------
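Whichever source you copy from, a tar pipeline is one convenient way to move a
directory tree while preserving file modes.  The sketch below uses placeholder
paths (SRC might in practice be an NFS mount of the remote machine's /usr/afsws
directory) and stand-in files, so it does not touch the real directories.

```shell
# Sketch: copy a directory tree with permissions intact using a tar
# pipeline.  SRC and DST are placeholder paths; in practice SRC might be
# an NFS mount of the remote machine's /usr/afsws directory.
SRC=${SRC:-/tmp/afsws_demo/root.server/usr/afs/bin}
DST=${DST:-/tmp/afs_demo/usr/afs/bin}

mkdir -p "$SRC" "$DST"
: > "$SRC/bosserver"            # stand-ins for the real binaries
: > "$SRC/fileserver"
chmod 755 "$SRC/bosserver" "$SRC/fileserver"

(cd "$SRC" && tar cf - .) | (cd "$DST" && tar xf -)

ls "$DST"
```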

 3.1.2. PERFORMING SYSTEM-SPECIFIC TASKS

As on the first file server machine in your cell, there are three initial tasks
that you perform differently on each system type:

 - incorporating AFS modifications into the machine's kernel, either using a
dynamic loader or by installing a previously built kernel that incorporates AFS
modifications.  On all system types except Digital UNIX, dynamic loading is
preferred as the less complicated alternative (Digital UNIX has no dynamic
loader).

 - configuring partitions to house AFS volumes

 - replacing the standard fsck program with an AFS-safe version

Because the tasks differ considerably on each system type, the following
instructions are divided by system type.  Proceed to the section appropriate for
your system type.  When you have finished the procedures for your system type,
continue on to Section 3.1.3.

	For AIX, see Section 3.1.2.1 on page 3-10.

	For Digital UNIX, see Section 3.1.2.2 on page 3-13.

	For HP-UX, see Section 3.1.2.3 on page 3-17.

	For IRIX, see Section 3.1.2.4 on page 3-21.

	For NCR UNIX, see Section 3.1.2.5 on page 3-27.

	For Solaris, see Section 3.1.2.6 on page 3-31.

	For SunOS, see Section 3.1.2.7 on page 3-36.

	For Ultrix, see Section 3.1.2.8 on page 3-41.

 3.1.2.1. GETTING STARTED ON AIX SYSTEMS

On AIX systems, use IBM's kernel extension facility to load AFS modifications
into the kernel.  Then configure partitions and replace fsck.

Step 1: Verify that the machine's local disk houses the needed files and
directories, as listed in Section 2.4.1.

Step 2: Invoke cfgexport and cfgafs.

If this machine is to act as an NFS/AFS translator machine, you must make a
substitution in this step.  For details, consult the section entitled "Setting
Up an NFS/AFS Translator Machine" in the NFS/AFS Translator Supplement to the
AFS System Administrator's Guide.

----------------------------------------------------------------
	# cd /usr/vice/etc/dkload                                    

If the machine's kernel does not support NFS server functionality, issue the
following commands.  The machine cannot function as an NFS/AFS translator
machine in this case.

	# ./cfgexport -a export.ext.nonfs                            
	# ./cfgafs -a afs.ext                                        

If the machine's kernel supports NFS server functionality, issue the following
commands.  If the machine is to act as an NFS/AFS translator machine, you must
make the substitution specified in the NFS/AFS Translator Supplement.

	# ./cfgexport -a export.ext                                  
	# ./cfgafs -a afs.ext
----------------------------------------------------------------

Step 3: IBM delivers several function-specific initialization files for
AIX systems, rather than the single file used on some other systems.  If you
want the kernel extension facility to run each time the machine reboots, verify
that it is invoked in the appropriate place in these initialization files.  An
easy way to add the needed commands is to copy the contents of
/usr/vice/etc/dkload/rc.dkload, which appear in Section 5.11.

The following list summarizes the order in which the commands must appear in
initialization files for the machine to function properly (you will add some of
the commands in later sections).


 - NFS commands, if appropriate (for example, if the machine will act as an
NFS/AFS translator). For AIX version 3.2.2 or lower, commands loading the NFS
kernel extensions (nfs.ext) should appear here; with AIX version 3.2.3 and
higher, NFS is already loaded into the kernel. Then invoke nfsd if the machine
is to be an NFS server.

 - the contents of rc.dkload, to invoke the kernel extension facility.  If the
machine will act as an NFS/AFS translator machine, be sure to make the same
substitution as you made when you issued the cfgexport and cfgafs commands in
the previous step.

 - bosserver (you will be instructed to add this command in Section 3.1.4)

 - afsd (you will be instructed to add this command in Section 3.1.4)
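Gathered into one place, the required ordering might look like the following
fragment of an AIX initialization file.  It is illustrative only; the actual
commands come from rc.dkload (see Section 5.11) and from Section 3.1.4, and the
exact file names depend on your AIX version and configuration.

```shell
# Illustrative ordering only -- not a complete AIX initialization file.
#
# 1. NFS commands, if appropriate.  On AIX 3.2.2 or lower, load the NFS
#    kernel extensions (nfs.ext) here; on 3.2.3 and higher, NFS is
#    already in the kernel.  Invoke nfsd if the machine is an NFS server.
#
# 2. The contents of /usr/vice/etc/dkload/rc.dkload (the cfgexport and
#    cfgafs commands), with the NFS/AFS translator substitution if
#    needed.
#
# 3. The BOS Server (you add this in Section 3.1.4):
#    /usr/afs/bin/bosserver &
#
# 4. The Cache Manager (you add this in Section 3.1.4):
#    /usr/vice/etc/afsd
```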

Step 4: Create a /vicepx directory for each partition that will house AFS
volumes.  The example instruction creates three directories.

	----------------------------------
	# mkdir /vicepa                
	# mkdir /vicepb                
	# mkdir /vicepc                
   	    and so on                  
	----------------------------------

Step 5: Use the SMIT program to create a journalled file system and mount
it on the appropriate /vicepx partition.  Consult the operating system
documentation for syntax.

Step 6: It is recommended that you also add the following line to
/etc/vfs at this point:

afs     4     none     none

If you do not add this line, you will receive an error message from the mount
command when you use it to list the mounted file systems (but note that the
mount command is working properly even if you receive the error message).
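One cautious way to add the entry is to append it only if it is not already
present.  The sketch below operates on a scratch copy of the file (with a
sample existing entry) purely for illustration; on the real machine the file
to edit is /etc/vfs.

```shell
# Sketch: append the AFS entry only if it is not already present.
# VFS_FILE is a scratch copy for illustration; on the real machine the
# file is /etc/vfs.
VFS_FILE=${VFS_FILE:-/tmp/vfs.demo}
printf 'nfs     2     none     none\n' > "$VFS_FILE"   # sample existing entry

grep -q '^afs[[:space:]]' "$VFS_FILE" || \
    printf 'afs     4     none     none\n' >> "$VFS_FILE"

cat "$VFS_FILE"
```

Because of the grep guard, running the same fragment twice does not duplicate
the entry.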

Step 7: Move the standard fsck program helper to a save file and install
the AFS-modified helper (/usr/afs/bin/v3fshelper) in the standard location.

	---------------------------------------------
	# cd  /sbin/helpers                       
	# mv  v3fshelper  v3fshelper.noafs        
	# cp  /usr/afs/bin/v3fshelper  v3fshelper 
	---------------------------------------------

Step 8: Proceed to Section 3.1.3 (page 3-45).

 3.1.2.2. GETTING STARTED ON DIGITAL UNIX SYSTEMS

On Digital UNIX systems, install a kernel built with AFS modifications.  Then
install the machine's initialization script, configure partitions, and replace
fsck.

Step 1: If you have not previously built AFS modifications into a Digital
UNIX kernel (during installation of a previous machine), follow the
instructions in Section 5.2 (page 5-7) or Section 5.3 (page 5-11).

Step 2: Move the existing kernel on the local machine to a safe location.

	-------------------------------
	# mv  /vmunix  /vmunix_save 
	-------------------------------

Step 3: Use a copying program (either cp or a remote program such as ftp
or NFS) to copy the AFS-modified kernel to the appropriate location.

Step 4: Reboot the machine to start using the new kernel.  This example
instruction shows the rebooting command appropriate for most system
types.

	---------------------------------
	# shutdown -r now
	---------------------------------

Step 5: Copy the afs.rc initialization script from /usr/vice/etc/dkload
to the initialization files directory (standardly, /sbin/init.d), make sure it
is executable, and link it to the two locations where Digital UNIX expects to
find it.

	---------------------------------------------
	# cd  /sbin/init.d                        
	# cp  /usr/vice/etc/dkload/afs.rc  afs
	# chmod  555  afs                         
	# ln -s ../init.d/afs  /sbin/rc3.d/S99afs 
	# ln -s ../init.d/afs  /sbin/rc0.d/K66afs 
	---------------------------------------------

Step 6: Create a /vicepx directory for each partition that will house AFS
volumes.  The example instruction creates three directories.

	----------------------------------
	# mkdir /vicepa                
	# mkdir /vicepb                
	# mkdir /vicepc                
		and so on                  
	----------------------------------

Step 7: For each /vicep directory just created, add a line to /etc/fstab,
the "file systems registry" file.

-------------------------------------------------------------------
Add the following line to /etc/fstab for each /vicep directory. 

	/dev/<disk> /vicep<x> ufs rw 0 2                                

For example,                                                    

	/dev/rz3a /vicepa ufs rw 0 2                                    
-------------------------------------------------------------------
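If you are adding several partitions, a short loop can generate the entries.
The device names here (rz3a, rz3d, rz3e) are hypothetical examples, and the
output goes to a scratch file so the sketch does not touch the real
/etc/fstab.

```shell
# Sketch: generate an /etc/fstab line for each AFS partition.  The
# device names are hypothetical; FSTAB is a scratch file for
# illustration only.
FSTAB=${FSTAB:-/tmp/fstab.demo}
: > "$FSTAB"

for pair in rz3a:a rz3d:b rz3e:c; do
    dev=${pair%%:*}        # device name before the colon
    letter=${pair##*:}     # partition letter after the colon
    printf '/dev/%s /vicep%s ufs rw 0 2\n' "$dev" "$letter" >> "$FSTAB"
done

cat "$FSTAB"
```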

Step 8: Choose appropriate disk partitions for each AFS partition you
need and create a file system on each partition.  The command shown should be
suitable, but consult the Digital UNIX documentation for more information.

------------------------------------------------------------------
Repeat this command to create a file system on each partition. 

	# newfs  -v  /dev/rz<xx>                                       
------------------------------------------------------------------

Step 9: Mount the partition(s), using either the mount -a command to
mount all at once or the mount command to mount each partition in turn.

Step 10: Move the distributed fsck binaries to save files, install the
AFS-modified fsck ("vfsck") in the standard locations, and link the standard
fsck programs to it. Do not replace the driver programs /sbin/fsck and
/usr/sbin/fsck. See Section 2.5.3 for details.

	-----------------------------------------------------
	# mv  /sbin/ufs_fsck  /sbin/ufs_fsck.orig         
	# mv  /usr/sbin/ufs_fsck  /usr/sbin/ufs_fsck.orig 
	# cp  /usr/afs/bin/vfsck  /sbin/vfsck             
	# cp  /usr/afs/bin/vfsck  /usr/sbin/vfsck         
	# ln  -s  /sbin/vfsck  /sbin/ufs_fsck             
	# ln  -s  /usr/sbin/vfsck  /usr/sbin/ufs_fsck     
	-----------------------------------------------------

Step 11: Proceed to Section 3.1.3 (page 3-45).

 3.1.2.3. GETTING STARTED ON HP-UX SYSTEMS

On HP-UX systems, either use dkload to load the kernel dynamically, or install
an AFS-modified kernel previously built for an HP-UX machine.  Then configure
partitions and replace fsck.

Step 1: Incorporate AFS into the kernel, either using dkload or
installing a previously built kernel.

 - If using dkload:

1. Verify that the machine's local disk houses the needed files and
directories, as listed in Section 2.6.1.

2. Invoke dkload.

------------------------------------------------------------------------------
If the machine's kernel does not include support for NFS server functionality, 
you must substitute libafs.nonfs.a for libafs.a.  Either use the mv command    
to replace libafs.a with libafs.nonfs.a in the /usr/vice/etc/dkload directory  
(before issuing these commands), or make the substitution on the command       
line.                                                                          

	# cd /usr/vice/etc/dkload 
	# ./dkload libafs.a
------------------------------------------------------------------------------

3. Modify the machine's initialization file (/etc/rc or equivalent) to invoke
dkload by copying the contents of /usr/vice/etc/dkload/rc.dkload (the contents
appear in full in Section 5.10).  Place the commands after the commands that
mount the file systems.  If the machine's kernel does not include support for
NFS server functionality, remember to substitute libafs.nonfs.a for libafs.a.
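The substitution can be expressed as a small conditional.  In this sketch,
pick_libafs and its yes/no argument are made up for illustration; AFS itself
defines no such helper or flag.

```shell
# Sketch: choose the dkload library by NFS server support.  The helper
# and its argument are illustrative, not anything AFS defines.
pick_libafs() {
    if [ "$1" = yes ]; then
        echo libafs.a           # kernel includes NFS server support
    else
        echo libafs.nonfs.a     # kernel lacks NFS server support
    fi
}

pick_libafs no
# The real invocation, from /usr/vice/etc/dkload, would then be:
#   ./dkload `pick_libafs no`
```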

 - Or, if installing a kernel built with AFS modifications:

1. If you did not build AFS modifications into a kernel during the installation
of a previous HP-UX machine, follow the instructions in Section 5.4 (page 5-16)
for HP 700 systems, or in Section 5.5 (page 5-20) for HP 800 systems.

2. Move the existing kernel on the local machine to a safe location.

	-----------------------------
	# mv  /hp-ux  /hp-ux_save 
	-----------------------------

3. Use a copying program (either cp or a remote program such as ftp or NFS) to
copy the AFS-modified kernel to /hp-ux.  A standard location for the
AFS-modified kernel is /etc/conf/hp-ux for Series 700 and
/etc/conf/<conf_name>/hp_ux for Series 800 systems.

4. Reboot the machine to start using the new kernel.
	---------------------------------
	# shutdown -r
	---------------------------------

Step 2: Create a /vicepx directory for each partition that will house AFS
volumes.  The example instruction creates three directories.

Note that AFS supports disk striping for the hp700_ux90 system type.  The
hp800_ux90 system type uses logical volumes rather than disk striping.

	----------------------------------
	# mkdir /vicepa                
	# mkdir /vicepb                
	# mkdir /vicepc                
		and so on                  
	----------------------------------

Step 3: For each /vicep directory just created, create a file system on
the associated partition.

On Series 700 systems and Series 800 systems that do not use logical volumes:

Add the following line to /etc/checklist, the "file systems registry" file, for 
each /vicep directory.                                                          

	/dev/dsk/<disk> /vicep<x> hfs defaults 0 2

Then use the newfs or makefs command to create a file system on each
/dev/dsk/<disk> partition mentioned above.  Consult the operating system
documentation for syntax.

	An example of an /etc/checklist entry:

	/dev/dsk/1s0 /vicepa hfs defaults 0 2	

On HP Series 800 systems that use logical volumes:

-------------------------------------------------------------------------------
Use the SAM program to create a file system on each partition.  Consult the 
operating system documentation for syntax.                                  
-------------------------------------------------------------------------------

Step 4: Mount the partition(s), using either the mount -a command to
mount all at once or the mount command to mount each partition in turn.  Note
that SAM automatically mounts the partition on some HP Series 800 systems that
use logical volumes.

Step 5: Move standard fsck to a save file, install the AFS-modified fsck
("vfsck") to the standard location and link standard fsck to it.

	--------------------------------------
	# mv /etc/fsck /etc/fsck.orig      
	# cp /usr/afs/bin/vfsck /etc/vfsck 
	# ln -s /etc/vfsck /etc/fsck       
	--------------------------------------

Step 6: Proceed to Section 3.1.3 (page 3-45).

 3.1.2.4. GETTING STARTED ON IRIX SYSTEMS

On IRIX systems, either use ml to load the kernel dynamically, or install an
AFS-modified kernel previously built for an IRIX machine.  Then install the
initialization script, and configure partitions. It is not necessary to replace
fsck on IRIX systems, because Silicon Graphics, Inc. has modified the IRIX fsck
program to handle AFS volumes properly.  Transarc does not provide a replacement
fsck program for this system type.

Step 1: Incorporate AFS into the kernel, either using ml or installing a
previously built kernel.

 - If using ml:

1. Verify that the /usr/vice/etc/sgiload directory on the local disk contains:
afs, afs.rc, and afs.sm, in addition to the "libafs" library files.

2. On sgi_53 machines, before running ml you must run the afs_rtsymtab.pl script
located in the /usr/vice/etc/sgiload directory.  As distributed by Silicon
Graphics, the IRIX 5.3 kernel does not expose certain kernel symbols in the way
that ml requires for loading AFS. The afs_rtsymtab.pl script alters the
/var/sysgen/master.d/rtsymtab file, which contains a list of kernel symbols, in
the manner required by AFS.  Running autoconfig incorporates the amended list
into the kernel, and rebooting loads the new kernel.  You need to run the script
only once per sgi_53 machine, not each time ml runs.

On sgi_53 machines only, run the afs_rtsymtab.pl script, issue the autoconfig
command, and reboot the machine. 

	---------------------------------------------
	# /usr/vice/etc/sgiload/afs_rtsymtab.pl -run 
	# autoconfig -v
	# shutdown -i6
	---------------------------------------------

3. Issue the ml command, replacing <library file> with the name of the
appropriate library file. Select R3000 versus R4000 processor, no NFS support
versus NFS support, and single processor (SP) versus multiprocessor (MP).

If you do not know which processor your machine has, issue IRIX's hinv command
and check the line in the output that begins "CPU."

--------------------------------------------------------------------------------
Issue the ml command, replacing <library file> with the name of the          
appropriate library file.                                                    

In each case below, read "without NFS support" to mean that the kernel does  
not include support for NFS server functionality.                            

 -  libafs.MP.R3000.o for R3000 multiprocessor with NFS support            
 -  libafs.MP.R3000.nonfs.o for R3000 multiprocessor without NFS support   
 -  libafs.MP.R4000.o for R4000 multiprocessor with NFS support            
 -  libafs.MP.R4000.nonfs.o for R4000 multiprocessor without NFS support   
 -  libafs.SP.R3000.o for R3000 single processor with NFS support          
 -  libafs.SP.R3000.nonfs.o for R3000 single processor without NFS support 
 -  libafs.SP.R4000.o for R4000 single processor with NFS support          
 -  libafs.SP.R4000.nonfs.o for R4000 single processor without NFS support 

# ml  ld  -v  -j  /usr/vice/etc/sgiload/<library file>  -p  afs_  -a  afs    
--------------------------------------------------------------------------------
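The eight library names differ only in the three choices just described, so
the right one can be composed mechanically.  This is only a naming sketch; the
helper and its arguments are illustrative, and AFS ships exactly the eight
files listed above.

```shell
# Sketch: compose the IRIX libafs library name from the three choices.
# The helper is illustrative, not part of AFS.
libafs_name() {
    cpu=$1      # R3000 or R4000 (see the "CPU" line of hinv output)
    mp=$2       # SP (single processor) or MP (multiprocessor)
    nfs=$3      # nfs (kernel has NFS server support) or nonfs
    case "$nfs" in
        nonfs) echo "libafs.${mp}.${cpu}.nonfs.o" ;;
        *)     echo "libafs.${mp}.${cpu}.o" ;;
    esac
}

libafs_name R4000 SP nonfs
```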

 - Or, if installing a kernel built with AFS modifications:

1. If you did not build AFS modifications into a kernel during the
installation of a previous IRIX machine, follow the instructions
in Section 5.6 (page 5-23).

2. Copy the existing kernel on the local machine to a safe location.
Note that /unix will be overwritten by /unix.install the next time
the machine is rebooted.

	--------------------------
	# cp  /unix  /unix_save 
	---------------------------

3. Reboot the machine to start using the new kernel.

	---------------------------------
	# shutdown -i6
	---------------------------------

Step 2: Copy the afs.rc initialization script from /usr/vice/etc/sgiload
to the IRIX initialization files directory (standardly, /etc/init.d), make sure
it is executable, link it to the two locations where IRIX expects to find it,
and issue the appropriate chkconfig commands.

------------------------------------------------------------------------
Note the removal of the .rc extension as you copy the initialization 
file to the /etc/init.d directory.                                   

	# cd  /etc/init.d                                                    
	# cp  /usr/vice/etc/sgiload/afs.rc  afs                              
	# chmod  555  afs                                                    
	# ln -s ../init.d/afs  /etc/rc0.d/K35afs                             
	# ln -s ../init.d/afs  /etc/rc2.d/S35afs                             
	# cd /etc/config                                                     
	# /etc/chkconfig  -f  afsserver  on                                  

If you are using ml:                                                 

	# /etc/chkconfig  -f  afsml  on                                      

If you are using an AFS-modified kernel:                             

	# /etc/chkconfig  -f  afsml  off                                     
------------------------------------------------------------------------

Step 3: Create a /vicepx directory for each partition that will house AFS
volumes.  The example instruction creates three directories.

	-----------------
	# mkdir /vicepa                
	# mkdir /vicepb                
	# mkdir /vicepc                
		and so on                  
	-----------------

Step 4: For each /vicep directory just created, add a line to /etc/fstab,
the "file systems registry" file.

-------------------------------------------------------------------
Add the following line to /etc/fstab for each /vicep directory. 

	/dev/vicep<x> /vicep<x> efs rw,raw=/dev/rvicep<x> 0 0           

For example,                                                    

	/dev/vicepa /vicepa efs rw,raw=/dev/rvicepa 0 0                 
-------------------------------------------------------------------

Step 5: Create a file system on each partition.  The syntax shown should
be appropriate, but consult the IRIX documentation for more information.

------------------------------------------------------------------
Repeat this command to create a file system on each partition. 

	# mkfs /dev/rvicep<x>                                          
------------------------------------------------------------------

Step 6: Mount the partition(s) by issuing either the mount -a command to
mount all partitions at once or the mount command to mount each partition in
turn.

Step 7: Proceed to Section 3.1.3 (page 3-45).


3.1.2.5. GETTING STARTED ON NCR UNIX SYSTEMS

On NCR UNIX systems, you must build AFS modifications into a new kernel (dynamic
loading is not possible).  Then continue by installing the initialization
script, creating partitions for storing AFS volumes, and replacing the standard
fsck program with an AFS-safe version.

Step 1: If you have not previously built AFS modifications into an NCR
UNIX kernel (during installation of a previous machine), follow the instructions
in Section 5.7 (page 5-26).

Step 2: Move the existing kernel on the local machine to a safe location.

	---------------------
	# mv /unix /unix.save
	---------------------

Step 3: Use a copying program (either cp or a remote program such as ftp
or NFS) to copy the AFS-modified kernel to the appropriate location.

Step 4: Reboot the machine to start using the new kernel.

	--------------
	# shutdown -i6
	--------------

Step 5: Copy the initialization script that Transarc provides for NCR
UNIX systems as /usr/vice/etc/modload/afs.rc to the /etc/init.d directory, make
sure it is executable, and link it to the two locations where NCR UNIX expects
to find it.

	 --------------------------------------
	# cd /etc/init.d
	# cp /usr/vice/etc/modload/afs.rc afs
	# chmod 555 afs
	# ln -s ../init.d/afs /etc/rc3.d/S14afs
	# ln -s ../init.d/afs /etc/rc2.d/K66afs
	 ---------------------------------------

Step 6: Create a /vicepx directory for each partition that will house AFS
volumes.  The example instruction creates three directories.

	-----------------
	# mkdir /vicepa
	# mkdir /vicepb
	# mkdir /vicepc
	   and so on
	-----------------

Step 7: For each /vicep directory just created, add a line to
/etc/vfstab, the "file systems registry" file.

	----------------------------------------------------------------
	Add the following line to /etc/vfstab for each /vicep directory.

	/dev/dsk/<disk> /dev/rdsk/<disk> /vicep<x> ufs <pass> yes

	For example,

		/dev/dsk/c0t6d0s1 /dev/rdsk/c0t6d0s1 /vicepa ufs 3 yes
	----------------------------------------------------------------

Step 8: Create a file system on each partition.  The syntax shown should
be appropriate, but consult the operating system documentation for more
information. 

	------------------------------------------------------------------
	Repeat this command to create a file system on each partition.

	# mkfs -v /dev/rdsk/<xxxxxxxx>
	------------------------------------------------------------------

Step 9: Mount the partition(s) by issuing the mountall command to mount
all partitions at once.

Step 10: Move the distributed fsck to a save file, install the
AFS-modified fsck ("vfsck") to the standard location and link the distributed
fsck to it.  Do not replace the driver programs /etc/fsck and /usr/sbin/fsck.
See Section 2.8.3 for details.

	-------------------------------------------
	# mv /etc/fs/ufs/fsck /etc/fs/ufs/fsck.orig
	# cp /usr/afs/bin/vfsck /etc/fs/ufs/vfsck
	# ln -s /etc/fs/ufs/vfsck /etc/fs/ufs/fsck
	--------------------------------------------

Step 11: Proceed to Section 3.1.3 (page 3-45).

 3.1.2.6. GETTING STARTED ON SOLARIS SYSTEMS

On Solaris systems, use Sun's modload program to load AFS modifications into the
kernel.  Then configure partitions and replace fsck.

Step 1: Verify that the machine's local disk houses the needed files and
directories, as listed in Section 2.9.1.

Step 2: Create the file /kernel/fs/afs as a copy of the appropriate AFS
library file.

------------------------------------------------------------
	# cd /usr/vice/etc/modload                               

If the machine's kernel supports NFS server functionality:

	# cp libafs.o  /kernel/fs/afs                            

If the machine's kernel does not support NFS server functionality:

	# cp libafs.nonfs.o  /kernel/fs/afs                      
------------------------------------------------------------

Step 3: Create an entry for AFS in the /etc/name_to_sysnum file to allow
the kernel to make AFS system calls.

-------------------------------------------------------------------------------
In the file /etc/name_to_sysnum, create an "afs" entry in slot 105 (the slot
just before the "nfs" entry) so that the file looks like:

reexit          1                                                            
fork            2                                                            
 .              .                                                            
 .              .                                                            
 .              .                                                            
sigpending      99                                                           
setcontext      100                                                          
statvfs         103                                                          
fstatvfs        104                                                          
afs             105                                                          
nfs             106                                                          
-------------------------------------------------------------------------------
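The insertion can be scripted instead of done by hand. The sketch below adds the afs entry immediately before the nfs line in a miniature stand-in for /etc/name_to_sysnum; the same awk invocation works on the real file.

```shell
# A miniature stand-in for /etc/name_to_sysnum.
f=$(mktemp)
cat > "$f" <<'EOF'
statvfs         103
fstatvfs        104
nfs             106
EOF
# Print the afs entry just before the line whose first field is "nfs".
awk '$1 == "nfs" { print "afs             105" } { print }' "$f" > "$f.new"
```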

Step 4: If you are running a Solaris 2.4 system, reboot the machine.

	------------------------
	# /usr/sbin/shutdown -i6
	------------------------

Step 5: Invoke modload.

	  -----------------------------------
	  # /usr/sbin/modload  /kernel/fs/afs 
	  -----------------------------------

If you wish to verify that AFS loaded correctly, use the modinfo command.

	# /usr/sbin/modinfo | egrep afs

The appearance of two lines that mention afs in the output indicates that AFS
loaded successfully, as in the following example (the exact values in the first
five columns are not relevant):

	69 fc71f000 4bc15 105   1  afs (afs syscall interface)
	69 fc71f000 4bc15  15   1  afs (afs file system)
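The two-line check can be folded into a post-install script. Sample output stands in for a live /usr/sbin/modinfo call below, so the sketch runs anywhere; on a real Solaris machine you would pipe modinfo itself into the count.

```shell
# sample_modinfo stands in for:  /usr/sbin/modinfo
sample_modinfo() {
    cat <<'EOF'
 68 fc6f0000  1234   -   1  ufs (filesystem for ufs)
 69 fc71f000 4bc15 105   1  afs (afs syscall interface)
 69 fc71f000 4bc15  15   1  afs (afs file system)
EOF
}
count=$(sample_modinfo | grep -c afs)   # expect 2 when AFS is loaded
```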

Step 6: Copy the initialization script that Transarc provides for Solaris
systems as /usr/vice/etc/modload/afs.rc to the /etc/init.d directory, make sure
it is executable, and link it to the two locations where Solaris expects to find
it.

	-----------------------------------------
	# cd  /etc/init.d
	# cp  /usr/vice/etc/modload/afs.rc  afs
	# chmod  555  afs
	# ln -s ../init.d/afs  /etc/rc3.d/S14afs
	# ln -s ../init.d/afs  /etc/rc2.d/K66afs
	-----------------------------------------

Step 7: Create a /vicepx directory for each partition that will house AFS
volumes.  The example instruction creates three directories.

	-----------------
	# mkdir /vicepa                
	# mkdir /vicepb                
	# mkdir /vicepc                
	   and so on
	-----------------

Step 8: For each /vicep directory just created, add a line to
/etc/vfstab, the "file systems registry" file.

--------------------------------------------------------------------
Add the following line to /etc/vfstab for each /vicep directory.

	/dev/dsk/<disk> /dev/rdsk/<disk> /vicep<x> ufs <fsck pass> yes

For example,

	/dev/dsk/c0t6d0s1 /dev/rdsk/c0t6d0s1 /vicepa ufs 3 yes
--------------------------------------------------------------------

Step 9: Create a file system on each partition.  The syntax shown should
be appropriate, but consult the Solaris documentation for more information.

------------------------------------------------------------------
Repeat this command to create a file system on each partition. 

	# newfs -v /dev/rdsk/<xxxxxxxx>                                
------------------------------------------------------------------

Step 10: Mount the partition(s) by issuing the mountall command to mount
all partitions at once.

Step 11: Move the distributed fsck to a save file, install the
AFS-modified fsck ("vfsck") to the standard location and link the distributed
fsck to it. Do not replace the driver programs /etc/fsck and /usr/sbin/fsck.
See Section 2.9.3 for details.

	---------------------------------------------------------
	# mv  /usr/lib/fs/ufs/fsck  /usr/lib/fs/ufs/fsck.orig 
	# cp  /usr/afs/bin/vfsck  /usr/lib/fs/ufs/vfsck       
	# ln  -s  /usr/lib/fs/ufs/vfsck  /usr/lib/fs/ufs/fsck 
	---------------------------------------------------------
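The swap above is destructive if repeated (a second mv would clobber fsck.orig), so it is worth wrapping in a guard. This sketch exercises the same sequence against a scratch copy of the directory; the function name and guard are illustrative, not part of the AFS distribution.

```shell
# Guarded version of the fsck swap, exercised on a scratch directory
# rather than the live /usr/lib/fs/ufs.
swap_fsck() {
    dir=$1 vfsck_src=$2
    [ -e "$dir/fsck.orig" ] && return 0     # already swapped; do nothing
    mv "$dir/fsck" "$dir/fsck.orig"
    cp "$vfsck_src" "$dir/vfsck"
    ln -s "$dir/vfsck" "$dir/fsck"
}
scratch=$(mktemp -d)
mkdir "$scratch/ufs"
echo vendor-fsck > "$scratch/ufs/fsck"      # stands in for the vendor fsck
echo vfsck       > "$scratch/vfsck-dist"    # stands in for /usr/afs/bin/vfsck
swap_fsck "$scratch/ufs" "$scratch/vfsck-dist"
swap_fsck "$scratch/ufs" "$scratch/vfsck-dist"   # second call is a no-op
```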

Step 12: Proceed to Section 3.1.3 (page 3-45).

 3.1.2.7. GETTING STARTED ON SUNOS SYSTEMS

On SunOS systems, either load AFS modifications into the kernel dynamically
using dkload or modload, or install an AFS-modified kernel previously built for
a SunOS machine.  Then configure partitions and replace fsck.

Step 1: Incorporate AFS into the kernel, either dynamically using dkload
or modload, or by installing a previously built kernel.

 - If using dkload:

1. Verify that the machine's local disk houses the needed files and
directories, as listed in Section 2.10.1.

2. Invoke dkload after running ranlib.

-------------------------------------------------------------------------------
If the machine's kernel does not include support for NFS server functionality,
you must substitute libafs.nonfs.a for libafs.a.  Either use the mv command to
replace libafs.a with libafs.nonfs.a in the /usr/vice/etc/dkload directory
(before issuing these commands), or make the substitution on the command line.

	# cd /usr/vice/etc/dkload

	# ranlib libafs.a

	# ranlib libcommon.a

	# ./dkload libafs.a
-------------------------------------------------------------------------------

3. Modify the machine's initialization file (/etc/rc or equivalent) to invoke
dkload by copying the contents of /usr/vice/etc/dkload/rc.dkload (the contents
appear in full in Section 5.10).  Place the commands after the commands that
mount the file systems.  If the machine's kernel does not include support for
NFS server functionality, remember to substitute libafs.nonfs.a for libafs.a.

 - Or, if using modload:

1. Verify that the machine's local disk houses the needed files and directories,
as listed in Section 2.10.2.

2. Invoke modload.
------------------------------------------------------------------------------
	# cd /usr/vice/etc/modload

If the machine supports NFS:

	# /usr/etc/modload ./libafs.o

If the machine's kernel does not include support for NFS server functionality:

	# /usr/etc/modload ./libafs.nonfs.o
------------------------------------------------------------------------------

3. Modify the machine's initialization file (/etc/rc or equivalent) to invoke
modload, by copying in the contents of /usr/vice/etc/modload/rc.modload (the
contents appear in full in section 5.12).  Place the commands after the commands
that mount the file systems.  If the machine's kernel does not include support
for NFS server functionality, remember to substitute libafs.nonfs.o for
libafs.o.

 - Or, if installing a kernel built with AFS modifications:

1. If you did not build AFS modifications into a kernel during the installation
of a previous SunOS machine, follow the instructions in Section 5.8 (page 5-28).

2. Move the existing kernel on the local machine to a safe location.

	----------------------------
	# mv  /vmunix  /vmunix_save 
	----------------------------

3. Use a copying program (either cp or a remote program such as ftp or NFS) to
copy the AFS-modified kernel to the appropriate location.

4. Reboot the machine to start using the new kernel.

	---------------------------------
	# shutdown -r now
	---------------------------------

Step 2: Create a /vicepx directory for each partition that will house AFS
volumes.  The example instruction creates three directories.

	----------------------------------
	# mkdir /vicepa                
	# mkdir /vicepb                
	# mkdir /vicepc                
  	   and so on                  
	----------------------------------

Step 3: For each /vicep directory just created, add a line to /etc/fstab,
the "file systems registry" file.

-------------------------------------------------------------------
Add the following line to /etc/fstab for each /vicep directory. 

	/dev/<disk> /vicep<x> 4.2 rw 1 2                                

For example,                                                    

	/dev/sd0g /vicepa 4.2 rw 1 2                                    
-------------------------------------------------------------------

Step 4: Create a file system on each partition.  The syntax shown should
be appropriate, but consult the SunOS documentation for more information.

------------------------------------------------------------------
Repeat this command to create a file system on each partition. 

	# newfs -v /dev/rsd<xx>                                        
------------------------------------------------------------------

Step 5: Mount the partition(s) by issuing either the mount -a command to
mount all partitions at once or the mount command to mount each
partition in turn.

Step 6: Move standard fsck to a save file, install the AFS-modified fsck
("vfsck") to the standard location and link standard fsck to it.

	--------------------------------------------
	# mv  /usr/etc/fsck  /usr/etc/fsck.orig  
	# cp  /usr/afs/bin/vfsck  /usr/etc/vfsck 
	# rm  /etc/fsck                          
	# ln  -s  /usr/etc/vfsck  /etc/fsck      
	# ln  -s  /usr/etc/vfsck  /usr/etc/fsck  
	--------------------------------------------

Step 7: Proceed to Section 3.1.3 (page 3-45).

 3.1.2.8. GETTING STARTED ON ULTRIX SYSTEMS

On Ultrix systems, either load AFS modifications into the kernel dynamically
using dkload, or install an AFS-modified kernel previously built for an Ultrix
machine.  Then configure partitions and replace fsck.

Step 1: Incorporate AFS into the kernel, either dynamically using dkload,
or by installing a previously built kernel.

 - If using dkload:

1. Verify that the machine's local disk houses the needed files and
directories, as listed in Section 2.11.1.

2. Invoke dkload after running ranlib.

-------------------------------------------------------------------------------
If the machine's kernel does not include support for NFS server functionality,
you must substitute libafs.nonfs.a for libafs.a.  Either use the mv command to
replace libafs.a with libafs.nonfs.a in the /usr/vice/etc/dkload directory
(before issuing these commands), or make the substitution on the command line.

	# cd /usr/vice/etc/dkload
	# ranlib libafs.a
	# ranlib libcommon.a
	# ./dkload libafs.a
-------------------------------------------------------------------------------

3. Modify the machine's initialization file (/etc/rc or equivalent) to invoke
dkload by copying the contents of /usr/vice/etc/dkload/rc.dkload (the contents
appear in full in Section 5.10).  Place the commands after the commands that
mount the file systems.  If the machine's kernel does not include support for
NFS server functionality, remember to substitute libafs.nonfs.a for libafs.a.

 - Or, if installing a kernel built with AFS modifications:

1. If you did not build AFS modifications into a kernel during the installation
of a previous Ultrix machine, follow the instructions in Section 5.9 (page
5-35).

2. Move the existing kernel on the local machine to a safe location.

	-------------------------------
	# mv  /vmunix  /vmunix_save 
	-------------------------------

3. Use a copying program (either cp or a remote program such as ftp or NFS) to
copy the AFS-modified kernel to the appropriate location.

4. Reboot the machine to start using the new kernel.

	---------------------------------
	# shutdown -r now
	---------------------------------

Step 2: Create a /vicepx directory for each partition that will house AFS
volumes.  The example instruction creates two directories.

	------------------
	# mkdir /vicepa                
	# mkdir /vicepb                
		and so on                  
	------------------

Step 3: For each /vicep directory just created, add a line to /etc/fstab,
the "file systems registry" file.

-------------------------------------------------------------------
Add the following line to /etc/fstab for each /vicep directory. 

	/dev/<disk>:/vicep<x>:rw:1:2:ufs::                              

For example,                                                    

	/dev/rz4a:/vicepa:rw:1:2:ufs::                                  
-------------------------------------------------------------------
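As with the vfstab entries on System V platforms, the colon-separated Ultrix lines can be generated rather than typed. The device name rz4a is an example only.

```shell
# Emit an Ultrix-style colon-separated fstab entry for one AFS partition.
make_fstab_line() {
    printf '/dev/%s:/vicep%s:rw:1:2:ufs::\n' "$1" "$2"
}
line=$(make_fstab_line rz4a a)
```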

Step 4: Create a file system on each partition.  The syntax shown should
be appropriate, but consult the Ultrix documentation for more
information.

------------------------------------------------------------------
Repeat this command to create a file system on each partition. 

	# newfs -v /dev/rhd<xx> <disk type>                            
------------------------------------------------------------------

Step 5: Mount the partition(s) by issuing either the mount -a command to
mount all partitions at once or the mount command to mount each
partition in turn.

Step 6: Move standard fsck to a save file, install the AFS-modified fsck
("vfsck") to the standard location and link standard fsck to it.

	----------------------------------------
	# mv  /bin/fsck  /bin/fsck.orig      
	# cp  /usr/afs/bin/vfsck  /bin/vfsck 
	# rm  /etc/fsck                      
	# ln  -s  /bin/vfsck  /etc/fsck      
	----------------------------------------

Step 7: Proceed to Section 3.1.3 (page 3-45).

 3.1.3. INITIALIZE SERVER PROGRAMS

In this section you initialize the BOS Server, Update Server, runntp and File
Server processes.

Step 1: Start the BOS Server (bosserver), using the -noauth flag to
prevent the AFS processes from performing authorization checking.  Remember that
this is a grave compromise of security; finish the remaining instructions in
this section in an uninterrupted pass.

	--------------------------------------
	# /usr/afs/bin/bosserver -noauth & 
	--------------------------------------

Verify that the /usr/afs/etc/License file exists.

Step 2: If using the United States edition of AFS, create the upclientetc
process as an instance of the client portion of the Update Server (upclient).
It accepts updates of the common configuration files stored in the system
control machine's /usr/afs/etc directory from the upserver process (server
portion of the Update Server) running on the system control machine.  You should
already have followed the instructions in Chapter 2 to install your cell's first
file server machine as the system control machine.

In the United States edition of AFS, upclient requests that upserver transfer
files in encrypted form.  This is appropriate for /usr/afs/etc.

Do not issue this command with the international edition of AFS, because
encryption of user-level data is not possible.  The Update Server cannot encrypt
/usr/afs/etc, and its contents are too sensitive to cross the network
unencrypted.  You will have to update the contents of /usr/afs/etc by hand on
each file server machine.

By default, the Update Server performs updates every 300 seconds (five minutes).
Use the -t argument to specify a different number of seconds.

-------------------------------------------------------------------------------
If using the United States edition of AFS, type the following on a single line.
Substitute this machine's name for machine name and this cell's name for cell
name, in this and all remaining commands in this section.

Do not issue this command if using the international edition of AFS.

	# /usr/afs/bin/bos create <machine name> upclientetc simple "/usr/afs/bin/upclient <system control machine> -t <time> /usr/afs/etc" -cell <cellname>
-------------------------------------------------------------------------------

Step 3: Create an instance of the Update Server to handle distribution of
the file server binaries in /usr/afs/bin:

 - If this is the first file server machine of its AFS system type, create the
upserver process as an instance of the server portion of the Update Server.  It
will distribute its copy of the file server process binaries to the other file
server machines of this system type that you may install in future.  Creating
this process makes this machine the binary distribution machine for its type.

-----------------------------------------------------------------------
	On a new system type, type the following on a single line.

	# /usr/afs/bin/bos create <machine name> upserver simple
	  "/usr/afs/bin/upserver -clear /usr/afs/bin" -cell <cellname>
-----------------------------------------------------------------------

 - If this machine is an existing system type, create the upclientbin process as
an instance of the client portion of the Update Server.  It will accept updates
of the AFS binaries from the upserver process running on the binary distribution
machine for this machine's system type.  For distribution to work properly, you
must already have installed the upserver process on that machine.

In the United States edition of AFS, instances of the upclient process by
default request that upserver encrypt files before transferring them across the
network.  This is appropriate for /usr/afs/etc, because its contents are
sensitive.  It is not appropriate for /usr/afs/bin, because its contents are not
sensitive and encrypting binaries takes a long time.  You will use the -clear
flag for /usr/afs/bin so that upclientbin does not request encryption for it.

In the international edition of AFS, the upserver cannot access the
routines it needs to encrypt user-level data.  The upclient process
therefore must request that upserver send files in unencrypted form.
This is acceptable for /usr/afs/bin, since its contents are not
sensitive.

By default, the Update Server performs updates every 300 seconds
(five minutes).  Use the -t argument to specify a different number
of seconds.

-------------------------------------------------------------------------------
On an existing system type, type the command on a single line.  Substitute this
machine's name for machine name and this cell's name for cellname, in this and
all remaining commands in this section.

	# /usr/afs/bin/bos create <machine name> upclientbin simple "/usr/afs/bin/upclient <binary distribution machine> [-t <time>] -clear /usr/afs/bin" -cell <cellname>           
-------------------------------------------------------------------------------
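Because the quoting in this command is easy to get wrong, one approach is to assemble the command string first and inspect it before running it by hand. All names below are placeholders; substitute your own machine, binary distribution machine, and cell names.

```shell
# Placeholder names -- substitute your own before running anything.
machine=fs3.example.com
distmach=fs1.example.com
cell=example.com
# The inner single quotes keep the upclient command line as one bos argument.
cmd="/usr/afs/bin/bos create $machine upclientbin simple '/usr/afs/bin/upclient $distmach -t 300 -clear /usr/afs/bin' -cell $cell"
echo "$cmd"     # inspect the assembled command before typing it for real
```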

Step 4: Start the runntp process, which controls the Network Time
Protocol Daemon (NTPD), after verifying that the ntpd and ntpdc binaries exist
in /usr/afs/bin.  NTPD keeps the internal clock on this machine synchronized
with the clock on other file server machines in the cell.

Note: Do not perform this step if ntpd is already running on this machine;
attempting to run multiple instances of ntpd causes an error.  Similarly, you
can skip this step if some other time synchronization protocol is running on
this machine; running ntpd does not cause an error in this case, but is
unnecessary.

-------------------------------------------------------------------
Type each command on a single line.                             

	# ls /usr/afs/bin                                               

	# /usr/afs/bin/bos create <machine name> runntp simple          
	  /usr/afs/bin/runntp -cell <cell name> 
-------------------------------------------------------------------

Step 5: Start the fs process, which binds together the File Server,
Volume Server and Salvager.

-------------------------------------------------------------
Type the following command on a single line.                              

	# /usr/afs/bin/bos create <machine name>  fs  fs
	  /usr/afs/bin/fileserver /usr/afs/bin/volserver  
	  /usr/afs/bin/salvager  -cell <cellname>         

-------------------------------------------------------------

Step 6: Verify that the machine's initialization file invokes bosserver,
so that the BOS Server starts automatically at each file server reboot.

On Digital UNIX, IRIX, NCR UNIX and Solaris systems, no action is necessary.
The "init.d" initialization script includes tests that result in automatic BOS
Server start up if appropriate.

For system types other than Digital UNIX, IRIX, NCR UNIX, or Solaris:

---------------------------------------------------------------------------
Add the following lines to /etc/rc or equivalent, after the lines that configure
the network, mount all file systems, and invoke a kernel dynamic loader.

	if [ -f /usr/afs/bin/bosserver ]; then
	    echo 'Starting bosserver' > /dev/console
	    /usr/afs/bin/bosserver &
	fi
---------------------------------------------------------------------------

 3.1.4. COMPLETING THE INSTALLATION

The commands in several of the following steps (klog, bos setauth, and bos
restart) must be issued on an AFS client machine.  You must either

 - make this machine a client before continuing by following the instructions in
Sections 2.23 through 2.27.

 - issue the remaining commands on an existing AFS client machine (of any system
type), either at the console or over a remote connection.

If this is an additional file server machine of an existing type, the AFS
binaries for this system type presumably already reside in AFS, as recommended
in Section 2.31.  If they do not, follow the instructions in that section before
continuing.  If you decide to work on another client machine, remember to
perform the final step in Section 2.31, linking the local directory /usr/afsws
to the appropriate location in the AFS file tree, on the new file server machine
itself.

If this is the first file server machine of its type, you should also follow the
instructions in Section 2.31 to copy the AFS binaries for this system type into
an AFS volume.  Whether or not you perform the steps on another client machine,
remember to perform the final step in Section 2.31, linking the local directory
/usr/afsws to the appropriate location in the AFS file tree, on this machine
(the new file server machine).  You may also wish to create AFS volumes to house
UNIX system binaries for the new system type, as discussed in Section 2.32.

For system types other than Digital UNIX, IRIX, NCR UNIX, and Solaris, the
following should now appear in the machine's initialization file(s) in the
indicated order.  (Digital UNIX, IRIX, NCR UNIX, and Solaris systems use an
"init.d" initialization file that is organized differently.)

 - NFS commands, if appropriate (for example, if the machine will act as an
NFS/AFS translator). For AIX version 3.2.2 or lower, commands loading the NFS
kernel extensions (nfs.ext) should appear here; with AIX version 3.2.3 and
higher, NFS is already loaded into the kernel. Then invoke nfsd if the machine
is to be an NFS server.


 - dynamic kernel loader command(s), unless AFS was built into the kernel

 - bosserver

 - afsd (if the machine will remain a client after you complete this
installation)

Step 1: If this machine will remain an AFS client after you complete this
installation, replace standard login with the AFS version.


 - For AIX 3.2 systems:

For this system type, Transarc supplies both login.noafs, which is invoked when
AFS is not running on the machine, and login.afs, which is invoked when AFS is
running.  If you followed the instructions for loading the AFS rs_aix32 binaries
into an AFS directory and creating a local disk link to it, these files are
found in /usr/afsws/bin.  Note that standard AIX login is normally installed as
/usr/sbin/login, with links to /etc/tsm, /etc/getty, and /bin/login.  You will
install the replacement AFS binaries into the /bin directory.

1. Replace the link to standard login in /bin with login.noafs.

	--------------------------------------------------
	# mv  /bin/login  /bin/login.orig              
	# cp  /usr/afsws/bin/login.noafs  /bin/login   
	--------------------------------------------------

2. Replace the links from /etc/getty and /etc/tsm to standard login with links
to /bin/login.

	----------------------------------
	# mv  /etc/getty  /etc/getty.orig              
	# mv  /etc/tsm  /etc/tsm.orig                  
	# ln -s  /bin/login  /etc/getty                
	# ln -s  /bin/login  /etc/tsm                  
	-----------------------------------

3. Install login.afs into /bin and create a symbolic link to /etc/afsok.

	--------------------------------------------------
	# cp  /usr/afsws/bin/login.afs  /bin/login.afs 
	# ln -s  /bin/login.afs  /etc/afsok            
	--------------------------------------------------

 - For AIX 4.1 systems:

Before beginning, verify that the afs_dynamic_auth program has been installed in the local /usr/vice/etc directory.

1. Set the registry variable in the /etc/security/user to DCE on the local
client machine.  Note that you must set this variable to DCE (not AFS).

	------------------
	registry = DCE
	------------------

2. Set the registry variable for the user root to files in the same file
(/etc/security/user) on the local client machine.  This allows the user root to
authenticate by using the local password "files" on the local machine.

	------------------------
	root: 
	        registry = files
	------------------------

3. Set the SYSTEM variable in the same file (/etc/security/user).
The setting depends upon whether the machine is an AFS client only or both an
AFS and a DCE client.

-----------------------------------------------------------------------------
If the machine is an AFS client only, set the SYSTEM variable to be:

	SYSTEM = "AFS OR AFS [UNAVAIL] AND compat [SUCCESS]"

If the machine is both an AFS and a DCE client, set the SYSTEM variable to be:

	SYSTEM = "DCE OR DCE [UNAVAIL] OR AFS OR AFS [UNAVAIL] AND compat [SUCCESS]"
-----------------------------------------------------------------------------
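The fragments above can be assembled into a scratch copy of /etc/security/user and checked before the real file is edited. The stanza layout below is an assumption pieced together from the fragments shown; verify it against your AIX documentation before applying it.

```shell
# Scratch copy standing in for /etc/security/user; layout is assumed.
f=$(mktemp)
cat > "$f" <<'EOF'
default:
        registry = DCE
        SYSTEM = "AFS OR AFS [UNAVAIL] AND compat [SUCCESS]"

root:
        registry = files
EOF
# Pull the registry values back out as a sanity check on the stanzas.
default_registry=$(awk '/^default:/ {s=1; next} /^root:/ {s=0}
                        s && $1 == "registry" {print $3; exit}' "$f")
root_registry=$(awk '/^root:/ {s=1; next}
                     s && $1 == "registry" {print $3; exit}' "$f")
```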

4. Define DCE in the /etc/security/login.cfg file on the client machine. In this
definition and the following one for AFS, the program attribute specifies the path of the program to be invoked.

	---------------------------------------------
	DCE:
	     program = /usr/vice/etc/afs_dynamic_auth
	---------------------------------------------

5. Define the AFS authentication program in the /etc/security/login.cfg file on
the local client machine as follows:

	---------------------------------------------
	AFS:
	     program = /usr/vice/etc/afs_dynamic_auth
	---------------------------------------------

 - For IRIX systems, you do not need to replace the login binary: Silicon
Graphics, Inc. has modified IRIX login to operate the same as AFS login when the
machine's kernel includes AFS.  However, you do need to verify that the local
/usr/vice/etc directory contains the two libraries provided with AFS and
required by IRIX login, afsauthlib.so and afskauthlib.so.


	-------------------------------------------------------
	Output should include afsauthlib.so and afskauthlib.so.

	# ls /usr/vice/etc
	-------------------------------------------------------

 - For system types other than AIX and IRIX, the replacement AFS login binary
resides in /usr/afsws/bin, if you followed the instructions for loading the AFS
binaries into an AFS directory and creating a local disk link to it.  Install
the AFS login as /bin/login.

1. Replace standard login with AFS login.

	------------------------------------------
	# mv  /bin/login  /bin/login.orig      
	# cp  /usr/afsws/bin/login  /bin/login 
	------------------------------------------

Step 2: Authenticate as admin (or whatever name you assigned the
administrative account in Section 2.15).

	----------------
	# klog admin 
	Password:    
	----------------

Step 3: Turn on authorization checking.

-------------------------------------------------------------------
Substitute the new file server machine's name for machine name. 

	# /usr/afs/bin/bos setauth <machine name> on  -cell <cellname>  
-------------------------------------------------------------------

Step 4: Verify that /usr/afs and its subdirectories on the new file
server machine meet the ownership and mode bit requirements outlined in 
Section 2.35.3.  If necessary, use the chmod commands shown there to correct the mode
bits.

Step 5: If you made this machine a client and now wish to remove the
client functionality, see Section 2.38.

If this machine is a Solaris or SunOS system and it will remain a client, then
follow the instructions in Section 2.37 to alter its file system clean-up files.

Step 6: Restart the BOS Server, which will start all processes you
created with bos create commands in this section.

-------------------------------------------------------------------------
Substitute the new file server machine's name for machine name.       

	# /usr/afs/bin/bos restart <machine name> -bosserver -cell <cellname> 
-------------------------------------------------------------------------

Step 7: If you want this machine to be a database server machine, proceed
to Section 3.2.

 3.2. INSTALLING DATABASE SERVER FUNCTIONALITY ON AN
EXISTING FILE SERVER MACHINE

Read the following information before you install a database server machine:

 - Database server machines are unique because they

 .. run the Authentication Server, Protection Server, and Volume Location
Server. They also run the Backup Server if the cell uses the AFS Backup System,
as is assumed in these instructions.

 .. must appear in the CellServDB file of every machine in the cell (and of
client machines in foreign cells, if they are to access files in this cell)

 - It is recommended, but not required, that you configure your database server
machines as AFS file server machines.  If you choose not to configure a database
server machine as a file server machine, then the kernel does not have to
incorporate AFS modifications, but the local disk must include most of the files
and directories found under /usr/afs on a file server machine.

The instructions in this section assume that the machine on which you are
installing database server functionality is already a file server machine.
Contact AFS Product Support to learn how to install database server
functionality on a non-file server machine.

 - During the installation of database server functionality, you must restart
all of the database server machines to force the election of a new Ubik
coordinator (sync site) for each database server process.  AFS access is
impossible throughout your cell during the election, which usually takes less
than 5 minutes.  You may want to schedule database server installation for times
when activity is light.  (Chapter 2 of the AFS System Administrator's Guide
discusses Ubik's election procedures in some detail.)

 - Updating the in-kernel list of database server machines on each of your
cell's client machines is generally the most time- and labor-intensive part of
installing a new database server machine, but it is arguably the most crucial
for correct functioning in your cell.  Incorrect knowledge of your cell's
database server machines can prevent your users from authenticating, accessing
files, and issuing kas, pts, and vos commands.

You update a client's in-kernel list either by changing CellServDB and
rebooting, or by issuing the fs newcell command; see Chapter 15 of the AFS
System Administrator's Guide for instructions.
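For reference, a client-side /usr/vice/etc/CellServDB entry consists of a cell
line beginning with ">" followed by one line per database server machine.  The
cell name, addresses, and host names below are hypothetical examples, not values
to use verbatim.

	--------------------------------------------------
	>example.com            #Example Corporation cell
	192.12.105.2            #db1.example.com
	192.12.105.33           #db2.example.com
	--------------------------------------------------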

The point at which you need to update your clients' knowledge of database server
machines depends on which of the database server machines has the lowest IP
address:

 .. If the new database server machine has a lower IP address than any of the
existing database server machines, then you should update every client before
you restart the database server processes.  If you do not, users may become
unable to update (write to) any of the AFS databases.  This is because the
machine with the lowest IP address is elected Ubik coordinator (sync site) under
normal circumstances, and database writes are possible only at the sync site's
copy.  Clients that you have not updated will not be able to contact the new
sync site.  (Be aware that if clients contact the new database server machine
before it is actually in service, they will experience a timeout before
contacting another database server machine.  This is a minor, and temporary,
problem compared to being unable to write to the database.)

 .. If the new database server machine does not have the lowest IP address of
any database server machine, then it is better to update clients after
restarting the database server processes.  Clients will not start using the new
database server machine until you update their in-kernel list, but that does not
cause timeouts or update problems (because the new machine is not likely to
become the sync site).

The following instructions indicate the appropriate place to update your clients
in either case.
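To check which case applies, you can compare the new machine's address against
the existing ones, since under normal circumstances Ubik elects the machine with
the numerically lowest IP address as coordinator.  A minimal sketch, using
hypothetical addresses:

```shell
# Sort the database server machines' IP addresses octet by octet; the first
# line of output is the address Ubik normally elects as coordinator.
# The three addresses below are hypothetical -- substitute your own.
printf '%s\n' 192.12.105.2 192.12.105.33 128.2.1.4 |
    sort -t . -k1,1n -k2,2n -k3,3n -k4,4n | head -1
# prints 128.2.1.4
```

If the address printed is the new machine's, update every client before
restarting the database server processes.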

 3.2.1. PROCEDURE OVERVIEW

To install a database server machine, you will

1. install the bos suite of commands locally, as a precaution

2. add the new machine to /usr/afs/etc/CellServDB on existing file
server machines

3. update the CellServDB and/or in-kernel list on every client machine

4. start the Authentication Server

5. start the Protection Server

6. start the Volume Location Server

7. start the Backup Server

8. restart the database server processes on every database server
machine

9. inform Transarc Corporation of the new database server machine

 3.2.2. INSTRUCTIONS

Step 1: As a precaution, make sure that the bos command suite binaries
are available on the local disk.  Having a local copy of bos commands is
necessary in case of an error during installation, since the following
procedures make the file system inaccessible.

 - If you are working on a file server machine, the bos command binary already
resides in /usr/afs/bin.  No action is necessary.

 - If you are working on a client workstation, the bos binary usually resides in
/usr/afsws/bin, which is normally a symbolic link into AFS. If so, copy the bos
binary to /tmp.

	------------------------------
	# cp  /usr/afsws/bin/bos  /tmp 
	------------------------------

Step 2: Add the new database server machine to the
/usr/afs/etc/CellServDB file on existing file server machines.

If using the United States edition of AFS, issue the following command to update
CellServDB on your cell's system control machine.  The server portion of the
Update Server on that machine distributes the updated file to all other file
server machines.

If using the international edition of AFS, you should not be using the Update
Server to distribute files from /usr/afs/etc.  Instead, update CellServDB on
each file server machine individually.

-------------------------------------------------------------------------------
In the bos addhost command, provide the full Internet-style machine name
(including cell extension) for both machine name to update and new db-server
machine.

If using the United States edition of AFS, substitute the system control
machine's name for machine name to update.

If using the international edition of AFS, repeat the bos addhost command once
for each server machine in your cell, substituting each one's name for machine
name to update in turn.

	# cd /usr/afs/bin
	# bos addhost <machine name to update> <new db-server machine>
-------------------------------------------------------------------------------

If using the United States edition, wait several minutes to allow
the Update Server to distribute the new CellServDB to all other AFS
file server machines in the cell. By default this takes a maximum of
five minutes. If using the international edition, attempt to issue
all of the bos addhost commands within five minutes.

Step 3: Verify that the new database server machine appears in CellServDB
on all file server machines.

--------------------------------------------------------
Note: Repeat this command on every file server machine. 

	# bos listhosts <machine name>                          
--------------------------------------------------------

Step 4: Add the new database server machine to your cell's central update
source for the client machine version of CellServDB, if there is one.  The
standard location for the central file is /afs/cellname/common/etc/CellServDB.

If you are willing to advertise your cell to other cells, add the new database
server machine to the file that foreign cells can consult to learn about your
database server machines.  Transarc recommends
/afs/cellname/service/etc/CellServDB.local as the standard location.

Step 5: If this machine's IP address is lower than any existing database
server machine's, then update every client machine's /usr/vice/etc/CellServDB
file and in-kernel list to include this machine.  (If this machine's IP address
is not the lowest, it is acceptable to wait until Step 11.)

There are several ways to update CellServDB on client machines, as detailed in
Chapter 15 of the AFS System Administrator's Guide.  You might, for instance,
copy over the central update source (which you updated in Step 4), with or
without the package program.  To update the machine's in-kernel list, you can
either reboot the machine after changing CellServDB, or issue the fs newcell
command.
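As one sketch of this step on a single client, the copy and the in-kernel update
can be issued together.  All names and paths below are placeholders for your own
cell, and the echo prefix makes each command a dry run; remove it to execute.

```shell
# Install the updated central CellServDB on this client, then refresh the
# Cache Manager's in-kernel list without rebooting.  "example.com" and the
# db-server machine names are hypothetical.
echo cp /afs/example.com/common/etc/CellServDB /usr/vice/etc/CellServDB
echo fs newcell example.com db1.example.com db2.example.com db4.example.com
```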

Step 6: Start the Authentication Server, kaserver.

---------------------------------------------------------------------
     # bos create <machine name> kaserver simple /usr/afs/bin/kaserver 
---------------------------------------------------------------------

Step 7: Start the Protection Server, ptserver.

---------------------------------------------------------------------
     # bos create <machine name> ptserver simple /usr/afs/bin/ptserver 
---------------------------------------------------------------------

Step 8: Start the Volume Location (VL) Server, vlserver.

---------------------------------------------------------------------
     # bos create <machine name> vlserver simple /usr/afs/bin/vlserver 
---------------------------------------------------------------------

Step 9: Start the Backup Server, buserver.  The chapter entitled "Backing
Up the System" in the AFS System Administrator's Guide details the other
instructions you must perform before actually using the Backup System.

---------------------------------------------------------------------
     # bos create <machine name> buserver simple /usr/afs/bin/buserver 
---------------------------------------------------------------------
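Steps 6 through 9 follow a single pattern, so on a machine that is to run all
four processes they can be issued in one loop.  The machine name
db4.example.com below is a placeholder, and the echo prefix makes this a dry
run.

```shell
# Create the four database server processes, one bos create per process.
# Remove "echo" to actually issue the commands.
for proc in kaserver ptserver vlserver buserver; do
    echo /usr/afs/bin/bos create db4.example.com $proc simple /usr/afs/bin/$proc
done
```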

Step 10: Restart the Authentication, Protection, and Volume Location
Servers on every database server machine in the cell, including the new server.
This forces an election of a new Ubik coordinator site for each of the
processes; the new machine votes in the election and is considered a potential
new coordinator.

A cell-wide service outage is possible during the election of a new Ubik
coordinator for the VL Server, but should last less than five minutes.  Such an
outage is particularly likely if you are installing your cell's second database
server machine.  Messages tracing the progress of the election should appear on
the console.

-------------------------------------------------------------------------------
Note: Repeat this command, substituting each of your cell's database server
machines for database server machine in quick succession.

   # bos restart <database server machine> kaserver ptserver vlserver buserver 
-------------------------------------------------------------------------------
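Issued from one administrative machine, the repetition can be scripted.  The
machine names below are hypothetical, and the echo prefix makes this a dry run;
remove it to execute.

```shell
# Restart the database server processes on every database server machine in
# quick succession; list your cell's machines in place of these names.
for m in db1.example.com db2.example.com db4.example.com; do
    echo bos restart $m kaserver ptserver vlserver buserver
done
```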

If an error occurs (perhaps because you accidentally omit one of the processes
on one machine), you should restart all server processes on the database server
machines, either by

 - issuing bos restart with the -bosserver flag for each database server machine

 - rebooting each database server machine, either using bos exec or at their
consoles

Step 11: If you have not already updated CellServDB on client machines in
Step 5, do so now.

There are several ways to update CellServDB on client machines, as detailed in
Chapter 15 of the AFS System Administrator's Guide.  You might, for instance,
copy over the central update source (which you updated in Step 4), with or
without the package program.

To have a client machine's Cache Manager recognize the new database machine
right away, add it to the machine's in-kernel list.  Either reboot the machine
after updating CellServDB, or issue the fs newcell command.

Step 12: Inform AFS Product Support of the name and Internet address of
the new database server machine.

Transarc maintains /afs/transarc.com/service/etc/CellServDB.export, a global
CellServDB file that is available to all AFS sites.  Your new database server
machine will be added to this file.

If you do not want other AFS sites to know about this database
server, specify that you do not want your CellServDB exported to
other cells.  Transarc will place your cell in a private CellServDB.

 3.3. REMOVING DATABASE SERVER FUNCTIONALITY

The process of removing database server machine functionality is nearly the
reverse of installing it.

 3.3.1. PROCEDURE OVERVIEW

To remove database server functionality from a machine, you will

1. install the bos suite of commands locally, as a precaution

2. inform Transarc Corporation of the removal of the database server
machine

3. update your cell's central update source for CellServDB and the file
you make available to foreign users

4. update CellServDB on every client machine

5. remove the machine from /usr/afs/etc/CellServDB on the system
control machine

6. stop the database server processes on the machine

7. remove the database server processes from the machine's BosConfig

8. restart the database server processes on the remaining database
server machines

 3.3.2. INSTRUCTIONS

Step 1: As a precaution, make sure that the bos command suite binaries
are available on the local disk.  Having a local copy of bos commands is
necessary in case of an error during installation, since the following
procedures make the file system inaccessible.

 - If you are working on a file server machine, the bos command binary already
resides in /usr/afs/bin.  No action is necessary.

 - If you are working on a client workstation, the bos binary usually resides in
/usr/afsws/bin, which is normally a symbolic link into AFS. If so, copy the bos
binary to /tmp.

	---------------------------------
	# cp /usr/afsws/bin/bos  /tmp 
	---------------------------------

Step 2: Inform AFS Product Support which database server machine you are
removing from service.

This step is particularly important if your cell is included in the national
CellServDB file (/afs/transarc.com/service/etc/CellServDB.export) that Transarc
makes available to all AFS sites.  If other cells do not know about the change
and thus do not remove the machine from CellServDB on their client machines,
users in those cells may experience delays while requests to the non-existent
database server machine time out.

Step 3: Remove the database server machine from your cell's central
update source for the client machine version of CellServDB, if there is one.
The standard location for the central file is
/afs/cellname/common/etc/CellServDB.

If you maintain a file that foreign cells can consult to learn about your
database server machines, remove the database server machine from it.  Transarc
recommends /afs/cellname/service/etc/CellServDB.local as the standard location.

Step 4: Remove the database server machine from the
/usr/vice/etc/CellServDB file on every client machine in your cell.  There are
several ways to update CellServDB on client machines, as detailed in Chapter 15
of the AFS System Administrator's Guide.  You might, for instance, copy over the
central update source (which you updated in the previous step), with or without
the package program.

You remove the database server machine from CellServDB before stopping the
actual database server processes to avoid delays in database server access.  If
a process on a client machine sends a request to a non-existent database server
machine (because it is still listed in CellServDB), then users on the machine
experience a delay while the request times out and is forwarded to another
database server machine.

Step 5: Remove the former database server machine from the
/usr/afs/etc/CellServDB file on file server machines.

If using the United States edition of AFS, issue the following command to update
CellServDB on your cell's system control machine.  The server portion of the
Update Server on that machine distributes the updated file to all other file
server machines.

If using the international edition of AFS, you should not be using the Update
Server to distribute files from /usr/afs/etc.  Instead, update CellServDB on
each file server machine individually.

-------------------------------------------------------------------------------
In the bos removehost command, provide the full Internet-style machine name
(including cell extension) for both machine name to update and former db-server
machine.

If using the United States edition of AFS, substitute the system control
machine's name for machine name to update.

If using the international edition of AFS, repeat the bos removehost command
once for each server machine in your cell, substituting each one's name for
machine name to update in turn.

	# cd /usr/afs/bin
	# bos removehost <machine name to update> <former db-server machine>
-------------------------------------------------------------------------------

If using the United States edition, wait several minutes to allow the Update
Server to distribute the new CellServDB to all other AFS file server machines in
the cell. By default this takes a maximum of five minutes. If using the
international edition, attempt to issue all of the bos removehost commands
within five minutes.

Step 6: Verify that the former database server machine no longer appears
in CellServDB on all file server machines.

	-----------------------------------------------------------
	Note: Repeat this command on every file server machine. 

	# bos listhosts <machine name>                          
	-----------------------------------------------------------

Step 7: Stop the database server processes on the machine.  This command
changes the processes' status in /usr/afs/local/BosConfig to NotRun, but does
not remove their entries from the file.

-----------------------------------------------------------------------------
   # bos stop <former db-server machine> kaserver ptserver vlserver buserver 
-----------------------------------------------------------------------------

Step 8: (Optional.)  Remove the entries for the database server processes
from BosConfig.  One reason not to execute this command is if you plan to
re-install the database server functionality on this machine soon.

-------------------------------------------------------------------------------
   # bos delete <former db-server machine> kaserver ptserver vlserver buserver 
-------------------------------------------------------------------------------

Step 9: Restart the database server processes on every remaining database
server machine in the cell.  This forces the election of a Ubik coordinator for
each process, ensuring that the remaining database server processes recognize
that the machine is no longer a database server and exclude it from the
election.

A cell-wide service outage is possible during the election of a new Ubik
coordinator for the Volume Location Server, but should last less than five
minutes.  Messages tracing the progress of the election should appear on the
console.

-------------------------------------------------------------------------------
Repeat on every remaining database server machine:                          
 
   # bos restart <database server machine> kaserver ptserver vlserver buserver 
-------------------------------------------------------------------------------

If an error occurs (perhaps because you accidentally omit one of the processes
on one machine), you should restart all server processes on the database server
machines, either by

 - issuing bos restart with the -bosserver flag for each database server machine

 - rebooting each database server machine, either using bos exec or at their
consoles