AFS 3.4a Release Notes


1. Summary of Changes

These release notes describe the changes and new features included in the AFS® 3.4a release. These changes are not documented in the AFS System Administrator's Guide, AFS Command Reference Manual, or AFS User's Guide, although in some cases you are referred to the AFS documentation set for more information. This information is included in the sections titled ``AFS 3.4a Changes.''

Note that this document also contains AFS 3.3 release note information that has not been incorporated into the AFS documentation set. This information is included in the sections titled ``AFS 3.3 Changes.''

AFS 3.4a supports multihomed file servers for the first time; refer to the section on multihomed file servers in Chapter 14 for more information. The Backup System includes many changes to enhance performance and produce clearer status and error messages; refer to the information on the AFS Backup System in Chapter 5. In addition, many changes have been incorporated to fix bugs and improve the performance of the Cache Manager, database servers, file servers, and the NFS/AFS Translator.

Note: Transarc provides backward compatibility to the previous release of AFS only. Therefore, except for the incompatibilities described in Section 3.3, AFS 3.4a is compatible with AFS 3.3; however, AFS 3.4a is not compatible with AFS 3.2.

The AFS 3.4a release includes the following interface and functional changes:

Chapter 2 -     Supported AFS Systems

AFS 3.4a provides support for the following systems:

  1. AIX 3.2 and 4.1 on an IBM RS/6000
  2. Digital UNIX 2.0, 3.0, and 3.2 (formerly known as DEC OSF/1) on a DEC AXP
  3. HP-UX 9.01, 9.03, and 9.05 on a Hewlett-Packard 9000 Series 700
  4. HP-UX 9.0, 9.02, and 9.04 on a Hewlett-Packard 9000 Series 800
  5. IRIX 5.2 and 5.3 on a Silicon Graphics system
  6. NCR 3.0 on an AT&T/NCR 3000 system
  7. Solaris 2.3 and 2.4 on a ``sun4'' (except SPARCstations)
  8. Solaris 2.3 and 2.4 on a Sun SPARCstation IPC (and other models with ``sun4c'' kernel architecture, excluding ``sun4'')
  9. Solaris 2.3 and 2.4 on a Sun SPARCstation 4, 10, 20, and SPARCclassic (and other models with ``sun4m'' kernel architecture)
  10. SunOS 4.1.1, 4.1.2, and 4.1.3 on a ``sun4'' (except SPARCstations)
  11. SunOS 4.1.1, 4.1.2, and 4.1.3 on a Sun SPARCstation IPC (and other models with ``sun4c'' kernel architecture, excluding ``sun4'')
  12. SunOS 4.1.2 and 4.1.3 on a Sun SPARCstation 4, 10, 20, and SPARCclassic (and other models with ``sun4m'' kernel architecture)
  13. Ultrix 4.3 on a DECstation 2100, 3100, or 5000 (single processor only)
  14. Ultrix 4.3a and 4.4 on a DECstation 2100, 3100, or 5000 (single processor only)

Chapter 3 -     Upgrading to AFS 3.4a

AFS 3.4a provides procedures and instructions for

  1. Upgrading your AFS cell from AFS 3.3 to AFS 3.4a
  2. Upgrading your AFS cell from AFS 3.4 to AFS 3.4a
  3. Upgrading your AFS cell from AFS 3.2 to AFS 3.4a
  4. Downgrading your AFS cell from AFS 3.4a to AFS 3.3a

Chapter 4 -     Authentication

AFS 3.4a provides changes to AFS authentication and login programs, including

  1. Kerberos support through UDP ports 88 and 750
  2. Support for the # and ! character entries in the /etc/passwd file with the AIX login program
  3. An alternative authentication program, afs_dynamic_auth, with the AIX 4.1 login program
  4. Several changes to the Digital login program, including support for the x character on password entries
  5. Support for the /etc/default/login file and variables in the Solaris login program

Chapter 5 -     The Backup System

AFS 3.4a provides several major and minor changes to the Backup System, including

  1. A new backup configuration file to provide jukebox and stacker tape support
  2. A new backup interactive command for placing you into interactive mode for all commands in the backup command suite
  3. A new backup volsetrestore command for restoring a chosen set of volumes to their original location or taking a list of volumes from a file and restoring them to a server and partition
  4. A new -localauth flag and -cell argument added to all backup commands
  5. Improved error messages and error handling
  6. Scanning of any tape by the butc command
  7. Support for permanent names on tapes with the backup labeltape and backup readlabel commands

Chapter 6 -     The bos Commands

AFS 3.4a includes a change to the bos addkey command that prevents the reuse of an existing key version number and prompts you to enter the key a second time for verification.

Chapter 7 -     The fs Commands

AFS 3.4a provides several major and minor changes to the fs command suite, including

  1. A new fs storebehind command that sets asynchronous writes to file servers on a per-cache manager basis
  2. The addition of VL server preferences to the fs getserverprefs and fs setserverprefs commands
  3. A new argument structure for the fs exportafs command so that command arguments now only support values of on and off

Chapter 8 -     The fstrace Commands

AFS 3.4a includes a new command suite, fstrace, consisting of eight commands that are used by the system administrator to diagnose problems within the AFS Cache Manager. The new fstrace command suite includes

  1. The fstrace setset command sets an event set's state.
  2. The fstrace setlog command changes the size of trace logs.
  3. The fstrace dump command dumps the contents of the trace logs.
  4. The fstrace lslog command lists information about trace logs.
  5. The fstrace lsset command lists information about event sets.
  6. The fstrace clear command clears trace logs.

Chapter 9 -     The kas Commands

AFS 3.4a provides several changes to the kas command suite, including

  1. An increase in the lifetime of the ticket received from kas commands
  2. An update to the kas examine command to show the value of the kas setfields -reuse argument

Chapter 10 -     The package Command

AFS 3.4a provides several changes to the package command and configuration lines, including

  1. Support for relative pathnames on the ``L'' configuration line
  2. Support for hexadecimal, octal, and decimal numbers on the minor device argument
  3. Support for interpreting the owner argument as a user name or user ID
  4. Support for interpreting the group argument as a group name or group ID

Chapter 11 -     The uss Commands

AFS 3.4a includes a new flag, -pipe, in the uss bulk command to assist you in running batch jobs without displaying the password prompt.

Chapter 12 -     The vos Commands

AFS 3.4a provides several major and minor changes to the vos command suite, including

  1. A new -maxquota argument for the vos create command to allow you to create the volume and set the quota in the same step
  2. An enhancement to the vos release command to update up to half of the ReadOnly copies simultaneously and to account for files deleted from the ReadWrite volume by not replicating them
  3. Updates to the vos restore command to check if a specified volume exists on the specified partition and a new -overwrite argument for handling full and incremental restores to existing volumes
  4. An update to the vos syncserv command to check all servers and to continue checking the remaining servers even if some of them cannot be contacted

Chapter 13 -     Miscellaneous AFS Commands

AFS 3.4a provides several major and minor changes to miscellaneous AFS commands, including

  1. Improvements to the afsd command, which now verifies at startup that the cache size does not exceed 95% of the cache partition's size, and now tolerates any extra white space found while reading the /usr/vice/etc/cacheinfo file instead of failing
  2. Updates to the butc command to support the -localauth argument, which assigns tokens that never expire, and two new legal values for the -debuglevel argument
  3. A new -implicit argument for the fileserver command
  4. Two new arguments for the salvager command
  5. A new Volume Location Database (VLDB) conversion program for the vldb_convert command
  6. A new log file for the vlserver process
  7. A new -p argument for the volserver command

Chapter 14 -     Additional Functional Changes

AFS 3.4a incorporates several functional changes, including

  1. Support for multihomed file servers
  2. Support for unlinking open files
  3. A program for converting vice partitions between Digital UNIX version 2.0 and 3.x data formats on DEC AXP machines
  4. Support for UFS partitions larger than 2 GB
  5. Support for 256 partitions per server
  6. Support for 8-bit international characters in filenames and directories
  7. Improved database access during Ubik quorum elections
  8. Detection of a FORCESALVAGE flag by the fileserver process
  9. Additional support for AIX 3.2

Chapter 15 -     Bug Fixes

Several comments describing bugs fixed in AFS 3.4a are included in this chapter.

Chapter 16 -     Documentation Corrections

Several comments describing documentation corrections are included in this chapter.

2. Supported AFS Systems

AFS 3.4a supports a number of new systems while dropping support for some obsolete systems. A complete list of the supported systems appears in a table at the end of this chapter.

2.1. New AFS Systems

The following supported systems are new for AFS 3.4a:

     
  1. AT&T/NCR System 3000 running NCR 3.0
  2. DEC AXP running Digital UNIX 2.0
  3. DEC AXP running Digital UNIX 3.0
  4. DEC AXP running Digital UNIX 3.2
  5. IBM RS/6000 running AIX 4.1
  6. Silicon Graphics system running IRIX 5.2
  7. Silicon Graphics system running IRIX 5.3
  8. Sun 4 (except SPARCstations) running Solaris 2.4
  9. Sun SPARCstation IPC (and other models with ``sun4c'' kernel architecture) running Solaris 2.4
  10. Sun SPARCstation 4, 10, 20, and SPARCclassic (and other models with ``sun4m'' kernel architecture) running Solaris 2.4

2.2. Enhanced AFS Systems

The following supported system is enhanced for AFS 3.4a:

     
  1. Hewlett-Packard 9000 Series 800 MP (multi-processor) running HP-UX 9.0, 9.02, and 9.04

2.3. Obsolete AFS Systems

The following systems that were supported in AFS 3.3 or AFS 3.3a are not supported in AFS 3.4a:

     
  1. DEC AXP running Digital 1.0
  2. Hewlett-Packard 9000 Series 300/400 running HP-UX 9.0
  3. NeXT (68030 or 68040 systems) running NeXT OS Release 3.0
  4. IBM-RT/PC running AIX 2.2.1
  5. IBM-RT/PC running AOS Release 4
  6. Silicon Graphics system running IRIX 5.0.1 (client only)
  7. Silicon Graphics system running IRIX 5.1 or 5.1.1
  8. Sun 3 (68020 systems) running SunOS 4.1.1, 4.1.2, or 4.1.3
  9. Sun 3 (68030 systems) running SunOS 4.1.1, 4.1.2, or 4.1.3
  10. Sun 4 (except SPARCstations) running Solaris 2.2
  11. Sun SPARCstation IPC (and other models with ``sun4c'' kernel architecture) running Solaris 2.2
  12. Sun SPARCstation 4, 10, 20, and SPARCclassic (and other models with ``sun4m'' kernel architecture) running Solaris 2.2
  13. VAX systems running Ultrix 4.3

2.4. Supported AFS Systems

Table 2-1 lists all of the systems supported in AFS 3.4a. As in previous versions of AFS, use the system names shown in the table if you wish to use the @sys variable in pathnames (as discussed in Chapter 2 of the AFS System Administrator's Guide). The fs sysname command allows you to override the default value of the @sys variable.
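
For example, to display a client machine's current system name, or to set it explicitly (the value shown is illustrative):

    # fs sysname
    Current sysname is 'sun4m_54'

    # fs sysname -newsys sun4m_54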
 

Table 2-1. Supported AFS Systems

System Name   Machines                                                Operating Systems

AT&T/NCR Machines
ncrx86_30     AT&T/NCR System 3000                                    NCR 3.0

Digital Machines
alpha_osf20   DEC AXP                                                 Digital UNIX 2.0
alpha_osf30   DEC AXP                                                 Digital UNIX 3.0
alpha_osf32   DEC AXP                                                 Digital UNIX 3.2
pmax_ul43     DECstation 2100, 3100, or 5000 (single processor only)  Ultrix 4.3
pmax_ul43a    DECstation 2100, 3100, or 5000 (single processor only)  Ultrix 4.3a or 4.4

Hewlett-Packard Machines
hp700_ux90    Hewlett-Packard 9000 Series 700                         HP-UX 9.01, 9.03, or 9.05
hp800_ux90    Hewlett-Packard 9000 Series 800                         HP-UX 9.0, 9.02, or 9.04
hp800_ux90    Hewlett-Packard 9000 Series 800 MP                      HP-UX 9.0

IBM Machines
rs_aix32      IBM RS/6000                                             AIX 3.2
rs_aix41      IBM RS/6000                                             AIX 4.1

Silicon Graphics Machines
sgi_52        Silicon Graphics                                        IRIX 5.2
sgi_53        Silicon Graphics                                        IRIX 5.3

Sun Machines
sun4_411      Sun 4 (except SPARCstations)                            SunOS 4.1.1, 4.1.2, or 4.1.3
sun4c_411     Sun SPARCstation IPC (and other models with "sun4c" kernel architecture)  SunOS 4.1.1, 4.1.2, or 4.1.3
sun4m_412     Sun SPARCstation 4, 10, 20, and SPARCclassic (and other models with "sun4m" kernel architecture)  SunOS 4.1.2 or 4.1.3
sun4_53       Sun 4 (except SPARCstations)                            Solaris 2.3
sun4c_53      Sun SPARCstation IPC (and other models with "sun4c" kernel architecture)  Solaris 2.3
sun4m_53      Sun SPARCstation 4, 10, 20, and SPARCclassic (and other models with "sun4m" kernel architecture)  Solaris 2.3
sun4_54       Sun 4 (except SPARCstations)                            Solaris 2.4
sun4c_54      Sun SPARCstation IPC (and other models with "sun4c" kernel architecture)  Solaris 2.4
sun4m_54      Sun SPARCstation 4, 10, 20, and SPARCclassic (and other models with "sun4m" kernel architecture)  Solaris 2.4

2.5. System-Specific Warnings

2.5.1. Hewlett-Packard Systems

When specifying the chunk size on HP-UX systems, use the default value (64 kilobytes). The use of a chunk size larger than the default may cause HP-UX systems to hang.
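
The chunk size is set with the afsd command's -chunksize argument, which takes the base-two logarithm of the desired size in bytes; the 64-kilobyte default corresponds to 2^16. On HP-UX systems, either omit the argument entirely or pass the default value explicitly, as in this sketch (the afsd location shown is the conventional one):

    # /usr/vice/etc/afsd -chunksize 16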

2.5.2. SGI Systems

Do not run the SGI File System Reorganizer (fsr) on the /usr/vice/cache or /vicepx partitions. Running fsr on these partitions can cause corruption of the AFS cache.

3. Upgrading to AFS 3.4a

This chapter explains how to upgrade your cell to AFS 3.4a from a previous version of AFS. If you are installing AFS for the first time, skip this chapter and refer to the AFS Installation Guide. Before performing the upgrade, please read all of the introductory material in the following sections.

     
  1. Section 3.1, Warnings About Operating System Upgrades
  2. Section 3.2, Prerequisites for Upgrading
  3. Section 3.3, Notes on Upgrading to AFS 3.4a

Section 3.4 explains how to upgrade your cell to AFS 3.4a from AFS 3.3 or 3.3a (throughout the remainder of this chapter, ``AFS 3.3'' refers to both versions 3.3 and 3.3a). It includes the following subsections:
  1. Section 3.4.1, Upgrading the Database Server Processes
  2. Section 3.4.2, Upgrading the non-Database Server Processes
  3. Section 3.4.3, Upgrading the Cache Manager on AFS Client-only Machines
For sites that have already upgraded (fully or partially) to AFS 3.4 Beta or the AFS 3.4 GA, section 3.5 explains how to upgrade to AFS 3.4a.

Section 3.6 explains how to upgrade your cell from AFS 3.2 to AFS 3.4a, and includes the following subsections:

     
  1. Section 3.6.1, Upgrading Servers
  2. Section 3.6.2, Upgrading the Cache Manager on AFS Clients

Section 3.7 explains how to downgrade your cell from AFS 3.4a to AFS 3.3a, and includes the following subsections:
     
  1. Section 3.7.1, Downgrading Servers
  2. Section 3.7.2, Downgrading the Cache Manager on AFS Clients

Note: Transarc provides backward compatibility to only the previous release of AFS. Therefore, except for the incompatibilities described in Section 3.3, AFS 3.4a is compatible with AFS 3.3; however, AFS 3.4a is not compatible with AFS 3.2.

3.1. Warnings About Operating System Upgrades

As Chapter 2 of this document makes clear, upgrading to AFS 3.4a from previous versions of AFS requires you to upgrade the operating system on several system types (for example, DEC machines to Ultrix 4.3, 4.3a, or 4.4; Hewlett-Packard machines to HP-UX 9.0, 9.01, or 9.03; and SGI machines to IRIX 5.2 or 5.3). If you need to upgrade an AFS machine to a new operating system version, you must take several actions to preserve AFS functionality before upgrading the operating system. These actions include:

     
  1. You must unmount the /vicepx partitions on all file server machines to prevent the vendor-supplied fsck program from running on them when you reboot the machine during installation of the new operating system. (Make sure that the partitions are not mounted during any reboot until the AFS-modified vfsck program has replaced the standard vendor-supplied fsck program. If the vendor-supplied fsck program runs on a /vicepx partition, AFS data will be lost. A sketch of this procedure follows this list.)
  2. You must protect the AFS-modified versions of the ftpd, inetd, login, rcp, rlogind, rsh, and vfsck commands before upgrading the vendor's operating system, if you are not performing an immediate AFS upgrade. For example, if you are upgrading a system running IRIX 5.2 to IRIX 5.3, the operating system installation overwrites the AFS-modified versions of these commands with standard vendor versions; you must then replace them again with the modified counterparts supplied with AFS 3.4a.
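
A minimal sketch of taking a file server machine's AFS partitions out of service before an operating system upgrade, assuming a single /vicepa partition (the partition name is illustrative):

    # bos shutdown <machine name> fs -wait
    # umount /vicepa

Also comment out or remove the /vicepx entries in the machine's filesystem table (/etc/fstab or the equivalent file for your system type), so the partitions are neither mounted nor checked by the vendor-supplied fsck program at the next reboot.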

3.2. Prerequisites for Upgrading

Ensure the following before beginning any upgrade operations:

     
  1. You have access to the binary files for the AFS 3.4a release (on tape or over the network).
  2. You have a copy of the AFS Installation Guide.
  3. The partition that houses the /usr/afs/bin directory on each server machine has 18 megabytes of disk space on which to store the AFS server binaries.
  4. You can log in to all server and client machines as ``root.''
  5. You are listed in the /usr/afs/etc/UserList file and you can authenticate as a member of the group system:administrators.
  6. You can identify the network addresses of the database server machines.
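
You can verify the authentication-related prerequisites from any AFS machine with a sequence along these lines (the admin user name is illustrative):

    # klog admin
    Password:
    # tokens
    # pts membership system:administrators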

3.3. Notes on Upgrading to AFS 3.4a

AFS 3.4a provides support for multihomed file server machines, which are machines that have multiple network interfaces and IP addresses. A multihomed file server can respond to a client RPC via a different network address than the one initially addressed by the client. By providing multiple paths through which a client's Cache Manager can communicate with it, a multihomed file server can increase the availability of computing resources and improve performance.

AFS 3.4a supports up to 16 addresses per multihomed file server machine. This enhancement requires a change in the way the Volume Location Database (VLDB) represents file server machines. In AFS 3.3 and earlier versions, the VLDB identified file server machines by a single network address. In AFS 3.4a, the VLDB uses a unique host identifier to identify each file server machine. The fileserver process on each file server machine generates this identifier automatically at startup and registers it with the vlserver process, which maintains the VLDB. The identifier contains information about all of the machine's known network addresses and is updated at each restart of the fileserver process. A copy of the identifier is stored in each file server machine's /usr/afs/local/sysid file for possible use by the administrator. However, no administrator intervention is required to generate the identifier or register it in the VLDB. Similarly, no action is required to update the VLDB to the 3.4a format; the vlserver process performs the update automatically the first time an AFS 3.4a fileserver process registers its network addresses.
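
To inspect the addresses that file server machines have registered in the VLDB, you can use the vos listaddrs command (assuming it is included in your AFS 3.4a binaries):

    # vos listaddrs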

Notes: You cannot run the database server processes (that is, servers for the Authentication, Protection, Volume Location, and Backup Databases) on multihomed machines. If you have AFS 3.3 file server and database server processes running currently on the same machine, and you wish to use multihomed support, you must reconfigure these machines and move the database server functionality to another machine.

AFS 3.4a does not support multihomed clients or multihomed database server machines.

Each file server machine's /usr/afs/local/sysid identifier file is unique to it. Take care not to copy a machine's /usr/afs/local/sysid file to any other machine.

If you have already upgraded some machines in your cell to AFS 3.4, you must upgrade to AFS 3.4a in a different order than cells upgrading from AFS 3.3. See section 3.5.

When upgrading to AFS 3.4a, you must
     
  1. reconfigure any multihomed server machines that are currently running file server and database server processes, either by moving the database server processes to another (single-homed) machine or by disabling all but one interface. You cannot run database server processes on multihomed machines.
  2. ensure that no volume manipulation commands or operations that alter the VLDB are in progress when you upgrade the vlserver process to AFS 3.4a. Since only administrators listed in the /usr/afs/etc/UserList file can issue such commands, this requirement is not generally difficult to meet. The vos status command, issued against each file server machine, lists any active volume manipulations (see the example following this list).
  3. install AFS 3.4a vlserver binaries and restart the VL Servers before upgrading the fs process suite (fileserver, volserver, salvager) on any file server machine, including database server machines, to AFS 3.4a.
After you have restarted the VL Servers and the VLDB format has been updated, any VL Servers earlier than AFS 3.4a will return errors. Therefore, you must upgrade all VL Servers before upgrading the fs process suite on any machine.
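
For example, to confirm that a file server machine has no volume manipulations in progress before the upgrade (the machine name is illustrative, and the exact output wording may differ):

    # vos status -server fs1.example.com
    No active transactions on fs1.example.com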

As mentioned previously, the AFS 3.4a vlserver process converts the VLDB to the new format automatically, whereas the conversion from AFS 3.2 to 3.3 required administrators to issue the vldb_convert command to convert the VLDB manually. Downgrading from AFS 3.4a to AFS 3.3a still requires a manual VLDB conversion using the vldb_convert command.

If upgrading from AFS 3.3 or 3.4, you do not need to bring your entire cell down during the upgrade. Restarting the vlserver and other database server processes causes a brief outage. Upgrading the fs process suite requires shutting it down, installing new kernel extensions and rebooting the machine, which interrupts file service; you can upgrade machines in the manner that least disrupts service, either one-by-one or simultaneously. Similarly, upgrading client machines requires installing new kernel extensions and rebooting, and can be done at your convenience.

AFS 3.4a makes the vos changeaddr command obsolete. File server machine addresses are registered automatically with the VL Server each time the File Server restarts.

3.3.1. Notes for AFS Cells Using HP-UX 9.0 and Digital UNIX 2.0

If you plan to upgrade a Hewlett-Packard 9000 Series 800 machine running HP-UX version 9.0 or a DEC AXP machine running Digital version 2.0 to AFS 3.4a, you must upgrade all AFS binaries to the 3.4a version. This includes all AFS file server, database server, and client binaries, as well as the kernel extensions. You must also relink any programs which were linked with libsys.a from a previous version of AFS.

The syscall slot number for AFS has been changed in AFS 3.4 so that AFS and DFS can coexist on the same machine. Previously, this syscall slot number conflicted with DFS.

3.3.2. Converting /vicepx Partitions on DEC AXP File Server Machines

AFS 3.4a Changes 

If, as part of upgrading a DEC AXP file server machine to AFS 3.4a, you choose to upgrade from Digital version 2.0 to version 3 (from alpha_osf20 to alpha_osf30 or alpha_osf32), then you must also convert the data format of the /vicepx partitions. AFS 3.4a includes the fs_conv_osf30 program to perform the conversion from version 2.0 data format to version 3 format (and back, if you choose to downgrade). You can run fs_conv_osf30 either before or after upgrading the operating system, but you must do both while the fs process suite is shut down on the machine, so that no users can change data on the disk during the conversion.

The syntax for the fs_conv_osf30 conversion program follows:

fs_conv_osf30 [convert | unconvert] [-part <AFS partition name or device>+] [-verbose] [-help]

Description:

The fs_conv_osf30 program converts Digital data format in the AFS partitions specified by the -part argument on the file server machine on which the command is run. To convert the data format of all AFS partitions on the file server machine, omit the -part argument.

The fs_conv_osf30 program can perform two conversions:

     
  1. The convert subcommand converts Digital version 2.0 data in an AFS partition to Digital version 3 data format.
  2. The unconvert subcommand converts Digital version 3 data in an AFS partition back to Digital version 2.0 data format.

Arguments:
 
convert
Converts Digital version 2.0 data in an AFS partition to a Digital version 3 data format.
unconvert
Converts Digital version 3 data in an AFS partition to Digital version 2.0 data format.
-part
Specifies the name or names of the partitions to be converted. Omitting this argument converts the Digital data in all AFS partitions on the file server machine.
-verbose
Tells the command to report on what it is doing as it executes.
-help
Prints the online help for this command. Any other options specified along with this one are ignored.
Examples:
  1. The following command converts Digital version 2.0 data in the /vicepa partition to Digital version 3 data format on the file server machine on which the command is run. The -verbose flag is included to report on the status of the command.

    # fs_conv_osf30 convert -part /vicepa -verbose

  2. The following command converts Digital version 3 data in all AFS partitions to Digital version 2.0 data format on the file server machine on which the command is run. The -verbose flag is included to report on the status of the command.

    # fs_conv_osf30 unconvert -verbose

Privilege Required:

The issuer must be ``root'' on the machine on which the command is issued.

3.4. Procedures for Upgrading from AFS 3.3 to AFS 3.4a

Note: If you have already upgraded some machines in your cell to AFS 3.4 Beta or GA, then the instructions in this section are not appropriate for you. See section 3.5 instead.
The following subsections contain instructions for upgrading to AFS 3.4a from AFS 3.3 (as previously mentioned, this also refers to AFS 3.3a). These upgrade instructions require you to have ``root'' permissions. If you have not done so already, you should read the instructions in Sections 3.1 through 3.3, which contain information that you should understand prior to performing the upgrade.

AFS 3.4a features a new version of the VLDB that supports multihomed file servers; see Chapter 14 for additional information on this feature. The first time an AFS 3.4a fileserver process starts in your cell and registers its unique host identifier in the VLDB, the vlserver process automatically converts the VLDB from version 3.3 format to version 3.4a format.

In the instructions that use the bos install and bos restart commands in the following subsections, you may use the -cell, -localauth, and -noauth arguments as appropriate.

You must perform the upgrade steps in this order:

  1. Upgrade the database server processes on all database server machines. See section 3.4.1.
  2. Upgrade the fs process suite and other basic server processes on each server machine at your convenience. Immediate upgrade is not required. If you run client functionality on a server machine, you will also upgrade it at this time. See section 3.4.2.

You may upgrade the Cache Manager on AFS client-only (non-file server) machines at any time during the cell upgrade, even before upgrading the database server processes, if you wish. See section 3.4.3.
Note: As a reminder, you cannot run the database server processes on a multihomed machine. If you plan to make a current database server machine multihomed, then you must first use the bos stop command to stop the database server processes, changing their status in the BosConfig file to NotRun. Then issue the bos delete command on each machine to remove the database server processes completely from the BosConfig file. Remember also to change the CellServDB file on all server and client machines in your cell, and to register the changes with Transarc. If you are running a system control machine, the easiest way to alter CellServDB on all server machines is to issue the bos delhost command against the system control machine, which will propagate the changes.
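
A sketch of that reconfiguration, assuming a database server machine db3.example.com that is to become multihomed and a system control machine sysctl.example.com (both names are illustrative); the instance names must match those in your BosConfig file:

    # bos stop -server db3.example.com -instance buserver kaserver ptserver vlserver
    # bos delete -server db3.example.com -instance buserver kaserver ptserver vlserver
    # bos delhost -server sysctl.example.com -host db3.example.com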
It is recommended that you install the entire AFS 3.4a binary distribution into a volume for each system type in your AFS filespace (recommended location: /afs/cellname/sysname/usr/afsws), copying it either from the AFS Binary Distribution tape or by network from the Transarc AFS product directory, /afs/transarc.com/product/afs/3.4a/sysname. Then run the bos install command against each binary distribution machine to install the binaries to the local disk location of the existing binaries (by convention, /usr/afs/bin). When you restart the processes using the bos restart command, the BOS Server moves the AFS 3.3 binary to a .bak file after renaming any current .bak file to a .old file.

Refer to the section of the AFS Installation Guide entitled ``Setting Up Volumes to House AFS Binaries'' (in particular, to its subsection entitled ``Loading AFS Binaries into a Volume and Creating a Link to the Local Disk'') for detailed instructions on copying AFS binaries into volumes.

Note the following about upgrading to AFS 3.4a:

     
  1. You must upgrade to the AFS 3.4a vlserver process and VLDB on all database server machines. For the sake of simplicity, the instructions have you upgrade the other database server processes (buserver, kaserver and ptserver) at the same time.
  2. You must complete the vlserver upgrade before upgrading the fs process suite on any server machine, including the database server machines.
  3. You do not need to upgrade your clients when you upgrade your database servers and file servers, but can upgrade them at your convenience.
  4. On a per-machine basis, the conversion procedure for each server takes only the amount of time it takes to reboot the machine. In general, the time required to complete the upgrade depends on the number of servers in your cell.

3.4.1. Upgrading the Database Server Processes

  1. Change directories to your local cell's binary distribution directory or Transarc's product tree. The following example shows the recommended name for your local distribution location:

    # cd /afs/cellname/sysname/usr/afsws/root.server/usr/afs/bin

    where cellname specifies your cell name and sysname specifies the system type name.

  2. Use the bos install command to copy the database server process binaries into the /usr/afs/bin directory on each binary distribution machine in turn:

    # bos install -server binary distribution machine -file buserver kaserver ptserver vlserver -dir /usr/afs/bin

    where binary distribution machine is the name of the binary distribution machine for each system type.

  3. Repeat Steps 1 and 2 on each binary distribution machine. Wait five minutes for the upserver process to distribute the binaries to all server machines of the same system type. If you do not use the binary distribution mechanism, repeat the bos install command manually on each server machine.
  4. After ensuring that the binaries are installed on each of the server machines, use the bos restart command to restart the database server processes, beginning with the database server machine at the lowest network address:

    # bos restart -server database server machine buserver kaserver ptserver vlserver

    where database server machine is the name of each database server machine in turn (remember to start with the machine at the lowest network address).

3.4.2. Upgrading the non-Database Server Processes

After you have upgraded the vlserver and other database server processes on the database server machines, you can proceed to upgrade the fs process suite and other basic server processes (bosserver, runntp, upclient and upserver) at your convenience. The machine cannot serve files for the duration of this upgrade, so you may wish to perform it at the time and in the manner that will disturb your users least.

Remember to perform these steps on your database server machines, too (even if they don't run the fs process suite, you should still upgrade the BOS Server and other basic processes).

  1. Shut down the fs process suite to prevent it from accidentally restarting before you have a chance to load the AFS 3.4a kernel extensions:

    # bos shutdown <machine name> fs -wait

    where <machine name> is the name of the server machine you are upgrading.

  2. Change directories to your local cell's binary distribution directory or Transarc's product tree. The following example shows the recommended name for your local distribution location:

    # cd /afs/cellname/sysname/usr/afsws/root.server/usr/afs/bin

    where cellname specifies your cell name and sysname specifies the system type name.

  3. Use the bos install command to copy the server process binaries into the /usr/afs/bin directory on each binary distribution machine in turn:

    # bos install -server binary distribution machine -file bosserver fileserver runntp salvager upclient upserver volserver -dir /usr/afs/bin

    where binary distribution machine is the name of the binary distribution machine for each system type.

  4. If the machine you are upgrading is system type hp800_ux90 or alpha_osf20, remember to upgrade all AFS binaries at this point; see section 3.3.1 for details. If you are upgrading a DEC AXP machine from Digital UNIX version 2.0 to version 3.0 or 3.2, perform the operating system upgrade at this point, remembering to run the fs_conv_osf30 program too; see section 3.3.2.

  5. Copy the AFS kernel extensions (libafs.o or equivalent) to the local disk directory appropriate for dynamic loading (or kernel building, if you must build a kernel on this system type). If the machine actually runs client functionality (a Cache Manager), also copy the afsd binary to the local /usr/vice/etc directory. The following example command shows the recommended name for your local binary storage directory:

    # cp -r /afs/cellname/sysname/usr/afsws/root.client/usr/vice/etc /usr/vice/etc

    where cellname specifies your cell name and sysname specifies the system type name.

    For specifics on installing the files needed for dynamic loading or kernel building, consult the ``Getting Started'' section for this system type in chapter 2 of the AFS Installation Guide.

  6. Wait until five minutes have passed since you finished Step 3, to allow the (current) upserver process on each binary distribution machine to distribute the server binaries to all server machines of the same system type. If you do not use the binary distribution mechanism, repeat Step 3's bos install command manually on each server machine.
  7. Reboot the server machine. Assuming that the machine's initialization file includes the bosserver command, as recommended in the AFS Installation Guide, the BOS Server starts and then starts the other AFS server processes.
  8. Once you are satisfied that your cell is running smoothly at AFS 3.4a, there is no need to retain the pre-AFS 3.4a versions of the server binaries in the /usr/afs/bin directory (you can always use bos install to reinstall them if it becomes necessary to downgrade). To reclaim the disk space occupied in the /usr/afs/bin directory by .bak and .old files, you can use the following command:

    # bos prune -server file server machine -bak -old

    where file server machine is the name of the machine on which you wish to remove .old and .bak versions of AFS binaries.

3.4.3. Upgrading the Cache Manager on AFS Client-only Machines

The following instructions assume an AFS client is to be upgraded to full AFS 3.4a functionality. Omit these steps if the AFS client will continue to use AFS 3.3 or AFS 3.3a software.
  1. Copy the AFS kernel extensions (libafs.o or equivalent) to the local disk directory appropriate for dynamic loading (or kernel building, if you must build a kernel on this system type). Also copy the afsd binary file to the local /usr/vice/etc directory. The following example command shows the recommended name for your local binary storage directory:

    # cp -r /afs/cellname/sysname/usr/afsws/root.client/usr/vice/etc /usr/vice/etc

    where cellname specifies your cell name and sysname specifies the system type name.

    For specifics on installing the files needed for dynamic loading or kernel building, consult the ``Getting Started'' section for this system type in chapter 4 of the AFS Installation Guide. Chapter 13 of these release notes provides information on using the afsd command to configure the Cache Manager.

  2. You should use AFS 3.4a command binaries (login, tokens, the fs suite, the vos suite, and so on) on machines running the AFS 3.4a Cache Manager. Make sure that all AFS binary locations, such as /usr/vice/etc and /usr/afsws, are upgraded to AFS 3.4a command binaries. If necessary, load them into the appropriate local disk directories.
  3. Reboot the client machine.

3.5. Procedures for Upgrading from AFS 3.4 to AFS 3.4a 

This section explains how to upgrade to AFS 3.4a from an earlier version of AFS 3.4 (either AFS 3.4 Beta or the original AFS 3.4 General Availability release; in the remainder of this section, ``AFS 3.4'' will refer to both Beta and the original GA). These upgrade instructions require you to have ``root'' permissions. If you have not done so already, you should read Sections 3.1 through 3.3.

If you are already running AFS 3.4 on any machines in your cell, the order in which you upgrade the various types of machines differs from that for cells still running only AFS 3.2 or 3.3. Use the following steps to guide your upgrade to AFS 3.4a. You must perform the steps in the order indicated.

  1. If any client machines in your cell are running AFS 3.4, upgrade all of them to AFS 3.4a before you upgrade any server processes. Follow the instructions in section 3.4.3.
  2. If any server machines in your cell are running the AFS 3.4 version of the fs process suite, upgrade that process on all such machines next, before upgrading the vlserver and other database server processes. If you run client functionality on a server machine, you will also upgrade it at this time. Follow the instructions in section 3.4.2.
  3. Upgrade the database server processes on all of your cell's database server machines to AFS 3.4a. Follow the instructions in section 3.4.1.
  4. If any file server machines in your cell are still running the AFS 3.2 or 3.3 version of the fs process suite and other basic server processes, you may upgrade those processes at any point after step 3; immediate upgrade is not required. Follow the instructions in section 3.4.2 for AFS 3.3 or section 3.6.1 for AFS 3.2.
  5. If any client machines in your cell are still running AFS 3.2 or 3.3, you may upgrade them at any point after step 2; immediate upgrade is not required. Follow the instructions in section 3.4.3 for AFS 3.3 or section 3.6.2 for AFS 3.2.
  6. Once you are satisfied that your cell is running smoothly at AFS 3.4a, there is no need to retain the pre-AFS 3.4a versions of the server binaries in the /usr/afs/bin directory (you can always use bos install to reinstall them if it becomes necessary to downgrade). To reclaim the disk space occupied in the /usr/afs/bin directory by .bak and .old files, you can use the following command:

    # bos prune -server file server machine -bak -old

    where file server machine is the name of the machine on which you wish to remove .old and .bak versions of AFS binaries.

3.6. Procedures for Upgrading from AFS 3.2 to AFS 3.4a

The following subsections contain instructions for upgrading to AFS 3.4a from AFS 3.2, 3.2a, or 3.2b. If you have not done so already, you should read Sections 3.1 through 3.3, which contain information that should be understood prior to performing the upgrade.

3.6.1. Upgrading Servers

The following instructions assume that all file servers and database servers are to be upgraded to full AFS 3.4a server functionality. Consider the following before upgrading your cell's servers:
     
  1. You must convert all database servers to use the AFS 3.4a VL Server and VLDB.
  2. Transarc recommends upgrading your AFS 3.2 Cache Managers when you upgrade your servers to AFS 3.4a.
  3. Because your entire cell is unavailable during the VLDB upgrade procedure, you should perform the conversion during periods of low system usage or regular network maintenance (such as overnight or over a weekend). On a per-machine basis, the conversion procedure takes only the amount of time it takes to reboot the machine. However, VLDB conversion adds a few extra minutes to the procedure. In general, the time required to complete the upgrade depends on the number of servers in your cell.

  1. On each server machine that runs the fs process suite, issue the bos shutdown command to shut it down:

    # bos shutdown -server file server machine -instance fs

    where file server machine is the name of the file server machine on which the fs process suite is to be shut down.

  2. On each database server machine, issue the following bos shutdown command to shut down the database server and Update Server processes:

    # bos shutdown -server database server machine -instance vlserver kaserver ptserver buserver upclient upclientbin upclientetc upserver

    where database server machine is the name of the database server machine on which the vlserver, kaserver, ptserver, buserver, upclient, upclientbin, upclientetc, and upserver processes are to be shut down.

  3. Change directories to your local cell's binary distribution directory or Transarc's product tree. The following example shows the recommended name for your local distribution location:

    # cd /afs/cellname/sysname/usr/afsws/root.server/usr/afs/bin

    where cellname specifies your cell name and sysname specifies the system type name.

  4. Use the bos install command to copy the server process binaries into the /usr/afs/bin directory on each binary distribution machine in turn:

    # bos install -server binary distribution machine -file * -dir /usr/afs/bin

    where binary distribution machine is the name of the binary distribution machine for each system type.

  5. If the machine you are upgrading is system type hp800_ux90 or alpha_osf20, remember to upgrade all AFS binaries at this point; see section 3.3.1 for details. If you are upgrading a DEC AXP machine from Digital UNIX version 2.0 to version 3.0 or 3.2, perform the operating system upgrade at this point, remembering to run the fs_conv_osf30 program too; see section 3.3.2.

  6. Wait five minutes for the upserver process on each binary distribution machine to distribute the binaries to all server machines of its system type. If you do not use the binary distribution mechanism, repeat the bos install command manually on each server machine.
  7. On the database server machine with the lowest network address, copy the vldb.DB0 (database) file, preferably to a different file system. If you copy the database to a directory in the same file system as /usr/afs/db, make sure there are still 18 megabytes of free disk space to accommodate the conversion process. Copy the file as follows:

    # cp /usr/afs/db/vldb.DB0 pathname

    where pathname is the name of the directory to which the database file is to be copied.

  8. On the database server machine with the lowest network address, issue the vldb_convert command to convert the database to version 3 format. (The binary for this command should be in the /etc subdirectory of the temporary storage area of the local disk.) You cannot convert the VLDB from version 2 to version 4 with a single command; you must first convert the VLDB to version 3 format as shown in this step, and the VLDB conversion from version 3 to version 4 is then automatic. The following command completes the conversion in a few minutes:

    # vldb_convert -to 3 -from 2

    Note: You can verify the success of the conversion by running the vldb_convert command with the -showversion flag, as shown below.
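
    For example:

    # vldb_convert -showversion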
  9. On each server machine, copy the AFS kernel extensions (libafs.o or equivalent) to the local disk directory appropriate for dynamic loading (or kernel building, if you must build a kernel on this system type). If the machine actually runs client functionality (a Cache Manager), also copy the afsd binary to the local /usr/vice/etc directory. The following example command shows the recommended name for your local binary storage directory:

    # cp -r /afs/cellname/sysname/usr/afsws/root.client/usr/vice/etc /usr/vice/etc

    where cellname specifies your cell name and sysname specifies the system type name.

    For specifics on installing the files needed for dynamic loading or kernel building, consult the ``Getting Started'' section for this system type in chapter 2 of the AFS Installation Guide.

  10. Reboot the database server machine with the lowest network address.
  11. Reboot each remaining database server machine.
  12. Reboot each file server machine.
  13. Once you are satisfied that your cell is running smoothly at AFS 3.4a, there is no need to retain the pre-AFS 3.4a versions of the server binaries in the /usr/afs/bin directory (you can always use bos install to reinstall them if it becomes necessary to downgrade). To reclaim the disk space occupied in the /usr/afs/bin directory by .bak and .old files, you can use the following command:

    # bos prune -server file server machine -bak -old

    where file server machine is the name of the machine on which you wish to remove .old and .bak versions of AFS binaries.

3.6.2. Upgrading the Cache Manager on AFS Clients

The following instructions assume an AFS client is to be upgraded to full AFS 3.4a functionality. Skip these steps if the AFS client will continue to use AFS 3.2 software (though this is not recommended).
  1. Copy the afsd binary file and AFS kernel extensions (libafs.o or equivalent) to the local disk directory appropriate for dynamic loading (or kernel building, if you must build a kernel on this system type). For specifics, consult the ``Getting Started'' section for this system type in chapter 4 of the AFS Installation Guide. The following example for dynamic loading shows the recommended name for your local distribution location:

    # cp -r /afs/<cellname>/<sysname>/usr/afsws/root.client/usr/vice/etc /usr/vice/etc

    Chapter 13 of these release notes provides information on using the afsd command to configure the Cache Manager.

  2. You should use AFS 3.4a command binaries (login, tokens, the fs suite, the vos suite, and so on) on machines running the AFS 3.4a Cache Manager. Make sure that all AFS binary locations, such as /usr/vice/etc and /usr/afsws, are upgraded to AFS 3.4a command binaries. If necessary, load them into the appropriate local disk directories.
  3. Reboot the client machine.

3.7. Procedures for Downgrading from AFS 3.4a to AFS 3.3a

The following subsections contain instructions for downgrading from AFS 3.4a to AFS 3.3a. If you have not done so already, you should read Sections 3.1 through 3.3, which contain information that should be understood prior to performing the downgrade.

The following instructions assume that all file server and database server machines are to be downgraded to full AFS 3.3a server functionality. The instructions indicate steps that can be omitted in certain cases. Consider the following before downgrading your cell's server machines:

     
  1. You must convert all database server machines to use the AFS 3.3a VL server and VLDB.
  2. You do not need to downgrade your client machines when you downgrade your server machines.
  3. Because your entire cell is unavailable during the VLDB downgrade procedure, you should perform the conversion during periods of low system usage or regular network maintenance (such as overnight or over a weekend). On a per-machine basis, the conversion procedure takes only the amount of time it takes to reboot the machine. However, conversion of the VLDB adds a few extra minutes to the procedure. In general, the time required to complete the downgrade depends on the number of server machines in your cell.

3.7.1. Downgrading Servers

Perform the following steps to downgrade servers from AFS 3.4a to AFS 3.3a:
  1. On each server machine that runs the fs process suite, issue the bos shutdown command to shut it down:

    # bos shutdown -server file server machine -instance fs

    where file server machine is the name of the file server machine on which the fs process suite is to be shut down.

  2. On each database server machine, issue the bos shutdown command to shut down the database server and Update Server processes:

    # bos shutdown -server database server machine -instance vlserver kaserver ptserver buserver upclient upclientbin upclientetc upserver

    where database server machine is the name of the database server machine on which the vlserver, kaserver, ptserver, buserver, upclient, upclientbin, upclientetc, and upserver processes are to be shut down.

  3. Change directories to the local directory where you store AFS 3.3a binaries, or to the /afs/transarc.com/product/afs/3.3a/sysname directory in Transarc's product tree.
  4. Use the bos install command to copy the AFS 3.3a server process binaries into the /usr/afs/bin directory on each binary distribution machine in turn:

    # bos install -server binary distribution machine -file * -dir /usr/afs/bin

    where binary distribution machine is the name of the binary distribution machine for each system type.

  5. Wait five minutes for the upserver process on each binary distribution machine to distribute the binaries to all server machines of its system type. If you do not use the binary distribution mechanism, repeat the bos install command manually on each server machine.
  6. On the database server machine with the lowest network address, copy the vldb.DB0 (database) file, preferably to a different file system. If you copy the database to a directory in the same file system as /usr/afs/db, make sure there are still 18 megabytes of free disk space to accommodate the conversion process. Copy the file as follows:

    # cp /usr/afs/db/vldb.DB0 pathname

    where pathname is the name of the directory to which the database file is to be copied. Copying the vldb.DB0 file to a different directory is strongly recommended because the conversion utility concludes by removing the old version of the VLDB.

  7. On the database server machine with the lowest network address, issue the vldb_convert command to convert the database back to AFS 3.3 format. (The binary for this command should be in the /etc subdirectory of the temporary storage area of the local disk.) The command takes no more than a few seconds to complete the conversion:

    # vldb_convert -to 3 -from 4

    Note: You can verify the success of the conversion by running the vldb_convert command with the -showversion flag.

  8. On each server machine, copy the AFS kernel extensions (libafs.o or equivalent) to the local disk directory appropriate for dynamic loading (or kernel building, if you must build a kernel on this system type). If the machine actually runs client functionality (a Cache Manager), also copy the afsd binary to the local /usr/vice/etc directory. The following example command shows the recommended name for your local binary storage directory:

    # cp -r /afs/cellname/sysname/usr/afsws/root.client/usr/vice/etc /usr/vice/etc

    where cellname specifies your cell name and sysname specifies the system type name.

    For specifics on installing the files needed for dynamic loading or kernel building, consult the ``Getting Started'' section for this system type in chapter 2 of the AFS Installation Guide.

  9. Reboot the database server machine with the lowest network address.
  10. Reboot each remaining database server machine.
  11. Reboot each file server machine.

3.7.2. Downgrading the Cache Manager on AFS Clients

The following instructions assume an AFS client is to be downgraded to full AFS 3.3a functionality. Skip this section if the client will continue to use AFS 3.4a software.
  1. Copy the afsd binary file to /usr/vice/etc, and copy the AFS kernel extensions (libafs.o or equivalent) to the local disk directory appropriate for dynamic loading (or kernel building, if you must build a kernel on this system type). For specifics, consult the ``Getting Started'' section for this system type in chapter 4 of the AFS Installation Guide. The following example for dynamic loading shows the recommended name for your local distribution location:

    # cp -r /afs/cellname/sysname/usr/afsws/root.client/usr/vice/etc /usr/vice/etc

    Chapter 13 of these release notes provides information on using the afsd command to configure the Cache Manager.

  2. You should use AFS 3.3a command binaries (login, tokens, the fs suite, the vos suite, and so on) on machines running the AFS 3.3a Cache Manager. Make sure that all AFS binary locations, such as /usr/vice/etc and /usr/afsws, contain AFS 3.3a command binaries. If necessary, load them into the appropriate local disk directories.
  3. Reboot the client machine.

4. Authentication

This chapter describes changes to AFS authentication and login programs for version 3.4a. AFS 3.4a contains changes to the following:

     
  1. Kerberos support for the kaserver process
  2. AFS login program
  3. AIX login program
  4. Digital login program
  5. Solaris login program

These changes are marked with the heading ``AFS 3.4a Changes.''

This chapter also contains changes from the AFS 3.3 release that have not been incorporated into the full AFS documentation set. These changes are marked with the heading ``AFS 3.3 Changes.''

4.1. Kerberos Support for the kaserver Process

AFS 3.4a Changes

In AFS 3.4a, the AFS kaserver Authentication Server has improved compatibility with MIT's Kerberos version 4 and 5 clients. Specifically, the kaserver now listens for MIT Kerberos-format requests on UDP port 88, in addition to UDP port 750. When those requests result in an error, the kaserver now reports the error using the proper MIT error codes.
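
To confirm that the kaserver is listening on both ports on a database server machine, a check along the following lines may help (netstat output formats vary by system type):

    # netstat -an | grep udp | egrep '\.88 |\.750 '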

4.2. Changes to the AFS login Program

AFS 3.4a Changes 

In AFS 3.4a, the login program logs a message when a login attempt fails.

4.3. Changes to the AIX login Program

AFS 3.4a Changes 

AFS 3.4a contains the following changes to the AIX login program:

     
  1. Support for the # and ! character entries in the /etc/passwd file
  2. Support for alternative authentication programs with AIX 4.1. Transarc supplies the afs_dynamic_auth alternative authentication program with AFS 3.4a for use with AIX 4.1.

4.3.1. Support for # and ! Entries in the /etc/passwd File

AFS 3.4a supports the pound sign (#) character in the local /etc/passwd file on AIX machines. The # character indicates that the login program goes directly to the AFS Authentication Database to check authentication and skips AIX local authentication and AIX secondary authentication. It is recommended that you include the standard AIX exclamation point (!) character as an entry in the /etc/passwd file. The ! character entry indicates that the login program checks for any AIX secondary authentication.
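
For example, hypothetical /etc/passwd entries for a user who authenticates only through AFS and for a user whose entry also allows AIX secondary authentication:

    afsonly:#:1205:100:AFS-only user:/home/afsonly:/bin/ksh
    afsuser:!:1206:100:AFS user:/home/afsuser:/bin/ksh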

4.3.2. Support for Alternative Authentication Programs with AIX 4.1

Transarc does not supply a replacement login program for AIX 4.1 as is provided for AIX 3.2. Instead, Transarc supplies an external alternative authentication program that is called by the AIX 4.1 login process. To take advantage of this authentication program provided with AFS 3.4a, you must make the following configuration changes to AIX 4.1 on the local client machine. First, ensure that the afs_dynamic_auth program is installed in the /usr/vice/etc directory on the local client machine.
  1. The registry variable defines the domain in which users are administered. In the /etc/security/user file on the local client machine running the AIX 4.1 operating system, set the registry variable for default users to DCE: 

    default:     registry = DCE

    Note: You must use DCE for the registry variable. AFS is not a valid registry variable in AIX 4.1.
    Note: In the /etc/security/user file on the local client machine, set the registry variable of the user ``root'' to files, that is,
    root:     registry = files
    The value files designates that user ``root'' can authenticate using the local password files on the local machine only.
  2. In the /etc/security/user file on the local client machine running AIX 4.1: 
      If the machine is an AFS client only, set the SYSTEM variable for default users to 

      default:     SYSTEM = "AFS OR (AFS[UNAVAIL] AND compat[SUCCESS])"

      If the machine is both an AFS client and a DCE client, set the SYSTEM variable to 

      default:     SYSTEM = "DCE OR DCE[UNAVAIL] OR AFS OR AFS[UNAVAIL] AND compat[SUCCESS]"

  3. In the /etc/security/login.cfg file on the local client machine running AIX 4.1, identify the DCE authentication method with the following: 

    DCE:     program = /usr/vice/etc/afs_dynamic_auth

  4. In the /etc/security/login.cfg file on the local client machine running AIX 4.1, identify the AFS token with the following: 

    AFS:     program = /usr/vice/etc/afs_dynamic_auth

Note: If you are using the afs_dynamic_kerbauth alternative authentication program with AIX 4.1, AFS does not set the KRBTKFILE environment variable.
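Taken together, the changes above might leave the two files looking like the following sketch for a machine that is an AFS client only; the stanza layout follows standard AIX conventions, and the excerpts are illustrative rather than complete files:

* /etc/security/user (excerpt)
default:
        registry = DCE
        SYSTEM = "AFS OR (AFS[UNAVAIL] AND compat[SUCCESS])"

root:
        registry = files

* /etc/security/login.cfg (excerpt)
DCE:
        program = /usr/vice/etc/afs_dynamic_auth

AFS:
        program = /usr/vice/etc/afs_dynamic_auth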

4.3.3. Support for Secondary Authentication on AIX 3.2

AFS 3.3 Changes 

The 3.3 version of the AFS login program supports secondary authentication on the AIX 3.2 operating system. In addition, the AFS login program now checks the entered password against the local password first; if local authentication succeeds, it prompts for only one password, and if not, it prompts for both passwords. If the password entry (in the /etc/security/passwd file) is an asterisk (*), local authentication is skipped.

4.4. Changes to the Digital UNIX (formerly DEC OSF/1) login Program

Back to Table of Contents

AFS 3.4a Changes 

Several changes have been made to the version of the login program distributed for Digital system types, including the following:

  1. After you enter your password at the login: prompt, the login program checks for authentication to the local file system on the local machine. If you are authenticated to the local file system, the login program then checks for authentication to AFS via the Authentication Server. If this attempt to authenticate to AFS fails, the login program returns a prompt:

    Enter AFS password:
    If this second AFS authentication attempt fails, you are authenticated to the local file system without AFS authentication, because you successfully authenticated to the local file system. After five failed login attempts, the session aborts.

Process authentication group (PAG) support is identical to that on other system types.

4.5. Limitations for SGI Passwords

Back to Table of Contents

AFS 3.4a Changes 

The SGI login program imposes an 8-character limitation on passwords. Be aware that, when you use the integrated login program, SGI truncates the AFS password after the first 8 characters.

4.6. Changes to the Solaris login Program

Back to Table of Contents

AFS 3.4a Changes 

For Solaris environments, AFS supports the existence of an /etc/default/login file. In this file, you can set the following variables:

     
  1. CONSOLE - Allows only ``root'' to log in from the specified terminal or console. 
  2. ALTSHELL - Specifies an alternative shell (for ``root'' only). 
  3. PASSREQ - Specifies that a password is required before permitting users access to the local system. 
  4. TIMEZONE - Specifies a time zone that overrides the default time zone. 
  5. HZ - Overrides the default setting of the timer-related HZ value. 
  6. PATH - Uses the path specified by this variable instead of the default. 
  7. SUPATH - Uses the path specified by this variable for ``root'' instead of the default. 
  8. ULIMIT - Sets a limit on the file size of a process controlled by the user. 
  9. TIMEOUT - Specifies how long to wait for a correct password before closing the connection. 
  10. UMASK - Specifies the default file creation mask to use for the login session (the default value is 022). 
Note: AFS 3.4a does not support the SLEEPTIME and IDLEWEEKS variables. 
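For example, a minimal /etc/default/login file that requires passwords, restricts ``root'' logins to the console, and sets the default file creation mask might look like the following; the values shown are illustrative only:

CONSOLE=/dev/console
PASSREQ=YES
TIMEOUT=60
UMASK=022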

5. The Backup System

Back to Table of Contents

This chapter describes changes to the AFS 3.4a Backup System, specifically, the Tape Coordinator and the backup command suite. In particular, AFS 3.4a contains two new commands, backup volsetrestore and backup interactive, and the following enhancements:

     
  1. An optional backup configuration file, /usr/afs/backup/CFG_<tape_device>
  2. Improved error messages and error handling
  3. Permanent tape names
  4. Improved tape scanning via the Tape Coordinator
  5. Improved Backup System prompting
  6. The backup labeltape command (enhanced)
  7. The backup readlabel command (enhanced)
  8. The backup scantape command (enhanced)
  9. The -cell argument and -localauth flag on all backup commands
These changes are marked with the heading ``AFS 3.4a Changes.''

5.1. A New Backup Configuration File for the Tape Coordinator

Back to Table of Contents

AFS 3.4a Changes 

The AFS 3.4a Backup System supports a new user-defined configuration file that allows you to automate tape operations with tape stackers and jukebox devices. Upon startup, the butc command reads the backup configuration file, /usr/afs/backup/CFG_<tape_device>, and configures the Tape Coordinator according to the parameters defined in the file. You can configure the Tape Coordinator to call executable routines that suppress operator prompts and handle changing tapes within a tape stacker or jukebox device by setting the MOUNT and UNMOUNT parameters in the CFG_<tape_device> file.

You can also use the CFG_<tape_device> file to automate operations on other types of tape devices or to files on a disk device. For example, you can automate the backup dump process and dump to a file (up to 2 GB) on a disk drive, instead of a tape drive, by configuring the FILE parameter. You can also cancel automatic querying for tapes on a tape device by configuring the AUTOQUERY parameter and turn off name checking of tapes on a tape device by configuring the NAME_CHECK parameter.

The CFG_<tape_device> file does not replace the /usr/afs/backup/tapeconfig file; the butc process still requires the tape device information stored in that file.

5.1.1. Creating a User-Defined Configuration File

Automated backup equipment, such as stackers and jukeboxes, can automatically switch tapes during a backup dump operation. Jukeboxes can also automatically fetch the proper tapes for a backup restore operation. To handle the varying requirements of automated backup equipment, the user-defined configuration file can be set up to call executable routines that you create to operate your backup equipment. Through this configuration file, you can select the level of automation you want the Tape Coordinator to use.

Each backup device on a Tape Coordinator machine can have its own user-defined configuration file. The file must reside in the /usr/afs/backup directory and it must have a name of the form CFG_<tape_device>, where <tape_device> is a variable part of the file name that specifies the relevant device (jukebox or stacker). A separate file is required for each backup device.

When starting a Tape Coordinator, the butc program reads the CFG_<tape_device> file and configures the Tape Coordinator based on the parameter settings it finds in the file. The configuration file parameters are the following:

 
MOUNT
Can name an executable file that contains a compiled program or script. The file can mount an automated backup device, such as a stacker or jukebox, and is executed instead of prompting for a tape.
UNMOUNT
Can name an executable file that contains a compiled program or script that performs tape unmount operations for an automated backup device.
ASK
Can force all Backup System prompts to accept the default answers rather than query the operator. This does not affect the prompt to mount tapes. This parameter is useful for fully automating the backup process.
AUTOQUERY
Can disable the Tape Coordinator prompt (or MOUNT request) to mount the first tape. This parameter is also useful for fully automating the backup process.
NAME_CHECK
Can prevent the Backup System from checking tape names.
BUFFERSIZE
Can allocate memory to increase performance of dump and restore operations with the Backup System.
FILE
Can direct the dump to tape or to a specified file.
The following sections define each of the parameters in detail. Section 5.1.2 contains annotated, sample scripts that illustrate typical routines to control automated backup equipment.

5.1.1.1. The MOUNT Parameter

The MOUNT parameter provides a mechanism to load a tape through an automated backup device. The MOUNT parameter takes a pathname as an argument:

MOUNT <filename>

where <filename> is the name of the file that contains the executable routine.

If you want the Backup System to support a tape stacker or jukebox device, you can write an executable routine and put it in this file to perform the tape mount operations for the device. By default, the Backup System prompts the operator to mount a tape before opening the tape device file.

Prior to opening the tape device, the Tape Coordinator checks for the MOUNT parameter in the CFG_<tape_device> configuration file. The file named by the MOUNT parameter contains an administrator-written script or program that mounts the tape. When the Tape Coordinator locates the MOUNT parameter, it executes the named file instead of prompting the operator to mount the tape. The executable routine runs with administrator rights. The following information is passed from the Tape Coordinator to the executable routine via command line arguments:

  1. The tape device pathname (as specified in the /usr/afs/backup/tapeconfig file)
  2. The tape operation, passed as one of the following strings corresponding to the backup operation in progress:

    1. appenddump (for an appended dump)
    2. dump (for a dump operation)
    3. labeltape
    4. readlabel
    5. restore
    6. restoredb
    7. savedb
    8. scantape
  3. The number of times this tape has been requested. If an error occurs when opening the tape device, this value is incremented by 1 and the executable routine specified by the MOUNT parameter is called again.
  4. The tape name. If no tape name is specified, none is passed to the executable routine.
  5. The tape ID. This is a unique identification code assigned by the Backup System. If no tape ID is specified, none is passed to the executable routine.
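Putting these arguments together, the Tape Coordinator's invocation of the MOUNT routine is equivalent to a command line like the following sketch, in which the pathnames, tape name, and tape ID are hypothetical:

/usr/afs/backup/stacker0.1 /dev/stacker0.1 dump 1 guests.monthly.3 78893700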
If you do not specify the MOUNT parameter, the Backup System prompts the operator to mount a tape. You can use the AUTOQUERY parameter to prevent the Backup System from requesting the first tape (via the MOUNT script or a prompt).

If the executable routine returns an exit code of 0, the Tape Coordinator operation continues. If the executable routine returns an exit code of 1, the Tape Coordinator operation aborts. If any other exit code is returned by the routine, it causes the Tape Coordinator to prompt the operator for the correct tape at the Tape Coordinator window.

Note: If the MOUNT operation does not close with a 0 status, the Tape Coordinator will not call the UNMOUNT operation.

5.1.1.2. UNMOUNT Parameter

The UNMOUNT parameter specifies a file that contains an administrator-written executable script or program. In this case, the executable routine removes a tape from an automated backup device. If you want the Backup System to support a tape stacker or jukebox device, you can write an executable routine in this file to perform the tape unmount operations for the device. The UNMOUNT parameter takes a pathname as an argument:

UNMOUNT <filename>

where <filename> is the name of the file that contains the executable routine for use with a tape stacker or jukebox device.

After closing a tape device, the Tape Coordinator executes the routine in the file specified by the UNMOUNT parameter (whether the close operation succeeds or fails); the routine is called only once. The routine specified by the UNMOUNT parameter removes a tape from the tape device. The Backup System passes the following information to the executable routine from the Tape Coordinator:

  1. The tape device pathname (as specified in the /usr/afs/backup/tapeconfig file).
  2. The tape operation. The only valid tape operation for use with the UNMOUNT parameter is unmount.
If the UNMOUNT parameter is not supplied, the Tape Coordinator takes no action after closing the tape device.
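The corresponding UNMOUNT invocation is therefore equivalent to a command line like the following sketch, in which the pathnames are hypothetical:

/usr/afs/backup/stacker0.1 /dev/stacker0.1 unmount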

5.1.1.3. ASK Parameter

The ASK parameter determines whether the Backup System asks the tape operator how to respond to error conditions (other than the request to mount a tape) or assumes the default answers. The format for this parameter in the CFG_<tape_device> configuration file is

ASK { YES | NO }

There are two valid arguments for the ASK parameter:

  1. The YES argument specifies that the Tape Coordinator prompts the operator for a response in the following error cases. YES is the default value for the ASK parameter.
  2. The NO argument specifies that the Tape Coordinator assumes default responses for the specified backup operation in the following error cases.
The error cases for the ASK parameter are:
  1. A backup dump operation failed to dump a volume. The YES argument causes the Backup System to ask the operator if it should retry the volume, omit the volume, or abort the backup dump operation. The NO argument proceeds with the backup dump operation but omits the volume from the dump.
  2. A backup restore operation failed to restore a volume. The YES argument causes the Backup System to ask if the operator wishes to continue the backup restore operation. The NO argument continues the backup restore operation and restores the remaining volumes.
  3. A backup scantape operation cannot determine if there is a next tape in the dump set. The YES argument causes the Backup System to ask if there are more tapes to be dumped. The NO argument assumes there are more tapes.
  4. A backup labeltape operation is attempting to label a non-expired tape. The YES argument causes the Backup System to ask if the backup labeltape operation should proceed. The NO argument does not label the tape.

5.1.1.4. AUTOQUERY Parameter

The AUTOQUERY parameter determines whether to disable the Tape Coordinator's initial prompt or MOUNT script execution for tape insertion when executing backup commands involving a tape device. Use the AUTOQUERY parameter in conjunction with the ASK parameter to disable all prompting from the Backup System. The format for this parameter in the CFG_<tape_device> configuration file is

AUTOQUERY { YES | NO }

There are two valid arguments for the AUTOQUERY parameter:

  1. The YES argument requests the first tape of a dump set. YES is the default value for the AUTOQUERY parameter.
  2. The NO argument does not request the first tape of a dump set, but assumes it is already mounted in the tape device. A NO argument for the AUTOQUERY parameter is similar to the -noautoquery flag for the butc command.
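For example, placing the line AUTOQUERY NO in the CFG_<tape_device> file should have much the same effect as starting the Tape Coordinator as follows; the port offset shown is illustrative:

% butc -port 0 -noautoquery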

5.1.1.5. NAME_CHECK Parameter

The NAME_CHECK parameter determines whether the Backup System should check tape names. Disabling tape name checking is useful for recycling tapes without first relabeling them. The format for this parameter in the CFG_<tape_device> configuration file is

NAME_CHECK { YES | NO }

There are two valid arguments for the NAME_CHECK parameter:

  1. The YES argument enables tape name checking. When dumping a volume, the Tape Coordinator verifies that the tape name is NULL or the same tape name as the specified dump. YES is the default value for the NAME_CHECK parameter.
  2. The NO argument disables tape name checking. Any expired tape is acceptable.

5.1.1.6. BUFFERSIZE Parameter

The BUFFERSIZE parameter allows the allocation of memory to increase the performance of dump and restore operations with the Backup System. The format for this parameter in the CFG_<tape_device> configuration file is

BUFFERSIZE <size>

where <size> specifies the memory allocation for backup dump and backup restore operations. By default, <size> is specified in bytes. If you wish to use a different unit of measure, you can specify kilobytes (for example, 10k) or megabytes (for example, 1m) when you specify the size.

For backup dump operations, volumes are read into the memory buffer and then written out to tape, in contrast to the normal operation of going from disk to the tape drive at a slower rate. This allows faster transfers of volume data from a file server to the Tape Coordinator machine and faster transfers (streaming of the tape drive) from memory to the tape drive. A buffer size of 1 tape block (16 KB) is the default for the parameter for a backup dump operation.

For backup restore operations, volumes are read from tape into the memory buffer and then written out to the File Server. This allows faster transfers of volume data from the Tape Coordinator machine to a File Server and faster reads (streaming of the tape drive) from the tape drive into memory. A buffer size of 2 tape blocks (32 KB) is the default for the parameter for a backup restore operation.
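For example, to give the Tape Coordinator a 16-megabyte buffer for dump and restore operations, you might add the following line to the CFG_<tape_device> file:

BUFFERSIZE 16m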

5.1.1.7. FILE Parameter

The FILE parameter specifies whether backup dump and backup restore operations are written to or read from a tape device or a file. The format for this parameter in the CFG_<tape_device> configuration file is

FILE { YES | NO }

The FILE parameter has two valid arguments:

  1. The YES argument specifies that backup dump operations are written to a file and backup restore operations are restored from a file. The pathname specified in the /usr/afs/backup/tapeconfig file is the pathname of the file to which the operation is to be written.
  2. The NO argument specifies that backup dump and backup restore operations use a tape device. NO is the default value for the FILE parameter.
Note the following requirements if you specify the YES argument (dump to a file or restore from a file):
  1. If the Tape Coordinator needs another file to continue an operation (for example, because a disk partition is full), the Tape Coordinator prompts the operator for the next tape but continues to use the pathname in the /usr/afs/backup/tapeconfig file. A good practice is to specify a pathname that is a link to another file name. If you must then provide another file name, you can take advantage of the prompt for a new tape to change the link to a new pathname. You can also change the link by using an executable routine specified with the MOUNT parameter.
  2. Do not specify the YES argument when the /usr/afs/backup/tapeconfig file names a tape device, and do not specify the NO argument when that file names a file. Neither arrangement works; you cannot restore the data if this is done.
  3. When writing to a file, the Backup System breaks the volume data into 16 kilobyte chunks. No tape ioctl system calls are made when dumping to a file. Data is still written in 16 kilobyte blocks; however, the Backup Database records data positions not in terms of filemarks (as it does for data written to tape), but in terms of 16 kilobyte blocks. Positioning to a volume is done directly with a seek system call.

5.1.2. Example of User-Defined Configuration Files

The following example configuration files detail how you might structure configuration files for stackers, jukeboxes, and file dumps. Consider these files as examples and not as recommendations.

There are two general considerations concerning the CFG_<tape_device> files (these considerations are discussed in detail in Section 5.1.1.1):

  1. The Backup System passes the following parameters to the CFG_<tape_device> file:
    1. The tape device pathname
    2. The tape operation
    3. The number of times the tape has been requested
    4. The tape name
    5. The dump ID
  2. The Backup System responds to exit codes from the file in the following ways:
Exit code 0
Continue the backup process.
Exit code 1
Abort the backup process.
Any other exit code
Prompt the operator for the correct tape at the Tape Coordinator window.

5.1.2.1. Example CFG_<tape_device> File for Stackers

The following example /usr/afs/backup/tapeconfig file contains configuration information for the tape stacker stacker0.1.

2G 5K /dev/stacker0.1 0

The following five lines comprise an example of a configuration file for dealing with stacker-type automated backup equipment:

MOUNT /usr/afs/backup/stacker0.1
UNMOUNT /usr/afs/backup/stacker0.1
AUTOQUERY NO
ASK YES
NAME_CHECK NO

This example CFG_<tape_device> file sets the following conditions:

 
MOUNT     /usr/afs/backup/stacker0.1
The Backup System executes the /usr/afs/backup/stacker0.1 file to initialize the stacker.
UNMOUNT     /usr/afs/backup/stacker0.1
The Backup System executes the /usr/afs/backup/stacker0.1 file when it closes a tape device and removes the tape from the stacker.
AUTOQUERY NO
The Backup System does not prompt the operator to mount the first tape.
ASK YES
The Backup System prompts the operator when an error occurs during the backup process.
NAME_CHECK NO
The Backup System does not ensure that the name of the next tape in the "stack" matches the dump set name.
The previous example names /usr/afs/backup/stacker0.1 as the file containing an executable routine that initializes the stacker and loads a tape. An example of such an executable routine follows:
#! /bin/csh -f
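# Command-line arguments supplied by the Tape Coordinator (see Section 5.1.1.1)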
set devicefile = $1
set operation = $2
set tries = $3
set tapename = $4
set tapeid = $5
set exit_continue = 0
set exit_abort = 1
set exit_interactive = 2
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
if (${tries} > 1) then
    echo "Too many tries"
    exit ${exit_interactive}
endif

if (${operation} == "unmount") then
    echo "UnMount: Will leave tape in drive"
    exit ${exit_continue}
endif

if ((${operation} == "dump") |\ 
    (${operation} == "appenddump")|\
    (${operation} == "savedb"))then
    stCmd_NextTape ${devicefile}
    if (${status} != 0) exit ${exit_interactive}
    echo "Will continue"
    exit ${exit_continue}
endif

if ((${operation} == "labeltape") |\
    (${operation} == "readlabel")) then
    echo "Will continue"
    exit ${exit_continue}
endif

echo "Prompt for tape"
exit ${exit_interactive}
This routine makes use of only two of the parameters passed to it by the Backup System: tries and operation. It is a good practice to watch the number of "tries" and exit if the number exceeds 1 (which implies that the stacker is out of tapes). Note that this routine calls the stCmd_NextTape function for backup dump, appended dump, and backup savedb operations; however, your file should call whatever routine is required to load the next tape for your stacker. Also note that the routine sets the appropriate exit code to prompt an operator to load a tape if either the stacker cannot load a tape or a backup restore operation is in process.

5.1.2.2. Example CFG_<tape_device> File for Dump to File

The following example /usr/afs/backup/tapeconfig file contains configuration information for dumping to a file:

1536M 0K /dev/HSM_device 20

The following example CFG_<tape_device> file configures the Backup System to dump directly to a file.

MOUNT /usr/afs/backup/file
FILE YES
ASK NO

This example CFG_<tape_device> file sets the following conditions:

 
MOUNT     /usr/afs/backup/file
The Backup System calls the /usr/afs/backup/file script when butc needs a file. The routine initializes the /dev/HSM_device file.
FILE YES
The Backup System determines that the information should be dumped directly to a file. The pathname for the target dump file is set in the /usr/afs/backup/tapeconfig file.
ASK NO
The Backup System does not prompt the operator when an error occurs during the backup process.
The following routine, contained in the /usr/afs/backup/file file, demonstrates how to configure the Backup System to handle dumps to a file:
#! /bin/csh -f
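# Command-line arguments supplied by the Tape Coordinator (see Section 5.1.1.1)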
set devicefile = $1
set operation = $2
set tries = $3
set tapename = $4
set tapeid = $5
set exit_continue = 0
set exit_abort = 1
set exit_interactive = 2
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

if (${tries} > 1) then
    echo "Too many tries"
    exit ${exit_interactive}
endif

if (${operation} == "labeltape") then
    echo "Won't label a tape/file"
    exit ${exit_abort}
endif

if ((${operation} == "dump")|\ 
    (${operation} == "appenddump")|\
    (${operation} == "restore")|\
    (${operation} == "savedb")|\
    (${operation} == "restoredb")) then
    /bin/rm -f ${devicefile}
    /bin/ln -s /hsm/${tapename}_${tapeid} ${devicefile}
    if (${status} != 0) exit ${exit_abort}
endif

exit ${exit_continue}
As with the stacker routine, this routine makes use of two of the parameters passed to it by the Backup System: tries and operation. The tries parameter monitors the number of attempts to write to or read from a file. If the number of attempts exceeds 1, the Backup System is unable to write to or read from the file specified in the /usr/afs/backup/tapeconfig file. The routine will then exit and return an exit code of 2 (which will cause the Backup System to prompt the operator to load a tape). The operator can use this opportunity to change the name of the file specified in the /usr/afs/backup/tapeconfig file.

The primary function of this routine is to establish a link between the device file and the file to be dumped or restored. The UNIX ln -s command creates the symbolic link between the two files.

A backup dump, backup restore, backup savedb, or backup restoredb operation will link to a new file using the tapename and tapeid parameters to build the file name. The tapename and tapeid parameters are used so that backup restore operations can easily link to the proper file.

5.2. Improved Error Messages and Error Handling

Back to Table of Contents

AFS 3.4a Changes 

AFS 3.4a contains several enhancements to error messages, log messages, and error handling. These include the following:

     
  1. More Backup System error messages were added, containing greater detail.
  2. Backup System and Tape Coordinator status and error messages appearing in the TE_<tape_device> and the TL_<tape_device> log files are more consistent.
  3. All Tape Coordinator (butc) status and error messages are sent to the TL_<tape_device> log file.
  4. Only error messages are sent to the TE_<tape_device> log file.
  5. Error and status messages, depending on their level, are sent to the Tape Coordinator window.
  6. Error and warning messages are differentiated in the TE_<tape_device> log file. Error messages are specified as failures; warning messages are not specified as failures.

5.3. Permanent Tape Names

Back to Table of Contents

AFS 3.4a Changes 

Note: The permanent tape name was set with the -name argument of the backup labeltape command in the AFS 3.4 Beta product. In AFS 3.4a, the permanent tape name is set with the -pname argument.
The AFS 3.4a backup labeltape command allows users to label tapes explicitly with a permanent name. If a user supplies a permanent name for a tape with the backup labeltape command's -pname argument, the Backup System uses the permanent name (tape name) as the tape is re-used or recycled. When a user labels the tape with the backup labeltape command's -pname argument, the command also sets the AFS tape name to NULL. The Backup System uses the permanent name until the user explicitly changes it with the backup labeltape command's -pname argument. It is recommended that permanent tape names be unique since that is the tape name that is recorded in the Backup Database and that is requested on backup restore operations. The permanent name is listed in the tape name field in the output resulting from the backup readlabel command. You should use the -name argument to set the AFS tape name.
Note: If you use the -pname argument to label the tape with a permanent name, you can no longer refer to the tape by its AFS tape name. The Backup System and Backup Database will only recognize the tape's permanent name on commands after labelling the tape using the -pname argument of the backup labeltape command.
If the user does not explicitly name a tape with a permanent name, the Backup System assigns a non-permanent name to the tape as it did in previous AFS versions. The Backup System produces this non-permanent name by concatenating the volume set and dump level with a tape sequence index number (for example, guests.monthly.3). This name is not permanent and changes whenever the tape label is rewritten by a backup command (for example, when using the backup dump, backup labeltape, and backup savedb commands). The AFS-assigned non-permanent name is listed in the AFS tape name field in the output resulting from the backup readlabel command.
Note: In AFS 3.3 or earlier, if a user labeled a tape using the -name argument and used that tape in a tape recycling scheme, the AFS Backup System enforced name checking by requesting that the AFS tape name of the volume to be dumped or restored match the name of the tape in the drive. In AFS 3.4a, if users set the permanent tape name using the -pname argument, any pre-existing AFS tape name on the tape label from AFS 3.3 or earlier is set to NULL and the AFS Backup System cannot verify the tape being used for the dump or restore.
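For example, to assign a tape the permanent name that appears in the backup readlabel output in Section 5.6.4, you might issue a command like the following; the port offset is illustrative:

% backup labeltape -pname monthly.guest.dump -portoffset 0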

5.4. Tape Coordinator Enhancement

Back to Table of Contents

AFS 3.4a Changes 

In AFS 3.4a, the Tape Coordinator allows a scan to begin on any tape of a dump set. In previous versions of AFS, the Tape Coordinator had to start a scan on the first tape of a dump set.

The limitations to a tape scan are:

     
  1. If the first volume on the scanned tape is a continuation from the dump set's previous tape, the volume is not added to the Backup Database.
  2. If the first volume on the scanned tape is part of an appended dump, that volume is not added to the Backup Database. Each volume after the first volume is also not added to the Backup Database until a new appended dump label is read. The backup scantape command must read the dump label of a tape before the Tape Coordinator can add its volume to the Backup Database. 

5.5. Modification to Backup Prompting

Back to Table of Contents

AFS 3.4a Changes 

Previously, the Backup System prompted the user for the name of the tape by sounding a bell and sending a message to the screen. The system reprompted the user for the information every 15 seconds. The repeated messages caused the user's information to scroll off the screen. In AFS 3.4a, the system initially prompts for the tape name by sounding a bell and sending a message to the screen but reprompts every 15 seconds only by sounding a bell. This modification keeps the user's information from scrolling off the screen.

5.6. The backup Command

Back to Table of Contents

This section describes changes to individual commands in the backup command suite for AFS 3.4a.

5.6.1. The -localauth Flag and -cell Argument (New)

AFS 3.4a Changes 

All commands in the backup command suite now support the -localauth flag and the -cell argument.

5.6.1.1. The -localauth Flag

The -localauth flag assigns the backup command and butc process a token that never expires. You need to run a backup command with the -localauth flag from a file server machine as ``root.'' The -localauth flag instructs the backup command interpreter running on the local file server machine to construct a service ticket using the server encryption key with the highest key version number in the /usr/afs/etc/KeyFile file on the local file server machine. The backup command presents the ticket to the Volume and/or Volume Location Server to use in mutual authentication.

This flag is useful only for commands issued on file server machines, since client machines do not have a /usr/afs/etc/KeyFile file. It is intended for cron-type processes or jobs included in the machine's /usr/afs/local/BosConfig file. An example might be a command that automatically runs the backup dump command on certain volumes for archival backups. See the chapter in the AFS System Administrator's Guide for information about backing up the system.

The -localauth flag can also be used if the issuer is unable to authenticate with AFS but is logged into the local file system as ``root.''
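For instance, a cron-type job on a file server machine might run a command like the following sketch as ``root''; the volume set and dump level names are hypothetical:

% backup dump -volumeset user -dump /weekly -localauth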

5.6.1.2. The -cell Argument

The -cell argument specifies the cell in which the Backup System and volumes that are affected by the backup command and butc process reside. The issuer can abbreviate the cell name to the shortest form that distinguishes it from the other cells listed in the /usr/vice/etc/CellServDB file on the local client machine. By default, commands are executed in the local cell, as defined:
     
  1. First, by the value of the environment variable AFSCELL. 
  2. Second, in the /usr/vice/etc/ThisCell file on the client machine on which the command is issued.
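For example, to run the same unattended dump against the Backup System in another cell, you might add the -cell argument, as in the following sketch (the cell name is hypothetical):

% backup dump -volumeset user -dump /weekly -localauth -cell abc.com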

5.6.2. The backup volsetrestore Command (New)

AFS 3.4a Changes 

A new command, backup volsetrestore, has been added to the backup command suite. The backup volsetrestore command restores all volumes in a volume set or restores one or more individual volumes. The command is useful for recovering from catastrophic losses of data, such as the loss of all volumes on multiple partitions of a file server machine or the loss of multiple partitions from multiple file server machines. The backup volsetrestore command can restore specialized collections of volumes, as well as restore different volumes to different sites. In contrast, the backup volrestore command restores one or more volumes to a single site, and the backup diskrestore command restores all volumes that reside on a single partition to the same partition.

The syntax of the backup volsetrestore command follows:

backup volsetrestore [-name <volume set name>] [-file <file name>]     [-portoffset <TC port offset>] [-n] [-localauth] [-cell <cell name>] [-help]

Arguments:

-name
Names a volume set to be restored. The command restores all of the volumes in each of the entries in the specified volume set. You must use either this argument or the -file argument. Refer to the ``Using the -name Argument'' section in this description for more information about using this argument.
-file
Specifies the full pathname of a file from which the command is to read the name of each volume to be restored and the site (file server machine and partition) to which the volume is to be restored. Specify each volume and site on a different line, using the following format:
machine partition volume
You must use either this argument or the -name argument. Refer to the ``Using the -file Argument'' section for information about using this argument.
-portoffset
Specifies the port offset number of the tape drive to be used in the operation. If different types of tape drives were used to create the tapes that contain the dump sets from which data is to be restored, use the backup volrestore command instead. Port 0 is the default.
Note: If multiple volumes are to be restored, the port offset order must be the same for all volumes. That is, all full dumps must be done on the same port offset, all first-level incrementals on the same port offset, etc.
-n
Designates that no action is taken. This flag lists all of the volumes to be restored (both the full and incremental dumps of volumes), the location to which the volumes are to be restored, and on which tape the volume is to be found without actually performing the operation. Output from this command can be used as input with the -file argument. Include the other arguments as you would to execute the command. You can use this flag with the -name argument to write a list of volumes to a file, which you can then modify for use with the -file argument. See the ``Output'' section for information about using the -n flag.
-localauth
Assigns the backup volsetrestore command a token that never expires. You need to run the backup volsetrestore command from a file server machine as ``root.'' The -localauth flag instructs the backup volsetrestore command interpreter running on the local file server machine to construct a service ticket using the server encryption key with the highest key version number in the /usr/afs/etc/KeyFile file on the local file server machine. The backup volsetrestore command presents the ticket to the Volume and/or VL Server to use in mutual authentication.

This flag is only useful for commands issued on file server machines, since client machines do not have a /usr/afs/etc/KeyFile file. It is intended for cron-type processes or jobs included in the machine's /usr/afs/local/BosConfig file.

The -localauth flag can also be used if the issuer is unable to authenticate to AFS but is logged into the local file system as ``root.''
-cell
Specifies the cell in which the Backup System and volumes that are affected by the backup volsetrestore command reside. The issuer can abbreviate the cell name to the shortest form that distinguishes it from the other cells listed in the /usr/vice/etc/CellServDB file on the local client machine. By default, commands are executed in the local cell, as defined:
  1. First, by the value of the environment variable AFSCELL. 
  2. Second, in the /usr/vice/etc/ThisCell file on the client machine on which the command is issued.
-help
Prints the online help for this command. All other valid arguments specified with this flag are ignored.
Description:

The backup volsetrestore command restores the contents of specified volumes from tape to the file system. The command performs a full restore of each volume, restoring data from the last full dump and all subsequent incremental dumps (if any) of each volume. Use the -name argument or the -file argument to indicate the volumes to be restored.

     
  1. The -name argument lets you restore all of the volumes in a specified volume set. The command reads the Volume Location Database (VLDB) to determine the volumes to be restored and restores them to the site listed in the VLDB.
  2. The -file argument lets you restore individual volumes specified in a file. The command restores each volume to the site you specify in the file.
The -n flag instructs the command to produce a list of the volumes it would restore without actually restoring any volumes. The command also provides information about the tapes that contain dumps of the volumes. You can use the -n flag with the -file argument to determine the tapes required to restore the indicated volumes. You can also use the -n flag with the -name argument to construct a list of volumes that would be restored with a specified volume set; you can then modify the list of volumes as necessary to produce a file for use with the -file argument. You could create a file for the backup volsetrestore command if you want to restore volumes in a volume set to a different location, restore only a subset of the volume set, or change the order of volume restores within the volume set.
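One possible workflow, sketched here on the assumption that you capture the command's display by redirecting it to a file, is to generate the list with the -n flag, edit it, and feed the result back with the -file argument:

% backup volsetrestore -name data.restore -n > /tmp/restore
    (edit /tmp/restore to remove or retarget volumes)
% backup volsetrestore -file /tmp/restore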

Note that if you restore a volume to a site other than the site that is indicated in the VLDB and if the volume resides in the location specified in the VLDB, the existing version of the volume is removed when the volume is restored and the volume's entry in the VLDB is updated accordingly. If you restore a volume to the site at which it currently exists, the command overwrites the existing version of the volume.

Using the -name Argument:

Use the -name argument of the backup volsetrestore command to restore the volumes included in a specified volume set. The command reads the VLDB to determine all volumes that satisfy fields of the entries in the volume set. It then looks in the Backup Database to determine the tapes that contain the last full dump and all subsequent incremental dumps of each volume. It restores each volume included in an entry in the volume set to the site listed in the VLDB, overwriting any existing version of the volume.

You can specify the name of an existing volume set, or you can define a new volume set and add entries that correspond to the volumes that need to be restored. It can be useful to define a new volume set when you are starting new file servers and want to create a new volume set for backing up these file servers. For example, suppose you need to restore all volumes that reside on the file server machines named fs1.abc.com and fs2.abc.com. You can use the backup addvolset command to create a new volume set. You can then use the backup addvolentry command to add the following entries to the new volume set:

fs1.abc.com.*.*
fs2.abc.com.*.*

These entries indicate all volumes on all partitions on the machines named fs1.abc.com and fs2.abc.com. Once the new volume set is defined, you can issue the backup volsetrestore command, specifying the name of the volume set with the -name argument.
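A sketch of the commands for this example follows; the volume set name fs.recovery is hypothetical, and the backup addvolentry argument names reflect common usage, so verify them against your installed documentation:

% backup addvolset -name fs.recovery
% backup addvolentry -name fs.recovery -server fs1.abc.com -partition .* -volumes .*
% backup addvolentry -name fs.recovery -server fs2.abc.com -partition .* -volumes .*
% backup volsetrestore -name fs.recovery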

For volume sets created for use with the backup volsetrestore command, define entries that match the ReadWrite versions of volumes. The Backup System then searches the Backup Database for a dump of either the ReadWrite or the Backup version of each volume. If you instead define an entry that matches a ReadOnly or Backup volume, the Backup System restores only that volume name (if the ReadWrite volume exists). Also, the volume set expansion may miss volumes that have been dumped.

Using the -file Argument:

Use the -file argument of the backup volsetrestore command to restore each volume that has an entry in a specified file. The command examines the Backup Database to determine the tapes that contain the last full dump and all subsequent incremental dumps of each specified volume. It restores each volume to the site indicated in the specified file.

An entry for a volume in a file to be used with the command must have the following format:

machine partition volume [comments...]

The entry provides the following information:

 
machine
Names the file server machine to which the volume is to be restored.
partition
Names the partition to which the volume is to be restored.
volume
Names the volume to be restored. In general, you should provide the name of the ReadWrite version of the volume.
comments...
All remaining text. The command treats any other text provided with the entry for the volume as a comment and ignores it. This text is optional.
Do not use wildcards (for example, .*) in an entry. Also, do not include a newline character in an entry for a volume; each entry must appear on a single line of the file. Include only a single entry for each volume in the file. The command uses only the first entry for a given volume; it ignores all subsequent entries for the volume.

Output:

If you omit the -n flag, the backup volsetrestore command returns the unique task ID number associated with the restore operation. The task ID number is displayed in the command window directly following the command line and in the Tape Coordinator's monitoring window if the butc command is issued with debug level 1. The task ID number is not the same as the job ID number, which is visible with the (backup) jobs command if the backup volsetrestore command is issued in interactive mode. The task ID number is a temporary number assigned to the task by the Tape Coordinator, whereas the job ID number is a permanent number assigned to the job by the Backup System. Since the job ID number is permanent, it can be referenced. Note that the task ID and job ID numbers are not assigned to the operation until the command actually begins to restore volumes.

If you include the -n flag, the command displays the number of volumes that would be restored, followed by a separate line of information about each volume to be restored (its full and incremental dumps). For each volume, the command provides the following output:

machine partition volume_dumped # as volume_restored; tape_name; pos position number; date

The output provides the following information:

 
machine
The host name of the file server machine to which the volume would be restored (for example, fs1.abc.com).
partition
The name of the partition to which the volume would be restored (for example, /vicepa).
volume_dumped
The name of the volume that was dumped (for example, user.frost). The command displays the name of the Backup version of the volume (for example, user.frost.backup) if that version was dumped.
volume_restored
The name with which the volume would be restored (for example, user.frost). The command always displays the name and volume ID of the ReadWrite version of the volume.
tape_name
The name of the tape that contains the volume (for example, user.full.3).
position number
The position of the volume with respect to other volumes on the tape that contains the dump set (for example, 31 specifies that there are 30 volumes preceding the current one on the tape).
date
The date and time at which the volume was dumped (for example, Wed Jul 13 05:59:01 1994).
The command displays multiple lines of information for a volume if one or more incremental dumps were performed since the last full dump of the volume. The command displays one line of output for the last full dump and one line of output for each incremental dump. It displays the lines in the order in which the dumps would need to be restored, beginning with the full dump. It does not necessarily present all of the lines for a volume consecutively in the order in which the incremental dumps occurred.

If you intend to write the output of the -n flag to a file for use with the -file argument, you may have more than one entry for a volume; the command ignores any additional lines for the volume, but if you wish to exclude a volume you must remove all existing entries for that volume in the file. You do not need to remove the number sign (#) and the information that follows it; the command ignores any characters that follow the third argument on a line.

When the -n flag is included, no task ID and job ID numbers are reported because none are assigned.

Notes:

The amount of time required for the backup volsetrestore command to complete depends on the number of volumes to be restored. However, a restore operation that includes a large number of volumes can take hours to complete. To reduce the amount of time required for the operation, you can execute multiple instances of the command simultaneously, specifying disjoint volume sets with each command if you use the -name argument, or indicating files that list different volumes with each command if you use the -file argument. Depending on how the volumes to be restored were dumped to tape, specifying disjoint volume sets can also enable you to make the most efficient use of your backup tapes when many volumes need to be restored.

Examples:

The following example restores all volumes included in entries in the volume set named data.restore, which was created expressly to restore data to a pair of file server machines on which all data was corrupted due to an error. All volumes are restored to the sites recorded in their entries in the VLDB.

% backup volsetrestore data.restore

Starting restore 
backup: task ID of restore operation: 112
backup: Finished doing restore
The following example restores all volumes that have entries in the file named /tmp/restore:

% backup volsetrestore -file /tmp/restore -portoffset 1

Starting restore 
backup: task ID of restore operation: 113
backup: Finished doing restore
The /tmp/restore file has the following contents:
fs1.abc.com b user.morin
fs1.abc.com b user.vijay
fs1.abc.com b user.pierette
fs2.abc.com c user.frost
fs2.abc.com c user.wvh
fs2.abc.com c user.pbill
... ...
Privilege Required:

The issuer must be listed in the /usr/afs/etc/UserList file for the specified cell.

5.6.3. The backup labeltape Command

AFS 3.4a Changes 
Note: The permanent tape name was set with the -name argument of the backup labeltape command in the AFS 3.4 Beta product. In AFS 3.4a, the permanent tape name is set with the -pname argument.
The backup labeltape command allows you to label tapes explicitly with a permanent name in AFS 3.4a. The Backup System uses the tape's permanent name as the tape is re-used or recycled and prompts for the tape by its permanent name on backup restore operations. A tape keeps its permanent name until the user explicitly changes it using the backup labeltape command with the -pname argument. It is recommended that permanent tape names be unique since that is the tape name that is recorded in the Backup Database and that is requested on backup restore operations. The format of the new -pname argument is

-pname <permanent_tape_name>

where permanent_tape_name specifies the permanent name that the user assigns to the tape.

The new syntax for the backup labeltape command is

backup labeltape [-name <AFS_tape_name>] [-size <tape size in Kbytes, defaults to size in tapeconfig>] [-portoffset <TC port offset>] [-pname <permanent_tape_name>] [-localauth] [-cell <cell name>] [-help]

If the user does not explicitly name a tape with a permanent name, AFS assigns a non-permanent name to the tape as it did previously. The Backup System produces this non-permanent name by concatenating the volume set and dump level with a tape sequence index number (for example, guests.monthly.3). This name is not permanent and changes whenever the tape label is re-written by a backup command (for example, when using the backup dump, backup labeltape, and backup savedb commands). The AFS-assigned non-permanent name is listed in the AFS tape name field in the output resulting from the backup readlabel command.

As in AFS 3.3, the backup labeltape command overwrites the existing tape label and destroys any data on the tape, for example, when the user wishes to recycle a tape that was previously used to store other dumped volumes. If the -pname argument is not supplied with the backup labeltape command, the tape keeps its permanent name. A user can enter a null name to remove the permanent name as shown in the following example:

backup labeltape -pname ""

Note: When you label a tape, the backup labeltape command removes all existing data on the tape. The backup labeltape command also removes all information about the tape's corresponding dump set (both its initial and appended dumps) from the Backup Database.

Note: In AFS 3.3 or earlier, if a user labeled a tape using the -name argument and used that tape in a tape recycling scheme, the AFS Backup System enforced name checking by requesting that the AFS tape name of the volume to be dumped or restored match the name of the tape in the drive. In AFS 3.4a, if users set the permanent tape name using the -pname argument, any pre-existing AFS tape name on the tape label from AFS 3.3 or earlier is set to NULL and the AFS Backup System cannot verify the tape being used for the dump or restore.

5.6.4. The backup readlabel Command

AFS 3.4a Changes 

In AFS 3.4a, the backup readlabel command lists the permanent tape name, which users can assign with the backup labeltape command, and the AFS tape name, which is assigned by AFS, in the output of the command. If you designated a permanent tape name with the backup labeltape command, the command displays the permanent tape name (tape name) and the AFS-assigned tape name (AFS tape name), as shown in the following output:

Tape label
tape name = monthly.guest.dump
AFS tape name = guests.monthly.3
creationTime = Sun Jan 1 00:10:00 1995
cell = abc.com
size = 2097152 Kbytes
dump path = /monthly
dump id = 78893700
useCount = 5
-- End of tape label --
If you did not designate a permanent tape name, the backup readlabel command displays only the AFS-assigned tape name, as shown in the following output:
Tape label
AFS tape name = guests.monthly.3
creationTime = Wed Feb 1 00:53:20 1995
cell = abc.com
size = 2097152 Kbytes
dump path = /monthly
dump id = 791618000
useCount = 1
-- End of tape label --

5.6.5. The backup scantape Command

AFS 3.4a Changes 

In AFS 3.4a, the backup scantape command lists the permanent tape name, which users can assign with the backup labeltape command, and the AFS tape name, which is assigned by AFS, in the output of the command. If you designated a permanent tape name with the backup labeltape command, the command displays the permanent tape name (tape name) and the AFS-assigned tape name (AFS tape name), as shown in the following output:

Tape label
tape name = monthly.guest.dump
AFS tape name = guests.monthly.3
creationTime = Fri Nov 11 05:31:32 1994
cell = abc.com
size = 2097152 Kbytes
dump path = /monthly
dump id = 78893700
useCount = 5
-- End of tape label --

- - volume - -
volume name: user.guest10.backup
volume ID: 1937573829
dumpSetName: guests.monthly
dumpID 697065340
level 0
parentID 0
endTime 0
clonedate Fri Feb 7 05:03:23 1995

- - volume - -
volume name: user.guest11.backup
volume ID: 1938519386
dumpSetName: guests.monthly
dumpID 697065340
level 0
parentID 0
endTime 0
clonedate Fri Feb 7 05:05:17 1995
If you did not designate a permanent tape name, the backup scantape command displays only the AFS-assigned tape name, as shown in the following output:
Tape label
AFS tape name = guests.monthly.3
creationTime = Fri Nov 11 05:31:32 1994
cell = abc.com
size = 2097152 Kbytes
dump path = /monthly
dump id = 697065340
useCount = 44
-- End of tape label --

6. The bos Commands 

Back to Table of Contents

This chapter describes changes to the bos command suite for AFS 3.4a. In particular, AFS 3.4a contains changes to the bos addkey command.

These changes are marked with the heading ``AFS 3.4a Changes.''

This chapter also contains changes from the AFS 3.3 release that have not been incorporated into the full AFS documentation set. These changes are marked with the heading ``AFS 3.3 Changes.''

6.1. The bos addkey Command

Back to Table of Contents

AFS 3.4a Changes 

In AFS 3.4a, the bos addkey command has been updated to prompt you twice for the key in the same manner that you are prompted to enter a password during a password change. The prompt follows:

# bos addkey -server <machine name> -kvno 0
Input key:
Retype input key:

If you type the key incorrectly the second time, the command displays the following error message and exits without adding a new key:

Input key mismatch

In AFS 3.3, if the -key argument was not provided on the command line, the command only prompted you to enter the key once.

The bos addkey command has also been updated to prevent you from reusing a key version number currently found in the /usr/afs/etc/KeyFile file. This ensures that users who still have tickets sealed with the current key are not prevented from communicating with the file server because the current key is overwritten with a new key.

AFS 3.3 Changes

In earlier versions of AFS, the bos addkey command required the entry of a new key on the command line. This approach posed many obvious security problems because the key was visible on the screen, in the process entry for the ps command, and in the command history of the issuer's shell.

To prevent these security risks, the -key argument has been made optional on the bos addkey command. If you do not provide the argument on the command line, you are prompted to enter the key in the same way that you are prompted to enter a password during a password change.

The new syntax of the bos addkey command follows:

bos addkey -server <machine name> [-key <key>] -kvno <key version number> [-cell <cell name>] [-noauth] [-localauth] [-help]

6.2. The bos status Command

Back to Table of Contents

AFS 3.3 Changes 

The -long flag with the bos status command now displays the pathnames of notifier programs associated with processes via the bos create command.

7. The fs Commands

Back to Table of Contents

This chapter describes changes to the fs command suite for AFS 3.4a. In particular, AFS 3.4a contains a new command, fs storebehind, and changes to the following fs commands:

     
  1. The fs setserverprefs command
  2. The fs getserverprefs command
  3. The fs checkservers command
  4. The fs exportafs command
These changes are marked with the heading ``AFS 3.4a Changes.''

This chapter also contains changes from the AFS 3.3 release that have not been incorporated into the full AFS documentation set. These changes are marked with the heading ``AFS 3.3 Changes.''

7.1. Setting and Getting Cache Manager Server Preferences

Back to Table of Contents

AFS 3.4a Changes 

AFS 3.4a supports preferences for Volume Location (VL) servers in addition to preferences for file servers. These preferences indicate the file server or VL server machines from which the client machine's Cache Manager prefers to access ReadOnly volumes or VLDB information, respectively. A preference is specified as a server and a rank: the first value is the name or IP address of the server, and the second is the numerical rank to be associated with that server. The smaller the numerical rank, the greater the Cache Manager's preference for selecting that server. The numerical rank can be set by the Cache Manager or explicitly by the user with the fs setserverprefs command.

Each Cache Manager stores a table of preferences for file server and VL server machines. A preference is stored as a file server or VL server machine's Internet Protocol (IP) address and an associated ``rank.'' A machine's rank is an integer in the range from 0 to 65,534 that determines the Cache Manager's preference for selecting that machine when it must access a ReadOnly replica or VLDB information that resides on it. Preferences can bias the Cache Manager to access ReadOnly replicas or VLDB information from machines that are ``near'' rather than from those that are ``distant'' (``near'' and ``distant'' refer to network distance rather than physical distance). Effective preferences can generally reduce network traffic and result in faster access to data.

Most AFS cells have multiple database server machines running the vlserver process. When a Cache Manager needs volume information from the VLDB, it first contacts the VL server with the lowest numerical rank. If that VL server is unavailable, it attempts to contact the VL server with the next lowest rank. If all of a cell's VL servers are unavailable, the Cache Manager will not be able to retrieve files from that cell.

A replicated AFS volume typically has multiple ReadOnly volumes. Each ReadOnly volume provides the same data, but each resides on a different file server. When the Cache Manager needs to access a ReadOnly volume, it first contacts the VL server to determine the IP addresses of the file servers on which the ReadOnly volume resides. The Cache Manager then checks its internal table to determine the rank associated with each of the file server machines. After comparing the ranks of the machines, the Cache Manager attempts to access the ReadOnly volume on the machine that has the lowest integer rank.

If the Cache Manager cannot access the ReadOnly volume on the server with the lowest rank (possibly because of a server process, machine, or network outage), the Cache Manager attempts to access the ReadOnly volume on the server with the next lowest rank. The Cache Manager continues in this manner until it either accesses the ReadOnly volume, or determines that all of the relevant servers are unavailable.

If the Cache Manager is unable to access a server, it marks that server as ``down.'' The server's rank is unchanged, but the Cache Manager does not send requests to the server again until it learns that the server has returned to service.

The Cache Manager assigns preferences to file servers as it accesses files from volumes on those machines; it assigns preferences to VL servers when it is first initialized. The Cache Manager stores the preferences as IP addresses and associated ranks in the kernel of the client machine. Because they are stored in the kernel, the preferences are lost and must be recalculated each time the client machine is rebooted. To rebuild its preferences following initialization, the Cache Manager assigns a default rank to each VL server listed in the /usr/vice/etc/CellServDB file and to each file server that houses a copy of a ReadOnly volume from which it accesses data. To display the Cache Manager's current set of file server or VL server machine preferences, use the fs getserverprefs command. By default, the command displays its output on standard output, but you can direct the output to a specified file.

AFS provides commands for displaying and modifying a Cache Manager's preferences for server machines. The fs getserverprefs command can be used to display a Cache Manager's preferences for file server and VL server machines.

The fs setserverprefs command can be used to set the preference for one or more file server or VL server machines. Preferences are specified with the command as server names and ranks. The first value is the name or IP address of the server; the second is the numerical rank to be associated with that server.

A Cache Manager's file server preferences are potentially derived from four different sources:

     
  1. Explicitly specified with the -servers argument of the fs setserverprefs command
  2. Received from an input file specified with the -file argument of the fs setserverprefs command
  3. Received from standard input specified with the -stdin flag of the fs setserverprefs command
  4. Default values selected by the Cache Manager on the basis of the algorithm described in Section 7.1.1
The arguments and flag are not mutually exclusive, so multiple preferences can be specified with one issuance of the command. You can include the fs setserverprefs command in a machine's initialization file (the rc.afs file or equivalent) to load server preferences at reboot.

The Cache Manager uses preferences input via the fs setserverprefs command, when they exist, in place of existing default preferences. If the combined input contains more than one preference for a particular server machine, the Cache Manager uses the last preference entered. For example, suppose the following input is given for the file server fs1.abc.com in a single issuance of the fs setserverprefs command:

     
  1. The -servers argument sets fs1.abc.com to a rank of 20000
  2. The file specified by the -file argument sets fs1.abc.com to a rank of 25000
  3. The -stdin flag produces an entry for fs1.abc.com set to a rank of 21000
Suppose also that the Cache Manager previously recorded a rank of 25000 for fs1.abc.com. The resulting rank for fs1.abc.com is 21000 because the Cache Manager uses the last rank entered with the fs setserverprefs command (in this case, via the -stdin flag).
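
For illustration only, the combined input described above could be supplied in a single command as follows, assuming that /etc/fs.prefs contains the line ``fs1.abc.com 25000'' and that the rank of 21000 is piped to the command's standard input:

# echo "fs1.abc.com 21000" | fs setserverprefs -servers fs1.abc.com 20000 -file /etc/fs.prefs -stdin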

A Cache Manager's VL server preferences are potentially derived from two different sources:

     
  1. Explicitly specified with the -vlservers argument of the fs setserverprefs command
  2. Default values randomly selected by the Cache Manager in the range from 10000 to 10126
The fs setserverprefs command contains a -vlservers argument that allows you to explicitly set VL server preferences and ranks. The fs getserverprefs command contains a -vlservers flag that allows the Cache Manager's VL server preferences and ranks to be displayed. The AFS 3.4a Cache Manager supports preferences for VL servers; the Cache Manager does not contact Protection, Authentication, or Backup Database servers.

7.1.1. The fs setserverprefs Command

AFS 3.4a Changes

In addition to file server preferences, the fs setserverprefs command can set preferences for Volume Location (VL) servers via the -vlservers argument. This section contains the revised command reference page for the fs setserverprefs command.

fs setserverprefs [-servers <fileserver names and ranks>+] [-vlservers <VL server names and ranks>+] [-file <input from named file>] [-stdin] [-help]

Acceptable Abbreviations/Aliases: 

fs sets [-se <fileserver names and ranks>+] [-vl <VL server names and ranks>+] [-f <input from named file>] [-st] [-h]

fs sp [-se <fileserver names and ranks>+] [-vl <VL server names and ranks>+] [-f <input from named file>] [-st] [-h]

Description:

Sets the Cache Manager's preferences for one or more file server or VL server machines. These preferences are file server or VL server machines from which the client machine's Cache Manager prefers to access ReadOnly volumes or VLDB information, respectively. The Cache Manager bases its preference on a numerical rank; the lower the numerical rank, the greater the Cache Manager's preference for selecting that file server or VL server. The numerical rank can be set by the Cache Manager or by the user explicitly with the fs setserverprefs command.

Each Cache Manager stores a table of preferences for file server machines and a table of preferences for VL server machines. A preference is stored as a server machine's Internet Protocol (IP) address and an associated ``rank.'' A file server or VL server machine's rank is an integer in the range from 1 to 65,534.

When the Cache Manager needs to access a VL server and look up information in the VLDB, the Cache Manager checks its internal table to see which VL server has the lowest recorded rank. The Cache Manager then attempts to contact the VL server with the lowest rank. If multiple VL servers have the same rank, the Cache Manager selects them in the order in which it finds them in its internal table of preferences.

When the Cache Manager needs to access data from a ReadOnly volume, it first contacts the VL server and accesses the VLDB to determine the names of the file server machines on which a ReadOnly volume resides. If multiple servers house the ReadOnly volume, the Cache Manager consults its preferences for server machines and attempts to access the server with the lowest recorded rank. If multiple servers have the same rank, the Cache Manager selects them in the order in which it received their names from the VL server.

If the Cache Manager cannot access the server with the lowest rank, the Cache Manager attempts to access the server with the next-lowest rank. The Cache Manager continues in this manner until it either succeeds in accessing the ReadOnly volume (or VLDB) or determines that all of the appropriate servers are unavailable.

The Cache Manager stores its server preferences in the kernel of the local machine. The preferences are lost each time the Cache Manager is initialized with the afsd command (each time the client machine is rebooted). After it is initialized, the Cache Manager rebuilds its collection of preferences by assigning a rank to each VL server listed in the /usr/vice/etc/CellServDB file and to each file server that it contacts or that houses a ReadOnly volume from which it accesses data. The Cache Manager makes no distinction between preferences for servers from the local cell and those for servers from a foreign cell. However, default preferences bias the Cache Manager to select servers that are in the same subnetwork or network as the local machine. You can use the fs setserverprefs command to alter the default preferences.

If the fs setserverprefs command specifies a rank for a server for which the Cache Manager has no rank, the command defines the server's initial rank. If the command specifies a rank for a server for which the Cache Manager already has a rank, the command changes the current rank to match the specified rank. You can include the fs setserverprefs command in a machine's initialization file to load a predefined collection of server preferences when the machine is rebooted.

Specifying Preferences:

Using the fs setserverprefs command, you specify preferences as pairs of values. The first value of the pair is the hostname (for example, fs1.abc.com) or IP address, in dotted decimal format, of a file server or VL server; the second value of the pair is the machine's numerical rank, an integer in the range from 0 to 65,520. Note that you must use the -vlservers argument with the fs setserverprefs command to specify VL server preferences for the Cache Manager.

To reduce the chance that all clients consistently assign the same server the lowest rank (and so to provide some load balancing among servers), the Cache Manager adds a random number in the range from 0 (zero) to 14 to each rank that you specify. For example, if you assign a rank of 15,000 to a server, the Cache Manager records the rank as an integer in the range from 15,000 to 15,014.

You can specify servers and their ranks

     
  1. On the command line via the -servers or -vlservers argument. Use the argument to tune the preferences manually in response to system or network adjustments.
  2. From a file via the -file argument. Use the argument to configure one or more Cache Managers with a fixed set of preferences, specifying a file created manually or generated automatically. You can use the fs getserverprefs command to generate a file of preferences that has the proper format.
  3. From standard input via the -stdin flag. Use the flag to pipe preferences to the command from a user-defined process that generates preferences in an acceptable format.
Note: You can specify a unique preference for any of the multihomed addresses available at a multihomed file server machine using the fs setserverprefs command.
You cannot specify VL server preferences with the -file argument or the -stdin flag. You can specify pairs of VL server machines and their ranks explicitly via the -vlservers argument only.

The -servers, -file, and -stdin arguments are not mutually exclusive. You can include any combination of these arguments with the command. Note that the command does not verify the IP addresses specified with any of its arguments. You can add a preference for an invalid IP address; the Cache Manager stores such preferences in the kernel, but it ignores them (the Cache Manager never needs to consult such preferences).

Allowing the Cache Manager to Assign Preferences to File Server Machines:

The Cache Manager calculates default ranks from IP addresses rather than from actual physical considerations such as location or distance. It uses the following heuristic to calculate default ranks for file server machines only:

     
  1. If the local machine is also a file server machine, the machine receives an initial rank of 5000.
  2. Each file server machine in the same subnetwork as the local machine receives an initial rank of 20,000.
  3. Each file server machine in the same network as the local machine receives an initial rank of 30,000.
  4. Each file server machine on the distant ends of point-to-point links from the local machine receives an initial rank of 30,000.
  5. Each file server machine in a different network from the local machine receives an initial rank of 40,000.
  6. Each file server machine for which the Cache Manager cannot determine network information receives an initial rank of 40,000.
As it does with ranks specified with the fs setserverprefs command, the Cache Manager adds a random number in the range from 0 (zero) to 14 to each initial rank that it determines. For example, when it assigns an initial rank of 20,000 to a file server machine in the same subnetwork as the local machine, the Cache Manager records the actual rank as an integer in the range from 20,000 to 20,014.

Arguments:

-servers
Specifies one or more pairs of file server machines and their respective ranks. Separate each machine specification and each rank with one or more spaces. Refer to the section of this command reference page titled ``Specifying Preferences'' for information about specifying file server machines and ranks.
-vlservers
Specifies one or more pairs of VL server machines and their respective ranks. Separate each machine specification and each rank with one or more spaces. Refer to the section of this command reference page titled ``Specifying Preferences'' for information about specifying VL server machines and ranks.
Note: You cannot specify VL server preferences with the -file argument or the -stdin flag. You can specify pairs of VL server machines and their ranks explicitly via the -vlservers argument only.
-file
Specifies the full pathname of a file from which the command is to read pairs of file server machines and their respective ranks. Separate each machine specification from its rank with one or more spaces, and include each paired machine specification and rank on a separate line. You cannot specify VL server preferences with the -file argument. Refer to the section of this command reference page titled ``Specifying Preferences'' for information about specifying file server machines and ranks.
-stdin
Directs the command to read pairs of file server machines and their respective ranks from standard input (stdin). Separate each machine specification and each rank with one or more spaces. You cannot specify VL server preferences with the -stdin flag. Refer to the section of this command reference page titled ``Specifying Preferences'' for information about specifying file server machines and ranks.
-help
Prints the online help for this command. All other valid arguments specified with this flag are ignored.
Examples:
    The following command uses the -servers argument to set the Cache Manager's preferences for the file server machines named fs3.abc.com and fs4.abc.com, the latter of which is specified by its IP address, 128.21.18.100. Assume that the file server machines reside on a different subnetwork in the same network as the local machine, so by default the Cache Manager would assign each a rank of 30,000 plus an integer in the range from 0 to 14. To make the Cache Manager prefer these file server machines over file server machines in other subnetworks in the local network, you can use the fs setserverprefs command to assign these machines ranks of 25,000, to which the Cache Manager adds an integer in the range from 0 to 14.

    # fs setserverprefs -servers fs3.abc.com 25000 128.21.18.100 25000

    The following command uses the -servers argument to set the Cache Manager's preferences for the same two file server machines, but it also uses the -file argument to read a collection of preferences from a file that resides on the local machine in the /etc/fs.prefs file:

    # fs setserverprefs -servers fs3.abc.com 25000 128.21.18.100 25000 -file /etc/fs.prefs

    The /etc/fs.prefs file has the following contents and format:

    128.21.16.214   7500
    128.21.16.212   7500
    121.86.33.41    39000
    121.86.33.34    39000
    121.86.33.36    41000
    121.86.33.37    41000
    Note: If you specify different ranks for the same file server with the -servers argument, the -stdin flag, and the -file argument, the Cache Manager uses the rank specified with the -servers argument.
    The following command uses the -stdin flag to read preferences from standard input (stdin). The preferences are piped to the command from a program, calc_prefs, which was written by the issuer to calculate preferences based on values significant to the local cell.

    # calc_prefs | fs setserverprefs -stdin

    The following command uses the -vlservers argument to set the Cache Manager's preferences for the VL server machines named fs1.abc.com, fs3.abc.com, and fs4.abc.com with ranks of 10000, 30000, and 45000, respectively:

    # fs setserverprefs -vlservers fs1.abc.com 10000 fs3.abc.com 30000 fs4.abc.com 45000

    If you want VL server preferences to survive a reboot, you can add the fs setserverprefs command to your startup files on your client machine.
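
    For example, a line such as the following could be added to the initialization file; the pathname of the fs binary shown here is an assumption and varies by system:

    /usr/afsws/bin/fs setserverprefs -vlservers fs1.abc.com 10000 fs3.abc.com 30000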

Privilege Required:

The issuer must be ``root'' on the local machine.

7.1.2. The fs getserverprefs Command

AFS 3.4a Changes 

In AFS 3.4a, the fs getserverprefs command can display preferences for Volume Location (VL) servers via the -vlservers flag, in addition to file servers. This section contains the revised command reference page for the fs getserverprefs command.

fs getserverprefs [-file <output to named file>] [-numeric] [-vlservers] [-help]

Acceptable Abbreviations/Aliases: 

fs gets [-f <output to named file>] [-n] [-vl] [-h]
fs gp [-f <output to named file>] [-n] [-vl] [-h]

Description:

Displays the Cache Manager's preferences for file server or VL server machines. These preferences are file server or VL server machines from which the client machine's Cache Manager prefers to access ReadOnly volumes or VLDB information, respectively. The Cache Manager bases its preference on a numerical rank; the lower the numerical rank, the greater the Cache Manager's preference for selecting that file server or VL server. The numerical rank can be set by the Cache Manager or by the user explicitly with the fs setserverprefs command. To display VL server preferences, you must specify the -vlservers flag with the fs getserverprefs command. Refer to the Description section of the fs setserverprefs command for a discussion on how the Cache Manager assigns preferences to file servers.

Each Cache Manager stores a table of preferences for file server machines and a table of preferences for VL server machines. A preference is stored as a server machine's Internet Protocol (IP) address and an associated ``rank.'' A file server or VL server machine's rank is an integer in the range from 0 to 65,534. The default rank assigned to a VL server is an integer in the range from 10,000 to 10,126.

The fs getserverprefs command displays file server rank information on standard output (stdout) by default. To write the output to a file instead of standard output (stdout), use the -file argument.
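
Because the file written with the -file argument is in the format that the fs setserverprefs command accepts with its own -file argument, you can, for example, save the current file server preferences and reload them later; the pathname shown here is arbitrary:

% fs getserverprefs -file /tmp/fs.prefs
# fs setserverprefs -file /tmp/fs.prefs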

Arguments:

-file
Specifies the full pathname of a file to which the server preferences (either file server or VL server) are to be written. If the specified file already exists, the command overwrites it. If the pathname is invalid, the command fails. If this argument is not specified, the preferences are displayed on standard output (stdout).
-numeric
Displays the IP addresses rather than the hostnames of the file server or VL server machines in the server preferences that it reports. If this flag is not specified, the hostname (for example, fs1.abc.com) of each machine is displayed.
-vlservers
Displays the hostnames and rankings of VL server machines rather than file server machines.
-help
Prints the online help for this command. All other valid options specified with this option are ignored.
Output:

The fs getserverprefs command displays a separate line of output for each file server or VL server machine for which it maintains a preference. By default, each line consists of the name of a file server machine followed by the machine's rank, as follows:

hostname rank

where hostname is the name of a file server machine, and rank is the rank associated with the machine. If the -numeric flag is included with the command, the command displays the IP address, in dotted decimal format, of each file server machine instead of the machine's name. The command also displays the IP address of any machine whose name it cannot determine (for example, if a network outage prevents it from resolving the address into the name).

Examples:

    The following command displays the preferences (the list of file server machines and their respective ranks) associated with the Cache Manager on the local machine. The local machine belongs to the AFS cell named abc.com; the ranks of the file server machines from the abc.com cell are lower than the ranks of the file server machines from the foreign cell, def.com. The command shows the IP addresses, not the names, of two machines for which names cannot be determined.

    % fs getserverprefs
    fs2.abc.com      20007
    fs3.abc.com      30002
    fs1.abc.com      20011
    fs4.abc.com      30010
    server1.def.com  40002
    121.86.33.34     40000
    server6.def.com  40012
    121.86.33.37     40005

    The following command displays the same Cache Manager's preferences, but the -numeric flag is included with the command to display the IP addresses rather than names of the server machines. The IP address of the local machine is 128.21.16.212. The two file server machines on the same subnetwork as the local machine have ranks of 20,007 and 20,011; the two file server machines on a different subnetwork in the same network as the local machine have ranks of 30,002 and 30,010; the remainder of the file server machines are in a different network, so their ranks range from 40,000 to 40,012.

    % fs getserverprefs -numeric
    128.21.16.214   20007
    128.21.18.99    30002
    128.21.16.212   20011
    128.21.18.100   30010
    121.86.33.41    40002
    121.86.33.34    40000
    121.86.33.36    40012
    121.86.33.37    40005

    The following command displays the Cache Manager preferences for VL servers by specifying the -vlservers flag.

    % fs getserverprefs -vlservers
    fs2.abc.com  10005
    fs3.abc.com  30004
    fs1.abc.com  45003

Privilege Required:

No privileges are required.

7.2. The fs storebehind Command (New)

Back to Table of Contents

AFS 3.4a Changes 

The fs storebehind command is a new command for controlling the timing of data storage from the Cache Manager to the file server. It performs a delayed asynchronous write of the specified file(s) to the file server, allowing the Cache Manager to return control to a closing application program before the final portion of a file is completely transferred to the file server.

This command is useful for accessing and writing very large files in AFS. For example, if you have finished working on a large database file, the fs storebehind command can close the file in the background and asynchronously write it to the file server while you move on to work on something else.

The fs storebehind command does not change the normal AFS open and close file semantics. Note that while the file is in the process of being closed and stored to the file server, the user closing the file still holds the file lock.

You can specify that a particular file (using the -files and -kbytes arguments) or all files (using the -allfiles argument) be stored after control has been returned to the closing application program. In either case, you indicate the maximum amount of data that can remain to be written to the file server after control is returned to the closing application.

The -kbytes and -files arguments must appear together on the command line to define the asynchrony for a file. If you specify only the -kbytes argument, you will see the following message:

fs: you must specify -kbytes and -files together

If you issue the fs storebehind command without arguments or with the -verbose argument, the command displays the current default Cache Manager asynchrony setting (the value for the -allfiles setting). If you issue the fs storebehind command with the -files argument, the command displays the current asynchrony setting for the named file.
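
For example, the following commands query the current settings rather than change them; the output wording shown is illustrative only:

% fs storebehind
Default store asynchrony is 0 kbytes.
% fs storebehind -files test.data
Will store test.data according to default.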

If the delayed close and write on the specified file fails, the application is not notified that the close and write operations failed.

Caution: Before using the fs storebehind command, check the disk quota for the volume to which the specified file belongs, and make certain that writing the file does not exceed the quota; any portion of the file that exceeds the disk quota is lost. If you exceed the disk quota, you will see the following message:
No space left on device

In AFS 3.4a, the default for the Cache Manager store operation is to complete the transfer of a closed file to the file server after returning control to the application invoking the close. In AFS 3.3, the default for the Cache Manager operation was to return control to a closing application program after the final chunk of a file was completely written to the file server.

The functionality of the fs storebehind command in AFS 3.4a (delayed asynchronous writes) was previously provided by the default setting of the afsd command. The default functionality of the AFS 3.4a Cache Manager (complete the transfer of a closed file to the file server) was previously provided by the -waitclose flag of the afsd command; for this reason, the -waitclose flag has no effect on the operation of the Cache Manager in AFS 3.4a.

The syntax for the fs storebehind command follows:

fs storebehind [-kbytes <asynchrony for specified names>] [-files <specific pathnames>+]     [-allfiles <new default (KB)>] [-verbose] [-help]

Arguments:

-kbytes
Specifies the maximum amount of data that can remain to be stored to the file server for the files specified with the -files argument after returning control to the closing application program on the client machine. For example, setting this argument to 100 kbytes means that there can be up to 100 kbytes of data waiting to be written to the file server after returning control to the closing application program. A value of 0 means that all data must be written to the file server before returning control to the closing application. The values for this argument range from 0 to the maximum AFS file size expressed in kbytes (in AFS 3.4a, the maximum file size is 2 GB). The default value is 0.
-files
Specifies a particular file or files to be stored to the file server after returning control to the closing application program. The specified file is closed in this manner only as long as it is cached. When the file is no longer cached, AFS returns to the default Cache Manager operation; that is, files are closed completely and written to the file server after control is returned to the closing application program.
-allfiles
Specifies the maximum amount of data that can remain to be written to the file server for all files in the AFS Cache Manager (other than filenames specified with the -files argument) after returning control to the closing application program. The initial default value is 0, which means files are closed completely and written to the file server after returning control to the closing application program. The values for this argument range from 0 to the maximum AFS file size expressed in kbytes (in AFS 3.4a, the maximum file size is 2 GB).
-verbose
Tells the Cache Manager and file server to report on what they are doing as the command executes.
-help
Prints the online help entry for this command. All other options specified with this option are ignored.
Examples:
    The following command performs a delayed asynchronous write on the test.data file and returns control to the application program when 500 KB of the file remains to be written to the file server.

    % fs storebehind -kbytes 500 -files test.data

    The following command performs a delayed asynchronous write on all files in the client's AFS cache and returns control to the application program when 100 KB of any file remains to be written to the file server.

    % fs storebehind -allfiles 100

    You also can combine the previous examples on the same command line. The following command performs a delayed asynchronous write on the test.data file and returns control to the application program when 500 KB of the file remains to be written to the file server. For all other files in the Cache Manager, the command returns control to the application program when 100 KB remains to be written to the file server.

    % fs storebehind -kbytes 500 -files test.data -allfiles 100

Privilege Required:

The issuer must be ``root'' to set the -files and -allfiles arguments on the command or the issuer must have ``write'' permissions on the file specified with the -files argument.

7.3. The fs checkservers Command

Back to Table of Contents

AFS 3.4a Changes 

The fs checkservers command probes file servers to determine if they are available and reports any file servers that did not respond to the probe. The output of this command has been modified for AFS 3.4a.

The following AFS 3.3 example reports that the machines fs1.abc.com and fs3.abc.com did not respond to the client machine's probe:

% fs checkservers -cell abc.com
These servers are still down: fs1.abc.com fs3.abc.com

In AFS 3.4a, the output of the fs checkservers command has been modified. The new report follows:

% fs checkservers -cell abc.com
These servers unavailable due to network or server problems: fs1.abc.com fs3.abc.com

Each AFS client machine probes the file server machines it has accessed since it came up to determine whether they are available. Specifically, the client probes those file servers that house data it has cached, in its local cell by default or in a cell specified with the -cell argument. If a file server does not respond to a probe, the client assumes the file server is unavailable due to server or network problems.

AFS 3.3 Changes

In previous versions of AFS, the interval between probes was automatically set to 3 minutes. For some uses, a 3-minute probe interval may be too long or too short. Therefore, a new argument, -interval, has been added to the fs checkservers command to allow you to specifically set this interval. The default value is 180 seconds; the maximum and minimum values are 10 minutes (600 seconds) and 1 second, respectively. To check the current length of the interval, specify 0 with the -interval argument.

Only ``root'' can issue the fs checkservers command with the -interval argument. Once set, the probe interval remains set until it is changed via this command or until the client machine is rebooted (at which time it returns to the default setting). If you want the time interval specified by the -interval argument to survive a reboot, you can put the fs checkservers command in the startup files.
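
For example, the following commands set the probe interval to 5 minutes (300 seconds) and then query the current setting; the output wording shown is illustrative only:

# fs checkservers -interval 300
# fs checkservers -interval 0
The current down server probe interval is 300 secs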

7.4. The fs exportafs Command

Back to Table of Contents

AFS 3.4a Changes 

Several modifications have been made to the fs exportafs command syntax for AFS 3.4a.

     
  1. The -state argument of the fs exportafs command has been changed to the -start argument since this argument is used to start and stop the NFS/AFS Translator.
  2. The -uidcheck and -submounts arguments of the fs exportafs command now support an on or off selection.

  3. The -noconvert argument has been changed to the -convert argument so that it is more compatible with the on and off selections for the other arguments used with the fs exportafs command.
When issuing any of these four arguments, you must select on or off. If you do not specify a certain argument, the value of that argument is either of the following:
  1. The default value, if the argument has never been specified before
  2. The value assigned to it in a previous execution of the fs exportafs command
The modified fs exportafs command syntax follows:

fs exportafs -type <exporter name> [-start <start/stop translator (on | off)>] [-convert <convert from afs to unix mode (on | off)>] [-uidcheck <run on strict 'uid check' mode (on | off)>] [-submounts <allow nfs mounts to subdirs of /afs/.. (on | off)>] [-help]

Arguments:

-type
Names the alternate file system for which the setting is to be changed or reported. Only lowercase letters are acceptable. The only legal value is nfs.
-start
Controls whether the machine is accessible as a server of the non-AFS file system or not. The legal values are on, which enables the machine as a server, and off, which makes it inaccessible as a server. If the issuer omits this argument, the output reports the current setting.
-convert
Determines whether the ``group'' and ``other'' mode bits on exported AFS files and directories are converted to match the ``user'' mode bits. The legal values are on, which converts the ``group'' and ``other'' mode bits to match the ``user'' mode bits, and off, which leaves the mode bits as they are in AFS. The default value of this argument is on.
-uidcheck
Prevents users from deleting the tokens of other users with the knfs command. You can use this feature only if users have the same UIDs in the /etc/passwd files (or equivalent) on both the NFS client and the NFS server (the NFS/AFS Translator machine). The legal values are on, which prevents users from deleting the tokens of other users with the knfs command, and off, which does not prevent users from deleting the tokens of other users with the knfs command. The default value of this argument is off.
-submounts
Allows a user to NFS-mount any directory, in addition to the /afs directory, in the AFS filespace. The legal values are on, which allows users to NFS-mount any directory in the AFS filespace, in addition to the /afs directory, and off, which prevents users from NFS-mounting directories in the AFS filespace other than the /afs directory. The default value of this argument is off.
-help
Prints the online help entry for this command. Do not provide any other arguments with this flag.
To find out the current status of the fs exportafs command arguments, execute the following command:

% fs exportafs nfs

To reset all arguments to their default values, execute the following commands:

% fs exportafs nfs off
% fs exportafs nfs on
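
For example, the following command enables the machine as an NFS/AFS Translator and sets each of the other arguments explicitly:

# fs exportafs -type nfs -start on -convert on -uidcheck off -submounts on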

AFS 3.3 Changes

In the past, when using the NFS/AFS Translator, it was easy to mistakenly assign a token to the wrong user, or to delete the wrong user's token, by entering the wrong UID with the knfs command. A new flag, -uidcheck, has been added to the fs exportafs command that, when used, prevents users from assigning and deleting the tokens of other users with the knfs command. You can use this feature only if your users have the same UIDs in the /etc/passwd files (or equivalent) on both the NFS client and the NFS server (the NFS/AFS Translator machine).

7.5. The -cell Argument on fs Commands

Back to Table of Contents

AFS 3.3 Changes

The -cell argument on fs commands now fully expands shortened versions of a cell name (for example, tr is a shortened version of the cell name transarc.com), provided the shortened version is unique. The Cache Manager determines whether a shortened version is unique by consulting the CellServDB file. An example follows the list of affected commands below.

The following fs commands are affected by this change:

     
  1. The fs checkservers command 
  2. The fs getcellstatus command 
  3. The fs mkmount command 
  4. The fs setcell command 
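
For example, in a cell whose CellServDB file lists transarc.com as the only cell name beginning with ``tr,'' the following two commands are equivalent:

% fs checkservers -cell transarc.com
% fs checkservers -cell tr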

7.6. The fs copyacl, fs listacl, and fs setacl Commands

Back to Table of Contents

AFS 3.3 Changes 

Two new flags, -id and -if, have been added to the fs copyacl, fs listacl, and fs setacl commands to allow AFS interaction with Transarc Corporation's AFS/DFS Migration Toolkit™. The new flags provide no functionality outside of the Migration Toolkit.

The new syntax of the commands follows:

fs copyacl -fromdir <source directory (or DFS file)> -todir <destination directory (or DFS file)>+ [-clear] [-id] [-if] [-help]

fs listacl [-path <dir/file path>+] [-id] [-if] [-help]

fs setacl -dir <directory>+ -acl <access list entries>+ [-clear] [-negative] [-id] [-if] [-help]

Refer to the AFS/DFS Migration Toolkit Administration Guide and Reference for more information about these commands.

7.7. The fs newcell Command

Back to Table of Contents

AFS 3.3 Changes 

A new argument, -linkedcell, has been added to the fs newcell command to allow AFS interaction with Transarc Corporation's AFS/DFS Migration Toolkit.

The new syntax of the command follows:

fs newcell -name <cell name> -servers <primary servers>+ [-linkedcell <linked cell name>] [-help]

Refer to the AFS/DFS Migration Toolkit Administration Guide and Reference for more information about the fs newcell command.

8. The fstrace Commands (New)

Back to Table of Contents

AFS 3.4a Changes

This chapter defines the fstrace commands that system administrators employ to trace Cache Manager activity for debugging purposes. It assumes the reader is familiar with the concepts described in the AFS System Administrator's Guide, especially the operation of the AFS Cache Manager. This chapter includes the following sections:

     

  1. Section 8.1, About the fstrace Command Suite
     1. Section 8.1.1, Requirements for Using the fstrace Command Suite
     2. Section 8.1.2, Recommendations for Using the fstrace Command Suite
  2. Section 8.2, Setting the State of an Event Set
  3. Section 8.3, Changing the Size of Trace Logs
  4. Section 8.4, Dumping the Contents of Trace Logs
  5. Section 8.5, Listing Information about Trace Logs
  6. Section 8.6, Listing Information about Event Sets
  7. Section 8.7, Clearing Trace Logs
  8. Section 8.8, Getting Help for Command Usage
In addition, Section 8.9 provides a step-by-step example of a kernel tracing session.

8.1. About the fstrace Command Suite

Back to Table of Contents

The fstrace command suite monitors the internal activity of the Cache Manager and allows you to record, or trace, in detail the processes executed by the AFS Cache Manager. These processes, or events, executed by the Cache Manager comprise the Cache Manager (cm) event set. Examples of cm events are fetching files and looking up information for a listing of files and subdirectories using any form of the ls command.

The functionality of the fstrace command suite replaces the functionality provided by the fs debug command. The fstrace log is not intended to be a continuous log like other AFS logs (FileLog, VLLog, AFSLog, and so on); it is intended only for diagnosing specific problems that occur within the AFS Cache Manager.

Following are the fstrace commands and their respective functions:

  1. The fstrace apropos command provides a short description of commands.
  2. The fstrace clear command clears the trace log.
  3. The fstrace dump command dumps the contents of the trace log.
  4. The fstrace help command provides a description and syntax for commands.
  5. The fstrace lslog command lists information about the trace log.
  6. The fstrace lsset command lists information about the event set.
  7. The fstrace setlog command changes the size of the trace log.
  8. The fstrace setset command sets the state of the event set.
There are two groups of AFS customers and each will have a different purpose for using the fstrace command suite:
     
  1. AFS "source" customers have access to the AFS source code and can diagnose AFS kernel problems at their site. This diagnosis is performed by reading the output of trace logs containing diagnostic messages written by event sets that track specific actions performed by the AFS kernel.
  2. AFS "non-source" customers will be instructed to establish an fstrace log and dump by a Transarc Product Support Representative upon encountering kernel problems. AFS "non-source" customers should mail the contents of the fstrace log dump to their AFS Product Support Representative or copy the dump file into Transarc's AFS file space so that the AFS Product Support Representative can access the file. The fstrace command suite is helpful if you notice certain problems on the client machine.
Some of the reasons to start tracing with the fstrace commands are:
     
  1. Cache consistency problems on the AFS client
  2. Clock synchronization errors between the AFS client and the fileserver process
  3. AFS clients receiving clock information from file servers
  4. Problems accessing a particular volume or file in the AFS filespace
The logging provided by the fstrace utility can be a valuable tool for debugging problems with the AFS Cache Manager. The types of problems where this logging may be useful are Cache Manager access failures, crashes, hangs, or Cache Manager data corruption. It is particularly helpful when the problem is reproducible.

Caution should be used when enabling fstrace since the log can grow in size very quickly; this can use valuable disk space if you are writing to a file in the local file space. Additionally, if the size of the log becomes too large, it may be difficult for AFS Product Support to parse the results for pertinent information.

To use the fstrace kernel tracing utility, you must first enable tracing and reserve, or allocate, space for the trace log with the fstrace setset command. With this command, you can set the cm event set to one of three states:

 
active
Enables tracing for the event set and allocates space in the kernel for the trace log.
inactive
Temporarily disables tracing for the event set; however, the event set continues to allocate kernel space for the log to which it sends data.
dormant
Disables tracing for the event set; furthermore, the event set releases the space occupied by the log to which it sends data. When the cm event set that sends data to the cmfx trace log is in this state, the space allocated for that log is freed or unallocated.
When a problem occurs, set the cm event set to active using the fstrace setset command. When tracing is enabled on a busy AFS client, the volume of events being recorded is significant; therefore, when you are diagnosing problems, restrict AFS activity as much as possible so that unrelated fstrace logging is minimized.

When AFS tracing is enabled, each time a cm event occurs, a message is written to the trace log, cmfx. To diagnose a problem, you may read the output of the trace log and analyze the processes executed by the Cache Manager. The trace log has a default size of 60K; however, its size can be increased or decreased.

If a problem is reproducible, clear the cmfx trace log with the fstrace clear command and reproduce the problem. If the problem is not easily reproduced, keep the state of the event set active until the problem recurs.

To view the contents of the trace log and analyze the cm events, use the fstrace dump command to copy the content lines of the trace log to standard output (stdout) or to a file.

Note: If a particular command is causing problems, it may be helpful to determine the UNIX process id (pid) of that command. The output of the fstrace dump command can later be searched for the given pid to show only those lines associated with the process of the command that exhibits the problem with AFS.
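
The following outline sketches a typical tracing session; the dump filename is arbitrary:

# fstrace setset cm -active
   (reproduce the problem)
# fstrace dump -file /tmp/cmfx.dump
# fstrace setset cm -dormant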

8.1.1. Requirements for Using the fstrace Command Suite

Except for the fstrace help and fstrace apropos commands, which require no privilege, the issuer of the fstrace commands must be "root" on the local machine. Before issuing an fstrace command, verify that you have the necessary privilege.

The Cache Manager catalog file must be in place so that logging can occur. The fstrace command suite uses the standard catalog utilities. The default location is /usr/vice/etc/C/afszcm.cat. The catalog can be placed in another directory if you set the NLSPATH and LANG environment variables accordingly.
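
For example, if the catalog file were moved to a hypothetical directory such as /usr/local/lib/afs/C, a Bourne-shell user could point the catalog utilities at it as follows (%L and %N are the standard NLSPATH substitutions for the LANG value and the catalog name; the exact substitution details may vary by system):

# NLSPATH=/usr/local/lib/afs/%L/%N
# LANG=C
# export NLSPATH LANG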

8.1.2. Recommendations for Using the fstrace Command Suite

Transarc recommends the following with regard to your use of the fstrace command suite:
     
  1. Locate the fstrace binary in the local filespace on the local machine.
  2. Locate the fstrace dump file in the local filespace on the local machine. Logs can get very large in a relatively short period of time (several minutes).
  3. Ensure that you have enough room in the local file space to store the dump file. Be particularly careful if you are using disk quotas on partitions in the local file system.
  4. Attempt to isolate the activity on the local AFS client to a specific task or problem area. For example, if you are having problems accessing a particular volume or file, you should ensure that AFS activity on the local AFS client is kept to a minimum while attempting to access the volume or file, so that the trace will focus on the access of the volume or file.
Keep the fstrace log open for only a short period of time. Ideally, you should begin the trace and keep the log open long enough for a reasonable data sample, make the trace inactive, and dump the trace log. On a busy AFS client that has tracing enabled, the volume of Cache Manager events being recorded can be significant. When debugging an AFS problem, you should restrict AFS activity as much as possible so that unrelated fstrace logging is minimized. In particular, the output of fstrace should not normally be written to AFS, because doing so can itself generate extra fstrace output. Because tracing may have a negative impact on system performance, leave cm tracing in the dormant state when you are not diagnosing problems.

8.2. Setting the State of an Event Set

Back to Table of Contents

The fstrace setset command allows you to specify the state of the cm event set. The state of an event set determines whether information on the events in that event set is logged. To set the state of a kernel event set, you must issue the command on the machine on which the event set resides. The syntax of the command is as follows:

fstrace setset [-set <set_name>+] [-active] [-inactive] [-dormant] [-help]

Arguments:

-set
Specifies the name of the event set to be set. The only valid value is cm. If the -set argument is omitted, the default is cm.
-active
Enables tracing for the event set.
-inactive
Temporarily disables tracing for the event set; however, the event set continues to allocate space occupied by the log to which it sends data.
-dormant
Disables tracing for the event set; furthermore, the event set releases the space occupied by the log to which it sends data. When the cm event set that sends data to the cmfx trace log is in this state, the space allocated for that log is freed or unallocated.
You must be ``root'' on the local machine to use this command.

Example:

The following example sets the state of the cm event set to active.

# fstrace setset cm -active

8.3. Changing the Size of Trace Logs

Back to Table of Contents

The trace log occupies 60K of kernel memory by default. You can change the size of the log with the fstrace setlog command. If the specified log already exists, it is cleared when this command is issued and a new log of the given size is created. Otherwise, a log of the desired size is created when the log is allocated. The syntax of the command is as follows:

fstrace setlog [-log <log_name>+] -buffersize <1-kilobyte_units> [-help]

Arguments:

-log
Specifies the name of the trace log to be affected. The only valid value is cmfx. If the -log argument is omitted, the default is cmfx.
-buffersize
Specifies the number of 1-kilobyte blocks to allocate for the trace log. The default is 60 blocks (60 kilobytes).
Because log data is stored in a finite, circular buffer, some of the data can be overwritten before being read. If this happens, the following message is sent to standard output (stdout) when data is being dumped:

Log wrapped; data missing.

Note: If this message appears in the middle of a dump, which can happen under a heavy work load, it indicates that not all of the log data is being written to the log or that some data is being overwritten. Increasing the size of the log with the fstrace setlog command can alleviate this problem.
You must be ``root'' on the local machine to use this command.

Example:

The following example sets the size of the cmfx kernel trace log to 80 kilobytes.

# fstrace setlog cmfx 80

8.4. Dumping the Contents of Trace Logs

Back to Table of Contents

To view the information in a trace log, you must copy the content lines of the log to standard output (stdout) or to a file. The fstrace dump command dumps trace logs to standard output (stdout) to allow you to analyze the Cache Manager processes. You can also direct the contents of a trace log dump to a file by using the -file argument.

To continuously dump a single trace log, issue the fstrace dump command with the -follow argument. If you want to dump a trace log, it must reside on the local machine. The syntax of the command is as follows:

fstrace dump [-set <set_name>+] [-follow <log_name>+] [-file <output_filename>]     [-sleep <seconds_between_reads>] [-help]

Arguments:

-set
Specifies the name of the event set for which you wish to dump the corresponding log. The only valid argument is cm. You can specify the -set argument or the -follow argument, but not both. If you omit both arguments, the default trace log to be dumped is cmfx, which corresponds to the cm event set.
-follow
Specifies the name of a trace log to dump continuously (the command continues to dump log information as changes to the log occur); a log that is continuously dumped is also cleared after the dump is stopped. Using the -follow argument with the fstrace dump command is analogous to the tail -f command because it runs continuously until interrupted [with ^c (Control-c)], displaying new messages as they are added to the log. The only valid argument is cmfx. You can specify the -follow argument or the -set argument, but not both. If you omit both arguments, the cmfx log is dumped by default.
-file
Indicates the destination for the output. You can specify the full or relative pathname of the output file. If the -file argument is omitted, the output is directed to standard output (stdout).
-sleep
Specifies the number of seconds to wait between dumps. The default value is 10 seconds. This argument can only be used with the -follow argument.
At the beginning of the output of each dump is a header specifying the date and time at which the dump began. The number of logs being dumped is also displayed if the -follow argument is not specified. The header appears as follows:

AFS Trace Dump --
Date: date time
Found n logs.

where date is the starting date of the trace log dump, time is the starting time of the trace log dump, and n specifies the number of logs found by the fstrace dump command.

The following is an example of a trace log dump header:

AFS Trace Dump --
Date: Fri Nov 18 10:44:38 1994
Found 1 logs.

The contents of the log follow the header and consist of messages written to the log from an active event set. The messages written to the log contain the following three components:

     
  1. The timestamp associated with the message (number of seconds from the start of logging)
  2. The process ID or thread ID associated with the message
  3. The message itself
A trace log message is formatted as follows:

time timestamp, pid pid:event message

where timestamp is the number of seconds from the start of trace logging, pid is the process ID number of the Cache Manager event, and event message is the Cache Manager event that corresponds with a function in the AFS source code.

The following is an example of a dumped trace log message:

time 749.641274, pid 3002:Returning code 2 from 19

A catalog file needs to be installed when AFS is installed in order to format the messages that are written to a log file. If your message looks similar to the following, verify that the catalog file (afszcm.cat) was installed in the /usr/vice/etc/C directory:

raw op 232c, time 511.916288, pid 0
p0:Fri Nov 18 10:36:31 1994

If the afszcm.cat file is not in the directory, copy it there from Transarc's distribution location, your cell's distribution location, or your AFS distribution tape.

Every 1024 seconds, a current time message is written to each log. This message has the following format:

time timestamp, pid pid Current time: unix_time

where timestamp is the number of seconds from the start of logging, pid is the process ID number of the Cache Manager event, and unix_time is standard time format since January 1, 1970.

The current time message can be used to determine the actual time associated with each log message. Determine the actual time as follows (a worked example follows these steps):

  1. Locate the log message whose actual time you want to determine.
  2. Search backward through the dump record until you come to a current time message.
  3. If the current time message's timestamp is smaller than the log message's timestamp, subtract the former from the latter. If the current time message's timestamp is larger than the log message's timestamp, add 1024 to the latter and subtract the former from the result.
  4. Add the resulting number to the current time message's unix_time to determine the log message's actual time.
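
For example, using made-up values, suppose the log message of interest is stamped with timestamp 749.641274 and the nearest preceding current time message reads:

time 512.000000, pid 0 Current time: 785173000

Because 512.000000 is smaller than 749.641274, subtract it from the log message's timestamp: 749.641274 - 512.000000 = 237.641274. Adding that result to the current time message's unix_time gives 785173000 + 237 = 785173237 (approximately) as the log message's actual time.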
Because log data is stored in a finite, circular buffer, some of the data can be overwritten before being read. If this happens, the following message appears at the appropriate place in the dump:

Log wrapped; data missing.

Note: If this message appears in the middle of a dump, which can happen under a heavy work load, it indicates that not all of the log data is being written to the log or some data is being overwritten. Increasing the size of the log with the fstrace setlog command can alleviate this problem.
You must be ``root'' on the local machine to use this command.

Example:

The following example creates a dump file with the name cmfx.dump.file.1. Issue the command as a continuous process by adding the -follow and -sleep arguments. Setting the -sleep argument to 10 dumps output from the kernel trace log to the file every 10 seconds.

# fstrace dump -follow cmfx -file cmfx.dump.file.1 -sleep 10

AFS Trace Dump -  
Date: Fri Apr 7 10:54:57 1995
Found 1 logs.
time 32.965783, pid 0: Fri Apr 7 10:45:52 1995
time 32.965783, pid 33657: Close 0x5c39ed8 flags 0x20 
time 32.965897, pid 33657: Gn_close vp 0x5c39ed8 flags 0x20 (returns 0x0)

time 35.159854, pid 10891: Breaking callback for 5bd95e4 states 1024 (volume 0)
time 35.407081, pid 10891: Breaking callback for 5c0fadc states 1024 (volume 0)

8.5. Listing Information about Trace Logs

Back to Table of Contents

The fstrace lslog command displays information about the cmfx trace log. By default, the fstrace lslog command lists only the name of the log. It optionally displays size and allocation information when issued with the -long flag. The syntax is as follows:

fstrace lslog [-set <set_name>+] [-log <log_name>+] [-long] [-help]

Arguments:

-set
Specifies the name of the event set whose corresponding log you want to display. The only valid argument is cm. You may specify the -set argument or the -log argument, but not both. If you omit both arguments, the default trace log to be displayed is cmfx.
-log
Specifies the name of each log you want to display. The only valid argument is cmfx. You may specify the -log argument or the -set argument, but not both. If you omit both arguments, the default trace log to be displayed is cmfx.
-long
Displays the size of the log in 1-kilobyte units and the allocation state of the log. There are two allocation states for the kernel trace log:
  1. allocated - Space is reserved for the log in the kernel. This indicates that the event set that writes to this log is either active (tracing is enabled for the event set) or inactive (tracing is temporarily disabled for the event set; however, the event set continues to reserve space occupied by the log to which it sends data).
  2. unallocated - Space is not reserved for the log in the kernel. This indicates that the event set that writes to this log is dormant (tracing is disabled for the event set; furthermore, the event set releases the space occupied by the log to which it sends data).
When issued without the -long flag, the fstrace lslog command displays only the name of the log.

You must be ``root'' on the local machine to use this command.

Example:

The following example uses the -long flag to display additional information about the cmfx trace log.

# fstrace lslog cmfx -long
Available logs:
cmfx : 60 kbytes (allocated)

8.6. Listing Information about Event Sets

Back to Table of Contents

The fstrace lsset command displays information about the state of the cm event set. The syntax of the command is as follows:

fstrace lsset [-set <set_name>+] [-help]

The -set argument specifies the name of the event set about which information is to be displayed. The only valid argument is cm. If you omit the -set argument, the default is cm.

The output from this command lists the event set and its states. The three event states for the cm event set are:

active
Tracing is enabled for the event set.
inactive
Tracing is temporarily disabled for the event set; however, the event set continues to claim space occupied by the log to which it sends data.
dormant
Tracing is disabled for the event set; furthermore, the event set releases the space occupied by the log to which it sends data. When the cm event set that sends data to the cmfx trace log is in this state, the space allocated for that log is freed or unallocated.
You must be ``root'' on the local machine to use this command.

Example:

The following example displays the event set and its state on the local machine.

# fstrace lsset cm
Available sets:
cm active

8.7. Clearing Trace Logs

Back to Table of Contents

The fstrace clear command clears trace log data by log name or event set. Clearing a log removes its contents, but space remains allocated for the trace log in the kernel; to release that space, set the corresponding event set to the dormant state with the fstrace setset command. When you are no longer concerned with the information in a trace log, you can clear the log. The syntax of the command is as follows:

fstrace clear [-set <set_name>+] [-log <log_name>+] [-help]

If the cmfx kernel trace log already exists and you wish to change the size of the trace log, the fstrace setlog command automatically clears the trace log when a new log of the given size is created.
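
For example, the following command (a sketch; the new size is arbitrary) resizes the existing cmfx trace log to 80 kilobytes, clearing its current contents in the process:

# fstrace setlog cmfx 80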

Arguments:

-set
Specifies the name of the event set whose log you wish to clear. The only valid argument is cm. You can specify the -set argument or the -log argument, but not both. If you omit both arguments, the cmfx log is cleared by default.
-log
Specifies the name of the log to be cleared. The only valid argument is cmfx. You can specify the -log argument or the -set argument, but not both. If you omit both arguments, the cmfx log is cleared by default.
You must be ``root'' on the local machine to use this command.

Examples:

The following example clears the cmfx log used by the cm event set on the local machine.

# fstrace clear cm

The following example also clears the cmfx log on the local machine.

# fstrace clear cmfx

8.8. Getting Help for Command Usage

Back to Table of Contents

The fstrace apropos command and the fstrace help command display the name and a short description for every fstrace command. If the -topic argument is specified, the commands provide the short description for only the command names listed. The fstrace help command provides the syntax along with the short description when the -topic argument is specified. The syntax of the commands is as follows:

fstrace apropos -topic <help string> [-help]

fstrace help [-topic <help string>+] [-help]
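
For example, to display the syntax and short description for a single command, name it with the -topic argument (a hypothetical invocation):

# fstrace help -topic setlog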

8.9. A Sample Kernel Tracing Session

Back to Table of Contents

This section contains a detailed example of the use of the fstrace command suite. Assume that the Cache Manager on the local AFS client machine is having difficulty accessing a volume on one of your cell's file servers. As a result of the problem, you contacted your Transarc Product Support Representative, who requested that you start collecting data in a kernel trace log using the fstrace facility. After collecting a reasonable amount of data in the log, you can send the log contents to Transarc for evaluation. Your Transarc Product Support Representative will provide you with guidelines for setting up the trace log and, after discussing your situation with you, will determine how long you should continue collecting data for a trace.

Before starting the kernel trace log, try to isolate the Cache Manager on the AFS client machine that is experiencing the problem accessing the file. You may need to instruct users to move to another machine to minimize the Cache Manager traffic on this machine. Ensure that you have the fstrace binary in the local file space, and not in AFS, and also place the dump file in the local file space. It is recommended that you use tracing in this manner to minimize the amount of unnecessary AFS traffic that will be logged by the trace log. You must be "root" on the local client machine to use the fstrace command suite. If you attempt to use an fstrace command other than fstrace apropos and fstrace help without being "root," you will see the following error:

fstrace must be run as root

Before starting a kernel trace, check the state of the event set using the fstrace lsset command.

# fstrace lsset cm

If tracing has not been enabled previously or if tracing has been turned off on the client machine, the following output is displayed:

Available sets:
cm inactive

If tracing has been turned off and kernel memory is not allocated for the trace log on the client machine, the following output is displayed:

Available sets:
cm inactive (dormant)

If the current state of the cm event set is inactive or inactive (dormant), turn on kernel tracing by issuing the fstrace setset command with the -active flag.

# fstrace setset cm -active

If tracing is currently enabled on the client machine, the following output is displayed:

Available sets:
cm active

If tracing is currently enabled, you do not need to use the fstrace setset command. However, you should issue the fstrace clear command to clear the contents of the trace log. This action ensures that the trace log contains no data unrelated to the problem that you are currently experiencing with the Cache Manager.

# fstrace clear cm

After checking on the state of the event set, you should check the current state of the kernel trace log using the fstrace lslog command. Use the -long flag with this command to determine the size of the trace log.

# fstrace lslog cmfx -long

If tracing has not been enabled previously or the cm event set was set to active or inactive previously, output similar to the following is displayed:

Available logs:
cmfx : 60 kbytes (allocated)

The fstrace tracing utility allocates 60 kilobytes of memory to the trace log by default. You can increase or decrease the amount of memory allocated to the kernel trace log by setting it with the fstrace setlog command. The number specified with the -buffersize argument represents the number of kilobytes allocated to the kernel trace log. If you want to increase the size of the kernel trace log to 100 kilobytes, issue the following command:

# fstrace setlog cmfx 100

After ensuring that the kernel trace log is configured for your needs, you can set up a file into which you can dump the kernel trace log. For example, create a dump file with the name cmfx.dump.file.1 using the following fstrace dump command. Issue the command as a continuous process by adding the -follow and -sleep arguments. Setting the -sleep argument to 10 dumps output from the kernel trace log to the file every 10 seconds.

# fstrace dump -follow cmfx -file cmfx.dump.file.1 -sleep 10

AFS Trace Dump - 
Date: Fri Apr 7 10:54:57 1995
Found 1 logs.
time 32.965783, pid 0: Fri Apr 7 10:45:52 1995
time 32.965783, pid 33657: Close 0x5c39ed8 flags 0x20
time 32.965897, pid 33657: Gn_close vp 0x5c39ed8 flags 0x20 (returns 0x0)

time 35.159854, pid 10891: Breaking callback for 5bd95e4 states 1024 (volume 0)
time 35.407081, pid 10891: Breaking callback for 5c0fadc states 1024 (volume 0)
...
...
...
time 71.440456, pid 33658: Lookup adp 0x5bbdcf0 name g3oCKs fid (7564fb7e:588d240.2ff978a8.6)
time 71.440569, pid 33658: Returning code 2 from 19
time 71.440619, pid 33658: Gn_lookup vp 0x5bbdcf0 name g3oCKs (returns 0x2)
time 71.464989, pid 38267: Gn_open vp 0x5bbd000 flags 0x0 (returns 0x0)
AFS Trace Dump - Completed

After dumping the trace log to the file cmfx.dump.file.1, send the file to your Transarc Product Support Representative for evaluation.

If you want to clear the trace log, use the fstrace clear command:

# fstrace clear cm

If you want to reclaim the space allocated in the kernel for the cmfx log, issue the following command:

# fstrace setset cm -dormant

9. The kas Commands

Back to Table of Contents

This chapter describes changes to the kas command suite for AFS 3.4a. In particular, AFS 3.4a contains changes to the kas examine command.

AFS 3.4a also contains a change to the kas command ticket lifetime. These changes are marked with the heading ``AFS 3.4a Changes.''

9.1. The kas Command Ticket Lifetime

Back to Table of Contents

AFS 3.4a Changes

The kas command ticket is the ticket you receive from the Authentication server when using any command in the kas command suite. Previously, the ticket lifetime was set to 1 hour. In AFS 3.4a, the ticket lifetime has been changed to 6 hours to enable you to work on extended operations such as large Authentication Database listings.

9.2. The kas examine Command

Back to Table of Contents

AFS 3.4a Changes 

The kas examine command, which displays information for an Authentication Database entry, has been updated to display whether a user can reuse any of his or her last twenty passwords. This value is set by the -reuse argument of the kas setfields command.
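
For example, the following command (a sketch; the user name is hypothetical) prohibits the user smith from reusing any of her last twenty passwords:

% kas setfields smith -reuse no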

The following example shows the privileged user smith examining her own Authentication Database entry with the updated output as it appears in AFS 3.4a. Note the information provided in the last line of output about smith's password reuse status.

% kas examine smith
Password for smith:
User data for smith (ADMIN)
key (0) cksum is 3414844392, last cpw: Thu Dec 23 16:05:44 1993
password will expire: Fri Jul 22 20:44:36 1994
5 consecutive unsuccessful authentications are permitted.
The lock time for this user is 25.5 minutes.
User is not locked.
entry never expires. Max ticket lifetime 100.00 hours.
last mod on Thu Jul 1 08:22:29 1993 by admin
permit password reuse

10. The package Command

Back to Table of Contents

This chapter describes changes to the package command and configuration file lines for AFS 3.4a. In particular, AFS 3.4a allows relative pathnames and contains changes to the following arguments on configuration lines:

     
  1. The minor device number argument
  2. The owner argument
  3. The group argument
These changes are marked with the heading ``AFS 3.4a Changes.''

10.1. The package Command Allows Relative Pathnames

Back to Table of Contents

AFS 3.4a Changes 

In AFS 3.4a, the package command interprets relative pathnames beginning with ``./'', ``../'', or ``/'' specified by the actual file argument of the ``L'' configuration line. The package command also interprets ``:'' and ``!'' characters contained within a pathname.

10.2. Changes to minor device number Argument

Back to Table of Contents

AFS 3.4a Changes 

The minor device number argument is specified on the ``B'' and ``C'' configuration file lines with the package command. In AFS 3.4a, the package command interprets the number specified by the minor device number argument as a hexadecimal number, an octal number, or a decimal number.

Previously, the package command interpreted the minor device number as a decimal number only.

The package command continues to interpret the major device number as a decimal number only.

10.3. Changes to owner Argument

Back to Table of Contents

AFS 3.4a Changes 

The owner argument (formerly known as the owner name argument) is specified on the ``B,'' ``C,'' ``D,'' ``F,'' ``L,'' and ``S'' configuration file lines with the package command. In AFS 3.4a, the package command interprets the owner argument as either a user name or a user ID (the ``user'' shown in the owner field of ls -l output). Previously, if the package command could not locate the user name, the package command failed.

10.4. Changes to group Argument

Back to Table of Contents

AFS 3.4a Changes 

The group argument (formerly known as the group name argument) is specified on the ``B,'' ``C,'' ``D,'' ``F,'' ``L,'' and ``S'' configuration file lines with the package command. In AFS 3.4a, the package command interprets the group argument as either a group name or a group ID (the ``group'' shown in the group field of ls -l output). Previously, if the package command could not locate the group name, the package command failed.

11. The uss Commands

Back to Table of Contents

This chapter describes changes to the uss command suite for AFS 3.4a. In particular, AFS 3.4a contains changes to the uss bulk command.

These changes are marked with the heading ``AFS 3.4a Changes.''

This chapter also contains changes from the AFS 3.3 release that have not been incorporated into the full AFS documentation set. These changes are marked with the heading ``AFS 3.3 Changes.''

11.1. The uss bulk Command

Back to Table of Contents

AFS 3.4a Changes 

A new flag, -pipe, has been added to the uss bulk command. The -pipe flag has been added to assist you in running batch jobs without displaying the password prompt. The -pipe flag allows the uss bulk command to accept input piped in from another program.

The new syntax for the uss bulk command follows:

uss bulk -file <bulk input file> [-template <pathname of template file>] [-verbose]
[-cell <cell name>] [-admin <administrator to authenticate>] [-dryrun] [-skipauth]
[-overwrite] [-pwexpires <password expires in [0..254] days (0 => never)>] [-pipe] [-help]
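
For example, a batch job can supply the administrator's password from another program rather than at a prompt (a sketch; the input file, password file, and administrator name are hypothetical):

% cat admin.passwd | uss bulk -file users.bulk -admin admin -pipe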

AFS 3.3 Changes

The documentation correctly states that each type of line in the uss bulk command has a syntax order similar to its corresponding uss command. The syntax of the delete line corresponds to the syntax of the uss delete command and the syntax of the add line corresponds to the syntax of the uss add command.

However, both the AFS System Administrator's Guide and the AFS Command Reference Manual provide incorrect information on the syntax of the add line. The correct syntax of the add line follows:

add <login name> [:<full name>][:<initial passwd>][:<password expires>]
[:<FileServer for home volume>][:<FileServer's disk partition for home volume>]
[:<home directory mount point>][:<uid to assign the user>][:<var1>][:<var2>]
[:<var3>][:<var4>][:<var5>][:<var6>][:<var7>][:<var8>][:<var9>]

11.2. The uss add Command

Back to Table of Contents

AFS 3.3 Changes 

The syntax of the uss add command is incorrect in the AFS documentation. The correct syntax follows:

uss add -user <login name> [-realname <full name in quotes>] [-pass <initial password>]
[-pwexpires <password expires in [0..254] days (0 => never)>]
[-server <FileServer for home volume>] [-partition <FileServer's disk partition for home volume>]
[-mount <home directory mount point>] [-uid <uid to assign the user>]
[-template <pathname of template file>] [-verbose] [-var <auxiliary argument pairs (Num val)>+]
[-cell <cell name>] [-admin <administrator to authenticate>]
[-dryrun] [-skipauth] [-overwrite] [-help]

12. The vos Commands

Back to Table of Contents

This chapter describes changes to the vos command suite for AFS 3.4a. In particular, AFS 3.4a contains changes to the following vos commands:

     
  1. The vos restore command
  2. The vos backup command
  3. The vos create command
  4. The vos release command
  5. The vos rename command
  6. The vos syncserv command
In AFS 3.4a, the vos command also can perform dump and restore operations from a named pipe.

These changes are marked with the heading ``AFS 3.4a Changes.''

This chapter also contains changes from the AFS 3.3 release that have not been incorporated into the full AFS documentation set. These changes are marked with the heading ``AFS 3.3 Changes.''

12.1. The vos restore Command

Back to Table of Contents

AFS 3.4a Changes 

In AFS 3.3, the vos restore command determined whether the volume specified by the -name argument already existed on the partition specified by the -server and -partition arguments. If the volume existed on the specified partition, the vos restore command asked whether you wanted to overwrite the volume. If you entered a yes response, the vos restore command completely overwrote the existing volume; if you entered a no response, the command aborted. It was impossible to perform an incremental restore operation. If the volume did not exist on the specified partition, the vos restore command aborted.

In AFS 3.4a, the vos restore command determines whether the volume specified by the -name argument already exists on the partition specified by the -server and -partition arguments. If the volume exists, the vos restore command prompts you to determine which of the following actions it is to perform:

     
  1. abort - aborts the vos restore command.
  2. full - performs a full restore by completely overwriting the existing volume.
  3. incremental - performs an incremental restore over the existing volume.
The following abbreviations are valid responses to the prompt:
     
  1. a for abort
  2. f for full
  3. inc or increment for incremental
If the volume exists, but not on the specified partition, the vos restore command prompts you to either fully restore or to abort the restore operation. An incremental restore cannot be done.

If standard input cannot be used for a prompt, the default action is to abort the restore operation.

The vos restore command also includes a new -overwrite argument for situations where you do not want to be prompted or where standard input (stdin) is redirected and cannot be used for a prompt. The new command syntax follows:

vos restore -server <machine name> -partition <partition name>
-name <name of volume to be restored> [-file <dump file>] [-id <volume ID>]
[-overwrite <abort | full | incremental>] [-cell <cell name>]
[-noauth] [-localauth] [-verbose] [-help]

The valid abbreviations for the -overwrite argument are the same as those listed as valid responses to the prompt. The default action for the -overwrite argument is abort.

The following are rules for using the vos restore command:

     
  1. If the volume specified with the -name argument exists on the specified partition and the -overwrite argument is specified on the command line, the command performs the action indicated by the -overwrite argument.
  2. If the volume specified with the -name argument does not exist, the -overwrite argument has no effect and a full restore is done. If the -overwrite argument was set to incremental or abort on the command line, a message is displayed stating that a full restore will be done instead.
  3. If the volume specified with the -name argument exists, but is not on the specified partition, and the -overwrite argument is set to full, the command performs a full restore. If the -overwrite argument is set to incremental or abort, the command fails to replace the volume and a message is displayed stating that the operation is being aborted.
  4. If the volume specified with the -name argument exists, but is not on the specified partition, and the -overwrite argument is not specified, the command prompts you for a restore action (abort or full).
  5. If the volume specified with the -name argument exists on the specified partition and the -overwrite argument is not specified, the command performs one of the following actions:

    1. If the -file argument was omitted, the vos restore command aborts and no restore occurs.
    2. If the -file argument was specified, the vos restore command prompts you for a restore action (abort, full, or incremental). The default action is to abort the restore operation.
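
For example, the following command (the server, partition, volume, and dump file names are hypothetical) performs an incremental restore over an existing volume without prompting:

% vos restore -server fs1.abc.com -partition /vicepb -name user.smith -file smith.dump -overwrite incremental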

12.2. The vos backup Command

Back to Table of Contents

AFS 3.4a Changes 

When the vos backup command creates a Backup volume successfully, it returns the following message:

Created backup volume for ReadWrite volume name

However, if the VL server cannot locate the ReadWrite volume at the site listed in the VLDB, the command exits without creating the Backup volume. The command displays the following message telling you that the operation aborted without creating the Backup volume:

vos: can't find volume ID or name 'volumeID or volume name'

Previously, the vos backup command exited without indicating that the Backup volume was not created.

12.3. The vos create Command

Back to Table of Contents

AFS 3.4a Changes 

The vos create command has a new -maxquota argument. The new syntax for the command follows:

vos create -server <machine name> -partition <partition name>
-name <volume name> [-maxquota <initial quota (KB)>]
[-cell <cell name>] [-noauth] [-localauth] [-verbose] [-help]

The -maxquota argument specifies the maximum amount of disk space the volume can use. Express the -maxquota argument in kilobyte blocks (a value of 1024 is one megabyte). A value of 0 grants an unlimited quota, but the size of the disk partition that houses the volume places an absolute limit on the volume's maximum size. The default value for the -maxquota argument is 5000.

Previously, creating a usable volume required three separate steps: the vos create command to create the volume, the fs mkmount command to create the volume's mount point, and the fs setquota command to set the volume's quota. The -maxquota argument has been added to the vos create command to allow you to create the volume and set its quota in a single step. The -maxquota argument does not replace the fs setquota command; you can still use the fs setquota command to set or change the quota of a mounted volume.

The requirement for creating a mount point for a volume is unchanged; after creating the volume with the vos create command, you still need to create a mount point for it using the fs mkmount command.
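
For example, the following commands (a sketch; the server, partition, volume, and mount point names are hypothetical) create a volume with a 10 MB quota and then mount it:

% vos create -server fs1.abc.com -partition /vicepa -name user.smith -maxquota 10240
% fs mkmount /afs/abc.com/usr/smith user.smith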

12.4. The vos release Command

Back to Table of Contents

AFS 3.4a Changes 

In AFS 3.4a, the vos release command can now update up to half of a ReadWrite volume's replicas simultaneously. This is done automatically and internally; no arguments have been added to the vos release command. Previously, the vos release command updated one replica at a time.

12.5. The vos rename Command

Back to Table of Contents

AFS 3.4a Changes 

In AFS 3.4a, a message has been added to inform you that the vos rename command failed because the specified volume does not exist. The message follows:

vos: Could not find entry for volume <oldname>

Previously, if you specified a nonexistent volume with the vos rename command, the command did not inform you that it had failed; the command appeared to have executed properly.

12.6. The vos syncserv Command

Back to Table of Contents

AFS 3.4a Changes 

In AFS 3.4a, the vos syncserv command continues to check all remaining servers even if it cannot contact one or more of the servers. Each time a server cannot be contacted, a message identifying that server is displayed, as in the following example:

Transaction call timed out for server 'fs1.abc.com'

Previously, the vos syncserv command attempted to contact every file server on which a volume resided. If the command could not contact a particular file server, it failed without attempting to contact the remaining file servers. The command displayed a message stating that it could not contact a file server without specifying which file server it attempted to contact.

12.7. Restoring from a Named Pipe

Back to Table of Contents

AFS 3.4a Changes

The vos dump and vos restore commands allow you to dump and restore volumes from a named pipe without timing out the fileserver process. This feature allows AFS to interoperate with third-party backup systems.
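
As a sketch of how a third-party backup system might use this feature (the pipe name, volume name, and reader program are hypothetical), you can create a named pipe, start a vos dump into it, and let the backup program read from the other end:

% mkfifo /tmp/afs.dump.pipe
% vos dump -id user.smith -time 0 -file /tmp/afs.dump.pipe &
% backup_reader /tmp/afs.dump.pipe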

12.8. The vos changeaddr Command

Back to Table of Contents

AFS 3.3 Changes 

Changing the IP address of a file server was a difficult task in earlier versions of AFS. After changing the IP address, you had to run the vos syncserv and vos syncvldb commands and then issue the vos remsite command to remove site information associated with the ReadOnly volumes under the old IP address. A new vos command, vos changeaddr, allows you to change a simple file server's IP address easily.

Note: If you are using AFS 3.4a VL servers, the vos changeaddr command has no effect on file server addresses. AFS 3.4a VL servers automatically register the IP addresses of file server machines upon restarting the fileserver process.
The syntax of the new command follows:

vos changeaddr -oldaddr <original IP address> -newaddr <new IP address> [-cell <cell name>]
[-noauth] [-localauth] [-verbose] [-help]

Arguments:

-oldaddr
Specifies the old IP address of the file server whose IP address you are changing.
-newaddr
Specifies the new IP address of the file server.
-cell
Specifies the cell in which to run the command; the default is the local cell. Do not use this flag with the -localauth flag; the -cell argument and the -localauth flag are mutually exclusive.
-noauth
Tells the Volume Location (VL) servers to assign the identity system:anyuser to the issuer.
-localauth
Constructs a server ticket using a key from the /usr/afs/etc/KeyFile file. Do not use this flag with the -cell argument; the -cell argument and the -localauth flag are mutually exclusive.
-verbose
Tells the Volume and VL servers to report on what they are doing as they execute the command.
-help
Prints the online help for this command. All other valid options specified with this option are ignored.
Note: This command does not change IP addresses contained in any protection groups that you have defined with the pts creategroup command. Use the pts rename command to change IP addresses in existing groups. Changing the IP address of a Ubik database server involves additional changes; refer to the AFS System Administrator's Guide for more information.
Examples:

The following command changes the IP address of a simple file server from 128.21.16.214 to 128.21.16.221:

% vos changeaddr -oldaddr 128.21.16.214 -newaddr 128.21.16.221 -localauth

13. Miscellaneous AFS Commands

Back to Table of Contents

This chapter describes changes to miscellaneous (non-suite) AFS commands for AFS 3.4a. In particular, AFS 3.4a contains changes to the following miscellaneous commands:

     
  1. The afsd command
  2. The butc command
  3. The fileserver command
  4. The klog command
  5. The knfs command
  6. The pagsh command
  7. The salvager command
  8. The scout command
  9. The upclient command
  10. The vldb_convert command
  11. The vlserver command
  12. The volserver command
These changes are marked with the heading ``AFS 3.4a Changes.''

This chapter also contains changes from the AFS 3.3 release that have not been incorporated into the full AFS documentation set. These changes are marked with the heading ``AFS 3.3 Changes.''

13.1. The afsd Command

Back to Table of Contents

AFS 3.4a Changes 

AFS 3.4a contains three changes to the afsd command:

     
  1. AFS compares the cache size to the partition size so that the cache cannot be set to a value that is too large.
  2. The afsd command correctly interprets white space in the /usr/vice/etc/cacheinfo file.
  3. The -waitclose flag of the AFS 3.4a afsd command is redundant.

13.1.1. AFS Compares Cache Size to Partition Size

AFS clients can panic and generate warnings and error messages if the cache size is set too close to or higher than the size of the underlying partition. The AFS Command Reference Manual recommends the following cache sizes: for a disk cache, devote no more than 95% of the partition on which the cache resides; for a memory cache, first determine the maximum amount of memory that processes and commands require, and ensure that at least that much memory remains available after the cache is allocated.

To avoid problems resulting from a disk cache that is too large, AFS now compares the disk cache size to the partition size when you issue the afsd command. If the disk cache size is greater than 95% of the partition size, AFS returns an appropriate message to standard output (stdout) and exits without starting the Cache Manager. You cannot start the Cache Manager until you reduce the size of the disk cache to less than 95% of the partition size.

13.1.2. Correct Interpretation of White Space in the cacheinfo File

Changes have been made to the afsd command's interpretation of the /usr/vice/etc/cacheinfo file, which contains all of the information needed to run the Cache Manager. Previously, the afsd command could not interpret spaces, carriage returns, tabs, or blank lines that were inadvertently inserted into the file. The afsd command failed if it found extra white space while attempting to read the /usr/vice/etc/cacheinfo file. In AFS 3.4a, the afsd command ignores extra white space.

13.1.3. The -waitclose Flag Has No Effect on the afsd Command

In AFS 3.4a, the default for the Cache Manager (afsd) operation is to complete the transfer of a closed file to the file server before returning control to the application invoking the close. In AFS 3.3, the default for the Cache Manager operation was to return control to a closing application program before the final chunk of a file was completely written to the file server.

The default functionality of the fs storebehind command in AFS 3.4a (delayed asynchronous writes) was previously provided by the default setting of the afsd command. The functionality of the AFS 3.4a Cache Manager (complete the transfer of a closed file to the file server) was previously provided by the -waitclose flag of the afsd command; for this reason, the -waitclose flag has no effect on the operation of the Cache Manager in AFS 3.4a.

13.2. The butc Command

Back to Table of Contents

AFS 3.4a Changes 

AFS 3.4a contains two enhancements to the butc command:

     
  1. A -localauth flag has been added to the butc command.
  2. There is a new debugging level associated with the -debuglevel argument.

13.2.1. New -localauth Flag

The butc command now includes the -localauth flag, which assigns the issuer a token that never expires and displays an expiration date of NEVER. It is useful when the issuer wants to run a backup process in the background.

The new syntax of the butc command follows:

butc [-port <port offset>] [-debuglevel < 0 | 1 | 2 >] [-cell <cell name>]
[-aixscsi] [-noautoquery] [-localauth] [-help]

The -localauth flag assigns the butc command a token that never expires. You need to run the butc command with the -localauth flag from a file server machine as ``root.'' This flag instructs the butc command interpreter running on the local file server machine to construct a server ticket using the server encryption key with the highest key version number in the /usr/afs/etc/KeyFile file on the local file server machine. The butc command presents the ticket to the Volume and/or Volume Location (VL) server to use in mutual authentication. This flag is only useful for commands issued on file server machines, since client workstations do not have a /usr/afs/etc/KeyFile file. It is intended for cron-type processes or jobs included in the machine's /usr/afs/local/BosConfig file. The flag can also be used if the issuer is unable to authenticate to AFS but is logged into the local file system as ``root.''

13.2.2. Change to the -debuglevel Argument

In AFS 3.4a, the -debuglevel argument of the butc command, which determines the amount of information the Tape Coordinator displays in the Tape Coordinator window, has three legal values: 0, 1, and 2. The following describes the information supplied by the three legal values:
     
  1. 0 displays the minimum level of detail required to describe Backup Tape Coordinator (butc) operations. The information includes error messages, tape start and finish messages, and prompts for placing new tapes in the drive. This is the default value.
  2. 1 displays names of volumes as they are being dumped to and restored from tape.
  3. 2 displays all messages sent to the tape log file (TL_<device_name>).
In AFS 3.3, the -debuglevel argument had two legal values: 0 and 1.
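
For example, the following command (the port offset of 0 is an assumption) starts a Tape Coordinator that uses a never-expiring token and displays volume names as they are dumped and restored:

# butc -port 0 -debuglevel 1 -localauth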

13.3. The fileserver Command

Back to Table of Contents

AFS 3.4a Changes 

AFS 3.4a contains four enhancements to the fileserver command:

     
  1. Change in default value of implicit rights for the system:administrators group
  2. New -implicit argument to change the default value for the system:administrators group
  3. Change in default value for the -m argument
  4. Change in options shown in usage output

13.3.1. Change in Default Value of Implicit Rights

In AFS 3.4a, the fileserver command gives members of the system:administrators group implicit ``lookup'' (l) and ``administer'' (a) rights on all files in an AFS cell; this is analogous to having an entry of ``system:administrators la'' on the ACL of each file on the affected file server.

Previously, the fileserver command gave members of the system:administrators group only implicit ``administer'' rights on all files. If a member of the system:administrators group wanted to have access to a directory path where he or she did not have explicit ``lookup'' rights, the system administrator had to add ``lookup'' rights to each directory level on the path.

13.3.2. New -implicit Argument

A new argument, -implicit, has been added to the fileserver command. The -implicit argument determines the rights that members of the system:administrators group have for the files on the file server on which the command is issued. The default value for this argument is implicit ``lookup'' (l) and ``administer'' (a) rights for members of the system:administrators group on the files on the affected file server. The -implicit argument allows you to establish different implicit rights for the system:administrators group on a file-server-by-file-server basis.
Note: The -implicit argument always sets a minimum of ``administer'' (a) rights for the system:administrators group. If you issue the -implicit argument with the value ``none,'' the implicit rights for the system:administrators group will be ``administer'' (a).
The new syntax of the fileserver command follows:

fileserver [-d <debug level>] [-p <number of processes>] [-spare <number of spare blocks>]
[-pctspare <percentage spare>] [-b <buffers>] [-l <large vnodes>] [-s <small vnodes>]
[-vc <volume cachesize>] [-w <call back wait interval>] [-cb <number of call backs>]
[-banner <print banner every 10 minutes>] [-novbc <whole volume cbs disabled>]
[-implicit <admin mode bits: rlidwka>]
[-hr <number of hours between refreshing the host cps>] [-m <min percentage spare in partition>]
[-L <large server conf>] [-S <Small server conf>] [-k <stack size>] [-help]
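
For example, to restore the AFS 3.3 behavior of implicit ``administer'' (a) rights only, the fileserver command can be invoked with the value ``none'' (a sketch; in practice the fileserver process is normally started by the bosserver process):

# /usr/afs/bin/fileserver -implicit none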

13.3.3. Change in Default Value of the -m Argument

The -m argument of the fileserver command has been modified; the -m argument affects only machines running the AIX version (rs_aix32, rs_aix41) of AFS. The -m argument specifies the percentage by which the fileserver process allows partitions on the file server machine to exceed their quotas. Previously, the default value for this argument was 5; in AFS 3.4a, the default has been increased to 10.

A disk reserve is a portion of the disk space that is reserved in the event that a fileserver process puts the file server temporarily over its disk space quota.

Note: The AIX version of the fileserver process creates a 10% disk reserve automatically. This is necessary because AIX does not use the BSD standard of keeping a disk reserve.
The fileserver process now alerts you sooner when partitions on the file server machine are approaching their quotas by returning the following error message:

No space left on device

13.3.4. Change in Usage Options

Several options are now reflected in the fileserver command's usage output. The options are as follows:
     
  1. The -rxpck option specifies the number of rx extra packets
  2. The -rxdbg option enables rx debugging
  3. The -rxdbge option enables rxevent debugging
  4. The -lock option keeps the file server from swapping (SGI only)
These options are included for debugging purposes and should only be used with the help of an AFS Product Support Representative.

AFS 3.3 Changes 

Each File Server (fileserver) process generates a key with an infinite lifetime (using the AFS key), which it uses to communicate with the Protection Server (ptserver) process. In earlier versions of AFS, if the AFS key on which the File Server key was based was removed, the File Server could not communicate with the Protection Server because the File Server was still using the old key, which the Protection Server could no longer access. The only way to break this deadlock was to restart the File Server. (When the File Server was restarted, it generated a new key based on the latest AFS key.)

The fileserver program has been changed to remove this deficiency. Now, if a fileserver process is unable to authenticate with the ptserver process, the fileserver process generates a new key based on the latest AFS key and attempts to authenticate again. This change affects cells whose administrators followed Transarc's recommendations on AFS key changes and retirement but did not restart the fileserver processes on a regular basis (if ever); administrators of these cells no longer need to restart their fileserver processes as a result of an AFS key change.

This change does not affect cells whose administrators

     
  1. Never changed AFS keys (not recommended)
  2. Never retired old AFS keys (not recommended)
  3. Restarted the fileserver processes on a regular basis and retained old AFS keys at least as long as the interval between consecutive fileserver process restarts.

13.4. The klog Command

Back to Table of Contents

AFS 3.4a Changes 

The -tmp flag has been removed from the klog command. The -tmp flag is no longer necessary because there is a klog.krb program available to authenticate to AFS from a Kerberos database. The new syntax of the klog command follows:

klog [-x] [-principal <user name>] [-password <user's password>] [-cell <cell name>]
[-servers <explicit list of servers>+] [-pipe] [-silent]
[-lifetime <ticket lifetime in hh[:mm[:ss]]>] [-setpag] [-help]

Use the klog.krb program for Kerberos authentication rather than the klog command with the -tmp flag.

AFS 3.3 Changes

A new flag, -setpag, has been added to the klog command. When run with this flag, the klog command creates a process authentication group (PAG) prior to requesting authentication. The tokens created are then placed in this newly created PAG.

13.5. The knfs Command

Back to Table of Contents

AFS 3.4a Changes 

In AFS 3.4a, if you run the knfs command without the -id argument, the command uses the getuid() function to identify the issuer and grant appropriate permissions to the issuer of the command.

Previously, if you omitted the -id argument from the knfs command, the command defaulted to granting system:anyuser permissions to the issuer.

13.6. The pagsh Command

Back to Table of Contents

AFS 3.4a Changes 

The pagsh command invokes the Bourne shell by default. If you prefer the C shell over the Bourne shell, issue the following command to invoke the C shell:

# pagsh -c /bin/csh

13.7. The salvager Command

Back to Table of Contents

AFS 3.4a Changes 

Two new flags have been added to the salvager command.

 
-showlog
Instructs the Salvager to display on standard output (stdout) all log data that is being written to the /usr/afs/logs/SalvageLog file. This is useful if the user wants to use pipes to search for certain log output or wants to avoid the additional step of looking at the log file.
-showsuid
Displays a list, for each partition, of the pathname of all setuid and setgid files that reside on that partition. This information is useful for administrative purposes.
The new syntax of the salvager command follows:

salvager [initcmd] [-partition <Name of partition to salvage>]
[-volumeid <Volume Id to salvage>] [-debug] [-nowrite] [-inodes]
[-force] [-oktozap] [-rootinodes] [-salvagedirs] [-blockreads]
[-parallel <# of max parallel partition salvaging>] [-tmpdir <Name of dir to place tmp files>]
[-showlog] [-showsuid] [-help]
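
For example, the following invocation (a sketch) runs the Salvager and displays, for each partition, the setuid and setgid files residing there:

# /usr/afs/bin/salvager -showsuid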

13.8. The scout Command

Back to Table of Contents

AFS 3.4a Changes 

In AFS 3.4a, the scout command includes the name of the file server in a message when a problem exists on a partition. An example of the new message for a partition named /vicepx on a file server named fs1.abc.com follows:

Could not get information on server fs1.abc.com partition /vicepx

Previously, when a problem existed on a partition, the scout command displayed the following message:

Could not get information on partition /vicepx

If the server name listed at the top of the screen had scrolled off, the user might not know which server was involved.

13.9. The upclient Command

Back to Table of Contents

AFS 3.4a Changes 

The -level argument of the upclient command has been removed because its functionality is duplicated by the -clear and -crypt flags. The new syntax for the command follows:

upclient <hostname> [-clear] [-crypt] [-t <retry time>] [-verbose] <dir>+ [-help]

Note: The -crypt flag is not available in the international version of this command.

13.10. The vldb_convert Command

Back to Table of Contents

AFS 3.4a Changes 

In addition to its previous Volume Location Database (VLDB) conversion values, the vldb_convert command now converts the VLDB from AFS version 3.4a (4) format to AFS version 3.3 (3) format. The value of 4 is only used with the -from argument.

VLDB upgrade conversions from AFS version 3.3 format to AFS version 3.4a format are not necessary. The version 3.3 VLDB is automatically converted to a version 3.4a VLDB when you upgrade the vlserver binaries.
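
For example, the following command (the database pathname is an assumption based on the standard /usr/afs/db location) converts a VLDB from AFS 3.4a format back to AFS 3.3 format:

# vldb_convert -to 3 -from 4 -path /usr/afs/db/vldb.DB0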

AFS 3.3 Changes

A new flag, -dumpvldb, has been added to the vldb_convert command. The flag directs the command to produce debugging output. The new syntax of the vldb_convert command follows:

vldb_convert [initcmd] [-to <goal version>] [-from <current version>] [-path <pathname>]
[-showversion] [-dumpvldb] [-help]

13.11. The vlserver Command

Back to Table of Contents

AFS 3.4a Changes 

AFS 3.4a contains two changes to the vlserver command:

     
  1. A change in the values for the -p argument
  2. A new log file (/usr/afs/logs/VLLog) for the vlserver process

13.11.1. Change in Values for -p Argument

The -p argument of the vlserver command allows you to set the number of server lightweight processes to run. The default value for the -p argument of the vlserver command has been changed from 4 to 9. The minimum value for this argument is 4, and the maximum value is 16.

13.11.2. New Log File for the vlserver Process

AFS 3.4a supports a log file for the Volume Location (VL) Server (vlserver process). When the vlserver process is started, the VL Server creates an activity log file named /usr/afs/logs/VLLog, if the file does not already exist. When the vlserver process creates a new VLLog file, it copies the existing VLLog file to a file named VLLog.old. You can examine this log file using the bos getlog command. By default, no logging is done by the vlserver process.
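
For example, you can examine the log with a command like the following (the server name is hypothetical):

% bos getlog fs1.abc.com VLLog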

The VLLog file can be set to record three different information levels. You can enable logging in the VLLog file by using the following command:

# kill -TSTP <process id for vlserver>

In the following example, the ps command is run to find the process id of the vlserver process and the kill -TSTP command is run to enable logging in the VLLog file:

# ps -axwu | more
USER PID %CPU %MEM SZ RSS TT STAT START TIME COMMAND
root 93 0.0 0.0 600 ? IW Feb 27 0:00 vlserver
# kill -TSTP 93

Use the same command to increase the current level of logging information (that is, to change from the first level of logging information to the second level or from the second level to the third level). A log entry is created in the VLLog file to indicate any change in the VLLog file detail level.

The first level of information contained in the VLLog file can include the following messages:

     
  1. Create Volume <volume-id>
  2. Delete Volume <volume-id>
  3. Change Addr <addr1> <addr2>
  4. Replace Volume <volume-id>
  5. Update Volume <volume-id>
  6. SetLock Volume <volume-id>
  7. ReleaseLock Volume <volume-id>
  8. GetNewVolid newmax=<volume-id>
The second level of information contained in the VLLog file can include messages related to standard lookup operations, such as the following messages:
     
  1. GetVolumeById <volume-id> (id)
  2. GetVolumeByName <volume-id> (id)
  3. ListAttrs nentries=<count>
  4. GetStats
  5. GetAddrs
The third level of information contained in the VLLog file can include messages related to infrequent lookup operations, such as ListEntry index=<id>.

You can disable logging for the vlserver process with the following command:

# kill -HUP <process id for vlserver>

You can decrease the level of logging for the vlserver process by issuing the following command:

# kill -HUP <process id for vlserver>

Afterwards, issue the following command to obtain the desired level of logging:

# kill -TSTP <process id for vlserver>

13.12. The volserver Command

Back to Table of Contents

AFS 3.4a Changes 

AFS 3.4a contains two changes to the volserver command:

     
  1. A change to the -log flag
  2. A new -p argument to set the number of lightweight processes (LWPs) that the volserver command is to run

13.12.1. Change to the -log Flag

The -log flag causes the Volume Server to record in the /usr/afs/logs/VolserLog file the names of all users who successfully initiate a vos command. In AFS 3.4a, the VolserLog file also contains entries for any file removal activity resulting from use of the vos release command with the -f flag.

13.12.2. New -p Argument

A new argument has been added to the volserver command. The -p argument of the volserver command sets the number of server lightweight processes (LWPs) to run. The minimum value for this argument is 4, and the maximum value is 16. The default is 9.

The new syntax of the volserver command follows:

/usr/afs/bin/volserver [-log] [-p <lwp processes>] [-help]

AFS 3.3 Changes

The -verbose flag has been removed from the volserver command because the flag generates only two possible messages. The functionality of the flag is now part of the base functionality of the command. In other words, the AFS 3.3 version of the volserver command, when run without any flags or arguments, behaves like the AFS 3.2 version of the command when run with the -verbose flag.

13.13. The dlog Command

Back to Table of Contents

AFS 3.3 Changes 

The new dlog command is for use with Transarc Corporation's AFS/DFS Migration Toolkit. The dlog command authenticates the AFS user specified with the -principal argument to the DCE Security Service in the DCE cell specified with the -cell argument. DCE authentication allows the user to access the DCE cell from an AFS client via the Translator Server. The command provides no functionality outside of the Migration Toolkit.

The syntax of the new command follows:

dlog [-principal <user name>] [-cell <cell name>] [-password <user's password>]
[-servers <explicit list of servers>+] [-lifetime <ticket lifetime in hh[:mm[:ss]]>]
[-setpag] [-pipe] [-help]
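
For example, the following command (user and cell names are hypothetical) authenticates the AFS user smith to the DCE Security Service in the DCE cell dce.abc.com:

% dlog -principal smith -cell dce.abc.com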

Refer to the AFS/DFS Migration Toolkit Administration Guide and Reference for more information on the dlog command.

13.14. The dpass Command

Back to Table of Contents

AFS 3.3 Changes 

The new dpass command is for use with Transarc Corporation's AFS/DFS Migration Toolkit. The dpass command returns the DCE password created for a user with the dm pass command. The command provides no functionality outside of the Migration Toolkit.

The syntax of the new command follows:

dpass [-cell <original AFS cell name>] [-help]

Refer to the AFS/DFS Migration Toolkit Administration Guide and Reference for more information on the dpass command.

13.15. The up Command

Back to Table of Contents

AFS 3.3 Changes 

The up command now returns a 0 only if it succeeds; otherwise, the command returns a 1. Formerly, the command always returned a 0, regardless of success or failure.

13.16. The xstat Utility

Back to Table of Contents

AFS 3.3 Changes 

Two new arguments, -frequency and -period, have been added to the two xstat programs (xstat_cm_test and xstat_fs_test):

     
  1. The -frequency argument sets the frequency in seconds at which the program initiates probes to the Cache Manager or file server; formerly, this value was hard coded at 60 seconds; 60 seconds is now the default.
  2. The -period argument sets the program's run time duration in minutes. At the end of this period of time, the program exits. Formerly, this value was hard coded at 10 minutes; 10 minutes is now the default.
The new syntax of the two programs follows:

xstat_cm_test [initcmd] -cmname <Cache Manager name(s) to monitor>+
-collID <Collection(s) to fetch>+ [-onceonly] [-frequency <poll frequency, in seconds>]
[-period <data collection time, in minutes>] [-debug] [-help]

xstat_fs_test [initcmd] -fsname <File Server name(s) to monitor>+
-collID <Collection(s) to fetch>+ [-onceonly] [-frequency <poll frequency, in seconds>]
[-period <data collection time, in minutes>] [-debug] [-help]
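
For example, the following command (the file server name is hypothetical, and the collection ID of 2 is an assumption; see the AFS documentation for the available collections) probes a file server every 30 seconds for 20 minutes:

% xstat_fs_test -fsname fs1.abc.com -collID 2 -frequency 30 -period 20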

13.17. The -help Flag

Back to Table of Contents

AFS 3.3 Changes

In previous versions of AFS, the miscellaneous commands were inconsistent in their use of the -help flag. These commands now consistently use the -help flag to provide information on their syntax:

  1. afsd
  2. afsmonitor
  3. budb_convert
  4. butc
  5. fileserver
  6. fms
  7. kaserver
  8. kdb
  9. klog
  10. knfs
  11. kpasswd
  12. ptserver
  13. runntp
  14. scout
  15. tokens
  16. unlog
  17. upclient
  18. upserver
  19. vldb_convert
  20. vlserver
  21. volserver

14. Additional Functional Changes

Back to Table of Contents

This chapter lists additional functionality added to AFS for the 3.4a release, including:

     
  1. Multihomed file servers
  2. Support for unlinking open files
  3. The fileserver process checks for FORCESALVAGE flag
  4. 8-bit character support for international characters on file and directory names
  5. Support for file server partitions larger than 2 GB
  6. Improved message when the host name is missing from the CellServDB file
  7. The volume name in response to an AFS command, if known by the Cache Manager
  8. New rights for the system:administrators group
  9. Improved database access during Ubik elections
  10. An increase in the number of server partitions
  11. Additional AIX 3.2 support
  12. Improvements to the NFS/AFS Translator
These changes are marked with the heading ``AFS 3.4a Changes.''

This chapter also contains changes from the AFS 3.3 release that have not been incorporated into the full AFS documentation set. These changes are marked with the heading ``AFS 3.3 Changes.''

14.1. Multihomed File Servers

Back to Table of Contents

AFS 3.4a Changes 

Multihomed file servers have multiple IP addresses. A multihomed file server can respond to an RPC via a different IP address than the one initially addressed by a client machine. By making several addresses available to service client requests, a multihomed file server provides multiple paths through which a client machine's Cache Manager can communicate with it, which can increase the availability of computing resources and improve performance.

A multihomed file server can choose to service an RPC through a different IP address if there is heavy network traffic at the IP address that serviced a previous RPC. For example, assume a multihomed file server originally responds to a client machine's service request at the IP address 199.206.34.62. When the client machine sends another service request to the file server, the address 199.206.34.62 is busy servicing the requests of other client machines. When the client machine's Cache Manager realizes that this address is busy, it selects another of the file server's IP addresses, 199.206.34.64, and sends the RPC with the service request to that address.

AFS 3.4a supports up to 16 addresses per multihomed file server machine. File servers register their network addresses with the Volume Location Database (VLDB) upon startup. In AFS 3.3 and earlier versions, file servers were identified in the VLDB by a single IP address. In AFS 3.4a, file servers are represented in the VLDB by a unique host identifier, which the fileserver process creates during startup. This host identifier contains information about all known IP addresses for the file server; these addresses are updated whenever the fileserver process is restarted.

Note: You can specify a unique preference for any of the multihomed addresses available at a file server machine using the fs setserverprefs command.
Note: AFS 3.4a does not support multihomed clients or database (Authentication, Protection, Volume Location, and Backup Databases) servers.
For more information about starting a multihomed file server, refer to Section 3.3.
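
For example, the following command (the server name and rank value are hypothetical) assigns a specific rank to one of a multihomed file server's addresses:

# fs setserverprefs -servers fs1.abc.com 40000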

14.2. Support for Unlinking Open Files

Back to Table of Contents

AFS 3.4a Changes 

AFS 3.4a allows you to unlink open files in the AFS file space. Unlinking an open file is a technique for creating short-lived temporary files when you want to keep these files hidden from other users or do not want to keep a permanent record of the file in your file system. When you unlink an open file, the file server renames the file with a unique file name, .__.afsxxxx, where xxxx is a random numeric or numeric/alphabetic string generated by the file server. The unlinked file's V file in the AFS cache maintains the credentials from the former file and the new filename created by the file server.

The unlinked file does not appear in the output of the ls command for users viewing the contents of the directory. The renamed, unlinked file does appear in the output of the ls -la command, but it can be accessed only by users that have the correct credentials to view the file in AFS (that is, only ``root'' users).

When the unlinked temporary file is closed, the file server removes the file from the disk on the file server machine permanently.

14.3. The fileserver Process Checks for FORCESALVAGE Flag

Back to Table of Contents

AFS 3.4a Changes

When the vfsck process determines that a partition needs to be salvaged, vfsck creates a FORCESALVAGE flag on that partition. Previously, the fileserver process did not check for a FORCESALVAGE flag when rebooting the file server machine after a clean shutdown. The file server attached all volumes even if a partition had a FORCESALVAGE flag.

In AFS 3.4a, when the file server machine is rebooting, the fileserver process looks for a FORCESALVAGE flag. If the fileserver process detects such a flag on a partition, it detaches all of the volumes it has already attached and aborts, sending an appropriate message to the /usr/afs/logs/FileLog file. The fileserver process passes responsibility to the bosserver process, which causes the salvager process to run. After the salvage is complete, the fileserver process attaches all of the volumes properly.

14.4. AFS Supports 8-Bit Characters in Filenames and Directories

Back to Table of Contents

AFS 3.4a Changes 

For international character support, AFS 3.4a supports 8-bit characters in file and directory names. AFS file and directory names were previously restricted to 7-bit characters (ASCII).

14.5. AFS Supports Partitions Larger Than 2 GB

Back to Table of Contents

AFS 3.4a Changes 

AFS 3.4a allows you to create /vicepx partitions larger than 2 GB. The maximum size of a /vicepx partition is the same as the maximum partition size of the local file system supported by the operating system. Refer to your operating system vendor's documentation for details.

Although you can create /vicepx partitions larger than 2 GB, AFS 3.4a does not fully support volumes larger than 2 GB. Files in an AFS volume are still limited to a maximum of 2 GB.

Note: You can read from and write to AFS volumes that are larger than 2 GB, but you cannot perform typical AFS volume operations, such as dumping, restoring, moving, or replicating the volume.
Previous versions of AFS restricted partition sizes to under 2 GB.

14.6. New CellServDB Error Message

Back to Table of Contents

AFS 3.4a Changes

AFS commands that take the -cell argument now produce a clearer error message when a host name is missing from the /usr/vice/etc/CellServDB file. In AFS 3.4a, these commands report that the command failed and that there is a problem with the CellServDB file, and the new message identifies which line of the CellServDB file caused the failure. For example, the klog command now issues the following message:

Can't properly parse host line xxx.xx.xx.xx in configuration file /usr/vice/etc/CellServDB
klog: error reading cell database
Can't get local cell name!

Previously, this message explained that the command had failed but did not indicate that there was a problem with the CellServDB file.
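For reference, each server line in the CellServDB file pairs an IP address with the corresponding host name in a comment field; the cell name and address below are hypothetical:

    >example.com            #Example Corporation cell
    192.12.105.3            #db1.example.com

The new message identifies which such line could not be parsed.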

14.7. Cache Manager May Show Volume Name

Back to Table of Contents

AFS 3.4a Changes 

In AFS 3.4a, if the volume name information is available in the cache, the Cache Manager displays the volume name along with the volume ID when reporting information about a particular volume. The following message is an example of how the Cache Manager may display volume information:

Waiting for busy volume XXX (name) in cell XXX.

Previously in AFS, the Cache Manager displayed only the volume ID when reporting information about a particular volume.

14.8. New Rights for the system:administrators Group

Back to Table of Contents

AFS 3.4a Changes 

In AFS 3.4a, members of the system:administrators group have both administer (a) and lookup (l) rights on the access control list of every directory in the system.

Previously, members of the system:administrators group had only implicit administer (a) rights on the access control list of every directory in the system.
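As an illustration (the cell, path, and user names are hypothetical), even when a directory's access control list does not mention the group, a member of system:administrators can both reach the directory and change its access control list:

    % fs listacl /afs/example.com/usr/pat/private
    Access list for /afs/example.com/usr/pat/private is
    Normal rights:
      pat rlidwka

    % fs setacl /afs/example.com/usr/pat/private pat:friends rl

The fs setacl command succeeds because of the implicit administer (a) right, and the directory can be reached at all because of the lookup (l) right.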

14.9. Improved Database Access During Elections

Back to Table of Contents

AFS 3.4a Changes 

In AFS 3.4a, users are now able to access ReadOnly data from all database servers, even when Ubik cannot attain a quorum; however, they cannot update the data until Ubik has a quorum.

Previously, all database servers except the Protection Server provided ReadOnly data, even when no quorum existed. Users were able to read data but were not able to update the data until Ubik established a quorum.

14.10. Increase in Server Partitions

Back to Table of Contents

AFS 3.4a Changes 

AFS 3.4a supports up to 256 partitions per server. The names of these partitions range from /vicepa to /vicepiv.

In earlier versions of AFS, each server had a maximum of 26 partitions. The names of these partitions ranged from /vicepa to /vicepz.
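The names follow a base-26 scheme: the 26 single-letter names /vicepa through /vicepz, then the two-letter names /vicepaa through /vicepiv (26 + 230 = 256). The following C sketch, which is not part of AFS, shows one way to compute the name for a given partition index:

    #include <stdio.h>

    /* Map a partition index (0-255) to its /vicep name. */
    static void vicep_name(int index, char *buf)
    {
        if (index < 26)
            sprintf(buf, "/vicep%c", 'a' + index);
        else
            sprintf(buf, "/vicep%c%c",
                    'a' + (index - 26) / 26, 'a' + (index - 26) % 26);
    }

    int main(void)
    {
        char name[16];
        vicep_name(0, name);   printf("%s\n", name);  /* /vicepa  */
        vicep_name(25, name);  printf("%s\n", name);  /* /vicepz  */
        vicep_name(26, name);  printf("%s\n", name);  /* /vicepaa */
        vicep_name(255, name); printf("%s\n", name);  /* /vicepiv */
        return 0;
    }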

14.11. Additional AIX 3.2 Support

Back to Table of Contents

AFS 3.4a Changes 

For AIX 3.2 systems only, Transarc has updated the AIX remote (r*) commands. AFS 3.4a now supports all group permission features. For example, entries of the following types are supported:

+@NetGroup
-@NetGroup
-@HostName

Previously, AFS did not support group permission features in the /.rhosts or /etc/hosts.equiv file.

14.12. Changes to the NFS/AFS Translator

Back to Table of Contents

AFS 3.4a Changes

In AFS 3.4a, it is no longer necessary to load the AFS kernel extensions before starting the NFS daemons for the NFS/AFS Translator.

In AFS 3.4a, source customers can build AFS Cache Managers that function as translators.

14.13. Improved Networking Support

Back to Table of Contents

AFS 3.3 Changes 

In AFS 3.3, the Rx Remote Procedure Call (RPC) system can take better advantage of networks with large Maximum Transfer Unit (MTU) values. Previously, the Ethernet MTU of 1500 bytes limited the efficiency of AFS running on high-speed networks such as FDDI. The modifications allow for higher throughput between machines directly attached to the high-speed network.

14.14. Modification to fsync()

Back to Table of Contents

AFS 3.3 Changes

Previously, executing fsync(2) on an AFS file caused changes to the file to be written to the cache device and to the file server machine, but it did not cause the changes to be written to the file server's non-volatile storage. To provide maximum security for user data, fsync(2) now does the latter. This modification further reduces the amount of changed user data that can be jeopardized by a file server crash.

With these changes, fsync(2) consumes slightly more CPU and considerably more disk I/O resources on the file server machine than it previously did. In practice, this facility is infrequently used and the impact of the change is negligible; however, any application that uses fsync(2) heavily will suffer a performance penalty.
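A minimal C sketch of the behavior described above; the AFS path and the helper function are hypothetical and error handling is abbreviated:

    #include <fcntl.h>
    #include <unistd.h>

    /* Hypothetical helper: append a record and force it to stable storage. */
    int save_record(const char *buf, int len)
    {
        int fd = open("/afs/example.com/usr/pat/journal", O_WRONLY | O_APPEND);
        if (fd < 0)
            return -1;

        if (write(fd, buf, len) != len) {
            close(fd);
            return -1;
        }

        /* With the AFS 3.3 change, a successful fsync() means the data has
         * reached the file server's non-volatile storage, not merely the
         * cache device and the server's memory. */
        if (fsync(fd) != 0) {
            close(fd);
            return -1;
        }
        return close(fd);
    }

    int main(void)
    {
        return save_record("balance updated\n", 16) == 0 ? 0 : 1;
    }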

14.15. Version Strings in Binaries

Back to Table of Contents

AFS 3.3 Changes 

Every AFS binary file includes a version string identifying the configuration of sources used to produce the binary. This allows AFS Product Support to more quickly determine which AFS release is being used and which patches (if any) have been applied. Use the what command to display the version string of an AFS binary file. If you do not have the what command, type the following command:

    % strings filename | grep afs3

where filename is the name of the appropriate binary file.

14.16. File Locking Operations

Back to Table of Contents

AFS 3.3 Changes 

AFS 3.3 is limited with respect to file locking, as follows:

  1. AFS does not support byte-range locks. This includes all lockf() calls and those fcntl() calls that specify a byte offset in a file. However, all operations on byte-range locks return a success value of 0. In addition, the first time a program issues a byte-range locking operation, AFS displays the following message:

     afs: byte-range lock/unlock ignored; make sure no one else is running this program.

  2. File locking semantics in AFS are not distributed. Processes on the same workstation competing for locks on the same file obey proper locking semantics. However, processes on different machines competing for locks on the same file may receive EWOULDBLOCK.
  3. Deadlock avoidance does not work in AFS. The fcntl() lock calls that can result in deadlock do not return EDEADLK.
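The following C sketch (with a hypothetical AFS path) illustrates the first limitation: a byte-range fcntl() lock on a file in AFS returns 0 as if it had been granted, even though it is not enforced:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        struct flock fl;
        /* Hypothetical path to a file in AFS. */
        int fd = open("/afs/example.com/shared/data", O_RDWR);
        if (fd < 0)
            return 1;

        fl.l_type = F_WRLCK;    /* request a write lock ...      */
        fl.l_whence = SEEK_SET;
        fl.l_start = 0;         /* ... covering only the first   */
        fl.l_len = 100;         /* 100 bytes of the file         */

        /* In AFS 3.3 this returns 0 as if the lock were granted, but
         * the byte-range lock is ignored; AFS prints its one-time
         * warning instead. */
        if (fcntl(fd, F_SETLK, &fl) == 0)
            printf("byte-range lock call returned success\n");

        close(fd);
        return 0;
    }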

15. Bug Fixes

Back to Table of Contents

AFS 3.4a Changes

AFS 3.4a includes fixes for many bugs in AFS 3.3, a subset of which are described in this chapter. This chapter describes only the most visible fixes included with AFS 3.4a. Unless otherwise noted, these bug fixes do not affect the documentation.

The following backup bugs have been fixed:

  1. The Backup System now reduces the number of calls made to the Volume Location Database (VLDB) and the Backup Database (BUDB). This fix improves the performance of the Backup System when starting a dump operation (especially full dumps).
  2. The Backup System does not read information from the Backup Database until it requires the information. This fix improves the startup time of the backup process.
  3. In AFS 3.4a, backup dump operations now use the most recent dump hierarchy. If the -dump (dump level) argument is omitted, the Backup System bases the dump on the most recently created dump hierarchy. Previously, it could base the dump on a dump hierarchy that was not the most recently created one.
  4. The error message produced when the Backup System cannot locate a host entry in the Backup Database has been improved. Previously, the error message stated:

    backup: Unable to connect to tape coordinator at port TC_port_offset

    Now, the error message states:

    backup: No such host/port entry; Can't connect to tape coordinator at port TC_port_offset

  5. The backup interactive command now displays the Waiting for job message once and waits silently until the backup job completes. Previously, the backup interactive command displayed the Waiting for job message continually while waiting for the backup job to complete. 
  6. The Backup System now waits for text configuration locks. Previously, the Backup System returned an error when it encountered configuration locks.
  7. When restoring a volume by date using the -date argument with the backup volrestore command, the Backup System restores the volume that was cloned closest to the specified date (and not past it). Previously, the Backup System restored the volume whose dump started closest to the specified date (and not past it). 
  8. There is no longer a limit on the number of volumes that you can dump or restore using the Backup System. In previous versions of AFS, the maximum number of volumes that you could dump or restore was 10,000.
  9. The Backup System can now display tapes with an unlimited number of volumes per dump and volumes per tape. In previous versions of AFS, the Backup System displayed only tapes with fewer than 1,000 volumes per tape.
  10. Before dumping volumes to tape, the Backup System now sorts the volumes to be dumped by server and partition. This fix reduces the number of tape changes required to complete disk restores or restores of volumes on the same partition.
  11. You now can abort the backup scantape command using a command that can kill the operation, such as ^c (Control-c). In previous versions of AFS, it was impossible to abort the backup scantape command. 
  12. If you specify the -dbadd flag with the backup scantape command, the command now adds minor entries to the Backup Database as it encounters the information. Previously, the command added entries to the Backup Database after completing a full scan of the tape. 
  13. If a volume fails to dump, the Tape Coordinator checks to see if the volume was moved. If the Tape Coordinator determines that the volume was moved, the Tape Coordinator now dumps the volume from its new location. Previously, the Tape Coordinator did not check to see if the volume was moved and omitted that volume from the dump.
  14. Unmounting a tape no longer causes the entire tape operation to fail. Previously, unmounting a tape could cause the entire tape operation to fail.
  15. The Tape Coordinator now recognizes and correctly reports error messages associated with the volserver process. Previously, the Tape Coordinator did not recognize and correctly report error messages associated with the volserver process. 
  16. The Backup System now determines a volume's clone date at the time of the dump. Previously, the Backup System did not take a changed clone date into account.
  17. The Tape Coordinator no longer times out the connection during a rewind-on-close tape operation. Previously, the Tape Coordinator could time out the connection during such an operation.
  18. The Backup Server now returns the correct host ID for DEC Paramax machines with the backup dbverify command. 
  19. The backup dbverify command now prints clearer status and error messages in the Tape Coordinator's log and error files. 
  20. The Backup Server no longer times out a database dump too early with the backup savedb command. 
The following fs command bugs have been fixed:
  1. Previously in AFS, the fs newcell command allowed setuid privileges by default, although no other command defaulted to allowing these privileges. In AFS 3.4a, the fs newcell command defaults to no setuid privileges.
  2. The fs mariner command is an alias for the fs monitor command. 
The following package command bug has been fixed: 
  1. The -rebootfiles flag of the package command now works as described in the AFS Command Reference Manual.
The following pts command bug has been fixed:
    The pts createuser command does not allow you to create a user with an ID of 0. If you attempted to specify an ID of 0 using the -id argument prior to AFS 3.4a, the pts createuser command created an ID other than 0 and reported the created ID on standard output. For example, if you issued the following command: 

    % pts createuser -name root -id 0

    the command returned a message similar to the following:

    User root has id 100232323

    In AFS 3.4a, the previous command does not allocate an ID, but rather displays the following error message and aborts without creating a user:

    0 isn't a valid user id; aborting

The following bugs in the vos command have been fixed:
    Previously, the vos release command did not take into account any files deleted from the ReadWrite volume. When the ReadWrite volume was released for replication, these deleted files became zero-length files with the same disk allocation given to the original files in the ReadOnly volumes. This resulted in lost disk space, and the only way to recover it was to remove the ReadOnly volume and generate it again.

    As a workaround in AFS 3.4a, issuing the vos release command with the -f flag takes into account any files deleted from the ReadWrite volume and does not copy them into the ReadOnly volume; this recovers some of the lost disk space without requiring you to remove the ReadOnly volume and generate it again.

    The -log argument of the volserver command has changed so that the VolserLog file now records all removal activity resulting from the vos release command when the user specifies the -f flag.

    The -server and -partition arguments of the vos backupsys command now work as described in the AFS Command Reference Manual.

    The vos backupsys command has been fixed so that when a user specifies a volume with the -server and -partition arguments, the command checks whether the volume is a ReadOnly or a ReadWrite and creates a Backup volume only for the ReadWrite version. If the user does not specify a volume with the -server and -partition arguments, the command automatically creates backup volumes for every ReadWrite volume on the specified servers and partitions.

    Previously in AFS, if a user issued the vos backupsys command specifying a volume with the -server and -partition arguments, the command did not check whether the specified volume was a ReadOnly or a ReadWrite. If the user specified a ReadOnly volume, the command may have backed up the ReadOnly volume to the site of the ReadWrite version; a user could then assume the backup copy was the most recent version when it was not, depending on when the ReadOnly version was last updated.

The following bugs have been fixed for the AFS miscellaneous commands:

  1. In AFS 3.4a, the default value for the afsd command's -volumes argument is 128. The afsd command now accepts values within the range of 50 to 3000 for the -volumes argument.

     Previously, the -volumes argument of the afsd command used the value of 128 regardless of the value indicated on the command line. Even if you omitted the -volumes argument and expected the default value of 50 (as stated in the AFS Command Reference Manual), the option used the value of 128.

  2. The butc command now contains a safety check and informs the operator when a requested backup dump operation may destroy important data. The butc command checks the following conditions when recycling a tape:
    1. The dump is overwriting a tape within the most recent dump set. The Backup System displays a warning on standard output (stdout) and in the TE_<device_name> and TL_<device_name> log files, then proceeds with the dump operation:

       Warning: Overwriting most recent dump before current one has finished

    2. The dump is overwriting a tape belonging to the current dump set. The Backup System does not allow this; it displays the following message and prompts you for another tape:

       Can't overwrite tape containing the dump in progress

    3. The dump is overwriting a tape in this dump set's hierarchy. The Backup System displays a warning on standard output (stdout) and in the TE_<device_name> and TL_<device_name> log files, then proceeds with the dump operation:

       Warning: Overwriting parent dump (DumpID number)

    4. A parent (master) dump is not found in the Backup Database. The Backup System displays the following warning:

       Warning: Can't find parent dump number in backup database

AFS 3.3 Changes

AFS 3.3 includes fixes for many bugs in AFS 3.2, a subset of which are described in this chapter. This chapter describes only the most visible fixes included with AFS 3.3. Unless otherwise noted, these bug fixes do not affect the documentation.

The following bug has been fixed for the AFS miscellaneous commands:

  1. The maximum number of afsd stat entries has been set to 3600 on AIX 3.2 machines. It was found that if a site specified a large number of stat entries for the afsd command, the Cache Manager on an AIX 3.2 machine was more likely to panic. To protect against these Cache Manager panics, a maximum limit of 3600 stat entries has been established. If you attempt to exceed this limit by specifying a number greater than 3600 with the -stat argument of the afsd command, the command automatically sets the number of entries to 3600. Although this limit does not eliminate panics when large numbers of stat entries are requested, it should substantially reduce the number of panics.
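For example, the following hypothetical invocation on an AIX 3.2 machine would be silently reduced to 3600 stat entries:

    # /usr/vice/etc/afsd -stat 5000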
The following bug has been fixed for the modified remote (r*) commands:
  1. The rlogin command must be located in the /usr/bin directory rather than the /usr/ucb directory on HP-UX machines.
16. Documentation Corrections

Back to Table of Contents

AFS 3.4a Changes

Previous versions of the AFS documentation contained some incorrect or misleading information. Unless otherwise noted, these documentation errors have not been corrected in the AFS 3.4a documentation:

  1. Step 2 of Section 2.4.1, ``Using the Kernel Extension Facility on AIX Systems,'' on page 2-9 of the AFS Installation Guide states, ``If the machine is not going to act as an NFS/AFS translator:

     # cd /usr/vice/etc
     # ./cfgexport -a export.ext.nonafs''

     Change the command lines to

     # cd /usr/vice/etc/dkload
     # ./cfgexport -a export.ext.nonfs

  2. The section on the operation of the AFS login program on page 2-40 of the AFS System Administrator's Guide states, ``If no AFS token was granted [because of an incorrect password], the login program attempts to log the user into the local file system...'' The AFS login program and documentation are not consistent with some vendors' operating systems. If you cannot authenticate to AFS for any reason, the rules for logging into your local system apply. See the documentation for your particular operating system.

  3. The AFS 3.4a login program includes support for secondary authentication for AIX 3.2; however, if you cannot authenticate to AFS for any reason, the rules for logging into your local system apply. See the AIX vendor's operating system documentation for additional information.
  4. The REQUIREMENTS/RESTRICTIONS sections of the inetd, rcp, and rsh command descriptions on pages 10-5, 10-12, and 10-16, respectively, of the AFS Command Reference Manual contain the following information in a bulleted list:

     ``The following two lines must appear in the /etc/services file on the local machine (as well as on the remote machine running the modified inetd). On NeXT machines, this information must appear in the NetInfo database rather than in /etc/services.

     auth 113/tcp authentication
     ta-rauth 601/tcp rauth''

     TCP ports 113 and 601 are no longer used by AFS, so this description is obsolete.
  5. AFS does not support sockets. (This note is added for the purpose of clarification.)
AFS 3.3 Changes

Previous versions of the AFS documentation contained some incorrect or misleading information. Unless otherwise noted, these documentation errors have not been corrected in the AFS 3.3 documentation:

  1. On page 17-15 of the AFS System Administrator's Guide and page 4-38 of the AFS Command Reference Manual, the following recommendation is given for limiting consecutive failed login attempts: ``For most cells, Transarc Corporation recommends setting the limit on authentication to 5 attempts and the lockout time to 25 minutes.''

     This is not a good recommendation. Instead, you should follow the recommendation listed in the AFS 3.3 Release Notes:

     Recommendation: Transarc Corporation recommends a limit of 9 consecutive failed authentication attempts and a 25-minute lockout time. Although some cells may want to use other limits, these should suffice for most cells.
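To apply these recommended limits to a user account, you can use the kas setfields command (the user name below is hypothetical):

    % kas setfields pat -attempts 9 -locktime 25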



© 1990-1996, Transarc Corporation