4    NHD for Version 5.1B Installation Instructions

This chapter tells you how to prepare for installation, where to get the NHD-7 kit, and how to install it on your Version 5.1B system.

See Chapter 5 for instructions on installing NHD-7 on Version 5.1A systems.

4.1    Preparing for an NHD-7 Installation

This section describes preliminary steps to take before installing the NHD-7 kit.

4.1.1    Preliminary Steps

Follow these steps before you install NHD-7:

  1. If your system is already running Version 5.1B of the operating system, perform a full backup of your system.

  2. Get the NHD-7 kit as described in Section 1.2.

  3. Create an NHD-7 kit CD image as described in Section 4.1.2.

  4. If you are installing from a RIS server, perform the following tasks:

    1. Set up the RIS area as described in Section 4.1.3.1.

    2. Register your system as a RIS client as described in Section 4.1.3.2.

    See the Sharing Software on a Local Area Network manual for more information about RIS.

  5. If your system is already running a version of the operating system, shut down your system.

  6. Upgrade your system to the latest version of firmware for your processor.

  7. Determine the console names of your system disk and of any devices you will use for software distributions, such as the NHD-7 kit, the Tru64 UNIX operating system distribution, and the Associated Products, Volume 2, distribution for TruCluster software.

  8. At the console prompt, set the value of the bootdef_dev variable to null:

    >>> set bootdef_dev ""
    

  9. At the console prompt, set the value of the auto_action variable to halt:

    >>> set auto_action halt
    

  10. At the console prompt, set the value of the boot_osflags variable to a:

    >>> set boot_osflags a
    

  11. Power down your system.

  12. Review your hardware documentation and install your new hardware.

    Note

    If you add supported hardware after NHD-7 is already installed on your system, follow the instructions in Section 4.2.5 to include support for the new hardware in your custom kernel on either the single system or the cluster member where you install the new hardware.

  13. Power up your system.

  14. Install NHD-7 according to the instructions in Section 4.2.

Caution

Before you install NHD-7 onto a system that includes SA5300A series RAID controllers, see the release notes in Section 3.5. Failure to follow these instructions can cause your NHD-7 installation to fail.

4.1.2    Creating an NHD-7 Kit CD Image

If you downloaded an NHD-7 kit from the Web, you must first create a CD image of the kit on disk. (See Section 1.2 for information about downloading a kit.)
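Before building the image, you may want to confirm that the downloaded file is intact. The sketch below uses the POSIX cksum utility; the kit file name and the idea of a checksum file recorded at download time are assumptions for illustration, since the kit download is not guaranteed to include one:

```shell
# Hypothetical integrity check for a downloaded kit file.
# NHD7_kit.tar.gz and NHD7_kit.cksum are placeholder names.
set -e
cd "$(mktemp -d)"

# Stand-ins for the downloaded kit and the checksum recorded at download time.
echo "kit payload" > NHD7_kit.tar.gz
cksum NHD7_kit.tar.gz > NHD7_kit.cksum

# Later, recompute the checksum and compare it with the recorded value.
if [ "$(cksum NHD7_kit.tar.gz)" = "$(cat NHD7_kit.cksum)" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH" >&2
    exit 1
fi
```

A mismatch usually means a truncated or corrupted download; repeat the download before continuing.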

The following steps assume that you have downloaded the NHD-7 kit to /usr/tmp and that you have a spare disk with at least 750 MB of free space to use for the CD image.

Note

This procedure creates a CD image of the NHD-7 kit distribution for installation purposes. It does not allow you to burn a CD-ROM from this image.

  1. Log in as root.

  2. Select a disk of at least 750 MB that is empty or can be overwritten. In this example, this disk will be referred to as dskX.

  3. Use the disklabel -e command to make the a partition on /dev/disk/dskXc the same size as the c partition.

  4. Create a new UFS file system on the spare disk. For example:

    # newfs /dev/rdisk/dsk2c
    

    You will see output similar to this:

    Warning: /dev/rdisk/dsk2c and overlapping partition(s) are marked in use.
    If you continue with the operation you can 
    possibly destroy existing data.
    CONTINUE? [y/n]
    

  5. Enter y to continue. You will see output similar to this:

    /dev/rdisk/dsk2c:       8380080 sectors in 3708 cylinders of 20 tracks, \
        113 sectors
    4091.8MB in 232 cyl groups (16 c/g, 17.66MB/g, 4288 i/g)
    super-block backups (for fsck -b #) at:
    32, 36320, 72608, 108896, 145184, 181472, 217760, 252048,
    290336, 326624, 362912, 399200, 435488, 471776, 508064, 544352,
    
    .
    .
    .
    7813664, 7849952, 7886240, 7922528, 7958816, 7995104, 8031392, 8067680, 8099872, 8136160, 8172448, 8208736, 8245024, 8281312, 8317600, 8353888,

  6. Mount the new file system. For example:

    # mount /dev/disk/dskXc /mnt
    

  7. Change directory to the new file system:

    # cd /mnt
    

  8. Enter the following command to extract the NHD-7 kit into the CD image:

    # gzcat /usr/tmp/NHD7_REV_X7.1.tar.gz | tar xvf -
    

    You will see a list of files as they are extracted.

  9. Return to the root directory and unmount the CD image:

    # cd /
    # umount /mnt
    

You have created an NHD-7 CD image on the disk at /dev/disk/dskXc.
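The extraction command in step 8 is a standard decompress-and-extract pipeline: gzcat writes the uncompressed archive to standard output, and tar reads it from standard input. On systems without gzcat, gzip -dc is equivalent. The following self-contained sketch demonstrates the pipeline with a placeholder archive (the file and directory names are illustrative, not the real kit):

```shell
# Build a small gzip-compressed tar archive standing in for the NHD kit.
set -e
cd "$(mktemp -d)"
mkdir kit
echo "kit contents" > kit/README
tar cf - kit | gzip > NHD_kit.tar.gz
rm -r kit

# Extract it the way step 8 does: decompress to standard output, pipe
# into tar, which reads the archive from stdin ("-") and extracts into
# the current directory (the mounted CD-image file system in the procedure).
gzip -dc NHD_kit.tar.gz | tar xf -

cat kit/README    # prints "kit contents"
```

Because tar extracts relative to the current directory, changing to the mount point first (step 7) is what places the kit files on the CD-image disk.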

4.1.3    Preparing for a RIS Installation

If you are installing NHD-7 from a RIS server, you must first do the following:

  1. Set up the RIS area on the RIS server (Section 4.1.3.1).

  2. Register the RIS client (Section 4.1.3.2).

Note

Although the examples in this section show the NHD-7 distribution on CD-ROM, you can use a CD image created from the downloaded NHD-7 kit, as described in Section 4.1.2.

See the Sharing Software on a Local Area Network manual for more information about RIS. The Troubleshooting RIS chapter is especially helpful if you encounter difficulties.

4.1.3.1    Setting Up the RIS Area

Follow these steps to create a RIS area for NHD-7 on your RIS server:

  1. Use the ris utility to install Version 5.1B of the base operating system into a new RIS area.

    Caution

    Use the standard method to create the RIS area, not the bootlink method.

    Extract the base operating system; do not use symbolic links.

    Optionally, you can install TruCluster Server and Worldwide Language Support in the same RIS area.

  2. Load the NHD-7 CD-ROM into the RIS server's CD-ROM drive.

  3. Mount the NHD-7 distribution. For example:

    # mount /dev/disk/cdrom0a /mnt
    

  4. Run the update_ris script to install the NHD-7 kit into the RIS area. For example:

    # /mnt/tools/update_ris
    

    You will see messages similar to the following:

    Please select one of the following products to add NHD support to
     
     
        1)  /usr/var/adm/ris/ris9.alpha
    	  'Tru64 UNIX V5.1x Operating System (Rev nnnn)'
     
        2)  /usr/var/adm/ris/ris6.alpha
    	  'Tru64 UNIX V5.1x Operating System ( Rev nnnn )'
     
    Enter your selection or press <return> to quit:
    

    Note

    The RIS areas you see depend upon your RIS server.

  5. In this example, enter 2 and press Return. You will see messages similar to the following:

    You are updating ris area /usr/var/adm/ris/ris6.alpha for:
    	V5.1x Operating System ( Rev 1885 )
    with NHD support.
    Is this correct? (y/n):
    

  6. In this example, enter y and press Return. You will see messages similar to the following:

    'Tru64 UNIX New Hardware for V5.1x'
       1    'Tru64 UNIX New Hardware for V5.1x'
    Building new network bootable kernel
    /usr/var/adm/ris/ris6.alpha/kit has been updated with NHD-7 support
     
    

4.1.3.2    Registering the RIS Client

See the Sharing Software on a Local Area Network manual for instructions on how to register RIS clients for a RIS area.

Note

When you register a cluster as a RIS client, remember to register both the cluster alias and the lead cluster member. During client registration, you see the following prompt:

Is this client a cluster alias? (y/n) [n]:
 

4.2    Installing the NHD-7 Kit

This section tells you how to install the NHD-7 kit on a system in one of the following configurations: a single system already running Version 5.1B (Section 4.2.1), a single system during a Full Installation of Version 5.1B (Section 4.2.2), or a cluster running Version 5.1B (Section 4.2.3).

Note

You can install NHD-7 from CD-ROM or from a CD image that you create from the downloaded kit.

If you are installing NHD-7 during a Full Installation, you can install from a RIS area.

4.2.1    Installing on a Single System Running Version 5.1B

Before you start this procedure, you must have the NHD-7 distribution. See Section 1.2 for information about how to get the NHD-7 kit, and see Section 4.1.2 for instructions on creating an NHD-7 kit CD image if you downloaded the kit from the Web.

Note

You cannot use RIS to install NHD-7 on a single system running Version 5.1B.

Caution

Before you install NHD-7 onto a system that includes SA5300A series RAID controllers, see the release notes in Section 3.5. Failure to follow these instructions can cause your NHD-7 installation to fail.

Follow these steps to install NHD-7 on a single system that is already running Version 5.1B of the Tru64 UNIX operating system:

  1. Log in as root.

  2. Mount the NHD-7 CD-ROM or the CD image you created in Section 4.1.2. For example:

    # mount /dev/disk/cdrom0a /mnt
    

    or

    # mount /dev/disk/dskXc /mnt
    

  3. Change directory to the mounted NHD-7 kit. For example:

    # cd /mnt
    

  4. Run the nhd_install script:

    # ./nhd_install
    

    You will see output similar to the following:

    Using kit at /mnt/540
     
    "Would you like to install the V5.1x Tru64 Unix Patches: (y/n)
     
     
    

  5. Answer y and press Return. You will see a list of the patches as they are installed, followed by output similar to the following:

    Checking file system space required to install specified subsets:
     
    File system space checked OK.
     
    1 subsets will be installed.
     
    Loading subset 1 of 1 ...
     
    New Hardware Base System Support V7.0
       Copying from /mnt/540/kit (disk)
            Working....Mon Jul 21 13:59:55 EDT 2003
     
       Verifying
     
    1 of 1 subsets installed successfully.
     
    Configuring "New Hardware Base System Support V7.0" (OSHHWBASE540)
     
    Rebuilding the /GENERIC file to include the kernel modules for the
    new hardware.  This may take a few minutes.
     
    Successful setting of the new version identifier
    Successful switch of the version identifiers
    

  6. At the shell prompt, shut down the system:

    # shutdown -h now
    

  7. At the console prompt, boot the generic kernel. For example:

    >>> boot -fi genvmunix dqb0
    

  8. After the system boots, log in as root.

  9. At the shell prompt, use the doconfig utility to rebuild the custom kernel:

    # doconfig
    

    You will see messages similar to the following:

    *** KERNEL CONFIGURATION AND BUILD PROCEDURE ***
     
    Enter a name for the kernel configuration file. [SYSNAME]: 
     
    A configuration file with the name 'SYSNAME' already exists.
    Do you want to replace it? (y/n) [n]:
    

  10. Enter y and press Return. You will see messages similar to the following:

    Saving /sys/conf/SYSNAME as /sys/conf/SYSNAME.bck
     
    *** KERNEL OPTION SELECTION ***
     
    Selection   Kernel Option
    --------------------------------------------------------------
    1       System V Devices
    2       NTP V3 Kernel Phase Lock Loop (NTP_TIME)
    3       Kernel Breakpoint Debugger (KDEBUG)
    4       Packetfilter driver (PACKETFILTER)
    5       IP-in-IP Tunneling (IPTUNNEL)
    6       IP Version 6 (IPV6)
    7       Point-to-Point Protocol (PPP)
    8       STREAMS pckt module (PCKT)
    9       X/Open Transport Interface (XTISO, TIMOD, TIRDWR)
    10      Digital Versatile Disk File System (DVDFS)
    11      ISO 9660 Compact Disc File System (CDFS)
    12      Audit Subsystem
    13      ATM UNI 3.0/3.1 ILMI (ATMILMI3X)
    14      IP Switching over ATM (ATMIFMP)
    15      LAN Emulation over ATM (LANE)
    16      Classical IP over ATM (ATMIP)
    17      ATM UNI 3.0/3.1 Signalling for SVCs (UNI3X)
    18      Asynchronous Transfer Mode (ATM)
    19      All of the above
    20      None of the above
    21      Help
    22      Display all options again
    --------------------------------------------------------------
     
    Enter your choices.
     
    Choices (for example, 1 2 4-6) [20]:
    

  11. Select the kernel options you want built into your new custom kernel. Include the same options you were already running on your system. For example, if you want to select all listed kernel options, enter 19 and press Return.

    You will see messages similar to the following:

    You selected the following kernel options:
    System V Devices
    NTP V3 Kernel Phase Lock Loop (NTP_TIME)
    Kernel Breakpoint Debugger (KDEBUG)
    Packetfilter driver (PACKETFILTER)
    IP-in-IP Tunneling (IPTUNNEL)
    IP Version 6 (IPV6)
    Point-to-Point Protocol (PPP)
    STREAMS pckt module (PCKT)
    X/Open Transport Interface (XTISO, TIMOD, TIRDWR)
    Digital Versatile Disk File System (DVDFS)
    ISO 9660 Compact Disc File System (CDFS)
    Audit Subsystem
    ATM UNI 3.0/3.1 ILMI (ATMILMI3X)
    IP Switching over ATM (ATMIFMP)
    LAN Emulation over ATM (LANE)
    Classical IP over ATM (ATMIP)
    ATM UNI 3.0/3.1 Signalling for SVCs (UNI3X)
    Asynchronous Transfer Mode (ATM)
     
    Is that correct? (y/n) [y]:
    

  12. Press Return to confirm your selection. You will see the following prompt:

    Do you want to edit the configuration file? (y/n) [n]: 
     
     
    

  13. Press Return. You will see messages similar to the following:

    *** PERFORMING KERNEL BUILD ***
     
    A log file listing special device files is located in /dev/MAKEDEV.log
    Working....Thu Jun 20 14:59:36 EDT 2003
    Working....Thu Jun 20 15:01:53 EDT 2003
    Working....Thu Jun 20 15:05:32 EDT 2003
     
    The new kernel is /sys/SYSNAME/vmunix
    

  14. Copy the new custom kernel to /vmunix. For example:

    # cp /sys/SYSNAME/vmunix /vmunix
    

  15. Shut down the system:

    # shutdown -h now
    

  16. At the console prompt, boot the system with the new custom kernel. For example:

    >>> boot -fi "vmunix" dqb0
    

4.2.2    Installing on a Single System During a Full Installation of Version 5.1B

You can install NHD-7 on a single system during a Full Installation of the operating system from either of the following sources: the NHD-7 CD-ROM or a CD image (Section 4.2.2.1), or a RIS server (Section 4.2.2.2).

See Section 1.2 for information on getting the NHD kit, creating a CD image, and setting up a RIS area.

Caution

Before you install NHD-7 onto a system that includes SA5300A series RAID controllers, see the release notes in Section 3.5. Failure to follow these instructions can cause your NHD-7 installation to fail.

4.2.2.1    Installing from a CD-ROM or CD Image

Before you start this procedure, see the Installation Guide for information about the Full Installation process. You must have both the NHD-7 kit and the Tru64 UNIX Operating System distribution. See Section 1.2 for information about how to get the NHD-7 kit and, if necessary, create an NHD-7 kit CD image.

This process requires multiple CD-ROM swaps, which are documented in the steps that follow. The swaps are required because the addition and deletion of some global symbols in the operating system requires that some of the updated kernel .mod files in the NHD kit be boot-linked in a specific order relative to the .mod files used from the Version 5.1B installation CD-ROMs.

When the CD-ROM needs to be swapped, you are prompted either to insert the media for the NHD kit (Insert media for kit ...) or to insert the boot media (Insert boot media ...).

The following steps describe what you must do if you use two CD-ROMs and one CD drive. If you are using two CD drives or a CD image, adjust the media swaps in these steps accordingly.

  1. If your system is already running a version of the operating system, log in as root and shut down the system.

  2. Insert the Version 5.1B installation media in the CD drive. For the following steps, the drive is assumed to be dqb0.

  3. At the console prompt, type the following:

    >>> set bootdef_dev ""
    

  4. At the console prompt, boot the generic kernel. For example:

    >>> boot -fl fa -fi "GENERIC" dqb0
    

    You will see messages similar to the following:

    (boot dqb0.0.1.16.0 -file GENERIC -flags fa)
    block 0 of dqb0.0.1.16.0 is a valid boot block
    reading 15 blocks from dqb0.0.1.16.0
    bootstrap code read in
    base = 200000, image_start = 0, image_bytes = 1e00
    initializing HWRPB at 2000
    initializing page table at 3ff48000
    initializing machine state
    setting affinity to the primary CPU
    jumping to bootstrap code
     
    UNIX boot - Wednesday, August 01, 2001
     
    Loading GENERIC ...
    Loading at fffffc0000250000
     
    Enter all Foreign Hardware Kit Names.
    Device Names are entered as console names (e.g. dkb100).
     
    Enter Device Name, or <return> if done:
    

    Note

    The message to enter foreign kit names starts the phase of the process where you specify kits and their locations. Do not enter a kit name here. Look at the actual prompt and enter the device name where the NHD-7 kit is located.

  5. Enter the console device name of the CD-ROM drive at the prompt and press Return:

    Enter Device Name, or <return> if done: dqb0 [Return]
    

    You will see a prompt similar to the following:

    Enter Hardware Kit Name, or <return> if done with dqb0:
     
    

  6. Enter the NHD-7 kit name and press Return:

    /540/usr/sys/hardware/base.kit [Return]
    

    You will see a prompt similar to the following:

    Insert media for kit 'dqb0:/540/usr/sys/hardware/base.kit'
    hit <return> when ready, or 'q' to quit this kit:
    

  7. Remove the Tru64 UNIX Operating System CD-ROM, load the New Hardware Delivery CD-ROM, and press Return.

  8. Press Return at each of the following prompts you see:

    Enter Hardware Kit Name, or <return> if done with dqa0: [Return]
    Enter Device Name, or <return> if done: [Return]
     
     
    

    You then see a prompt similar to the following:

    Insert boot media, hit <return>  when ready:
     
    

  9. Remove the New Hardware Delivery CD-ROM, load the Tru64 UNIX Operating System CD-ROM, and press Return. You will see messages similar to the following:

    Linking 207 objects: 207 
    Insert media for kit 'dqb0:/540/usr/sys/hardware/base.kit'
    

  10. Remove the Tru64 UNIX Operating System CD-ROM, load the New Hardware Delivery CD-ROM, and press Return. You will see messages similar to the following:

    Linking 206 objects: 206 
    Insert media for kit 'dqb0:/540/usr/sys/hardware/base.kit'
    

  11. Remove the New Hardware Delivery CD-ROM, load the Tru64 UNIX Operating System CD-ROM, and press Return. You will see messages similar to the following:

    205 204 203 202 201 200 199 198 197 
    Insert media for kit 'dqb0:/540/usr/sys/hardware/base.kit' 
    hit <Return> when ready or 'q' to quit:
    

  12. Remove the Tru64 UNIX Operating System CD-ROM, load the New Hardware Delivery CD-ROM, and press Return. You will see messages similar to the following:

    196 
    Insert boot media, hit <Return> when ready:
    

  13. Remove the New Hardware Delivery CD-ROM, load the Tru64 UNIX Operating System CD-ROM, and press Return. You will see messages similar to the following:

    195 194 193 192 191 190 189 188 187 186 185 184 
    Insert media for kit 'dqb0:/540/usr/sys/hardware/base.kit' 
    hit <Return> when ready or 'q' to quit:
    

  14. Remove the Tru64 UNIX Operating System CD-ROM, load the New Hardware Delivery CD-ROM, and press Return. You will see messages similar to the following:

    183 
    Insert boot media, hit <Return> when ready:
    

  15. Remove the New Hardware Delivery CD-ROM, load the Tru64 UNIX Operating System CD-ROM, and press Return. You will see messages similar to the following:

    182 181 180 179 178 177 176 175 174 173 172 171 170 169 168 167 166
    165 164 163 162 161 160 159 158 157 156 155 154 153 152 151 150 149
    
    .
    .
    .
    50 49 48 47 46 45 44 43 42 41 Insert media for kit 'dqb0:/540/usr/sys/hardware/base.kit' hit <Return> when ready or 'q' to quit:

  16. Remove the Tru64 UNIX Operating System CD-ROM, load the New Hardware Delivery CD-ROM, and press Return. You will see messages similar to the following:

    40 39 38 37 36 35 34 33 32 31 30 29 28 27 26 25 24 23 22 21 20 19 18
    17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 
    Insert boot media, hit <Return> when ready:
    

  17. Remove the New Hardware Delivery CD-ROM, load the Tru64 UNIX Operating System CD-ROM, and press Return. You will see messages similar to the following:

    Sizes: 
    text =  8359680 
    data =  2151232 
    bss  =  2433568 
    Starting at 0xfffffc0000262430
    

  18. You will see the Tru64 UNIX installation screens. Enter host information, select subsets and target disks, and continue the Full Installation process as described in the Installation Guide.

    After the final reboot, the Full Installation process configures the system and reboots the system.

    You will see messages similar to the following:

    UNIX boot - Wednesday, August 01, 2003
     
    Loading /GENERIC ...
    Loading at fffffc0000250000
     
    Enter all Foreign Hardware Kit Names.
    Device Names are entered as console names (e.g. dkb100).
     
    Enter Device Name, or <return> if done:
    

  19. Enter the console device name for the NHD-7 kit, for example: dqb0. You will see the following prompt:

    Enter Hardware Kit Name, or <return>  if done with dqb0:
    

  20. Enter the NHD-7 kit name:

    /540/usr/sys/hardware/base.kit
    

    You will see a prompt similar to the following:

    Insert media for kit 'dqb0:/540/usr/sys/hardware/base.kit'
    hit <return> when ready, or 'q' to quit this kit:
    

  21. Remove the Tru64 UNIX Operating System CD-ROM, load the New Hardware Delivery CD-ROM, and press Return. (You will not have to change media again.)

  22. Press Return. You will see a prompt similar to the following:

    Enter Hardware Kit Name, or <return> if done with dqb0:
    

  23. Because there are no other kits included in NHD-7, press Return. You will see the following prompt:

    Enter Device Name, or <return> if done:
     
    

  24. Again, because there are no other kits to install, press Return. You will see the following prompt:

    Insert boot media, hit <return>  when ready:
     
    

    Note

    Although this prompt asks you to insert the boot media, do not insert the Tru64 UNIX Operating System CD-ROM. At this point in the installation process you are booting from the system disk, and no media change is necessary.

  25. Press Return. You will see a prompt similar to the following:

    Linking 207 objects: 207 
    
    .
    .
    .
    Insert media for kit 'dka400:/540/usr/sys/hardware/base.kit' hit <return> when ready or 'q' to quit:

  26. Press Return. You will see messages similar to the following:

    206
    Insert boot media, hit <return>  when ready:
     
     
    

    Note

    Although this prompt asks you to insert the boot media, do not insert the Tru64 UNIX Operating System CD-ROM. At this point in the installation process you are booting from the system disk, and no media change is necessary.

  27. Press Return. You will see messages similar to the following:

    205 204 203 202 201 200 199 198 197 
    Insert media for kit 'dqa0:/540/usr/sys/hardware/base.kit' 
    hit <Return> when ready or 'q' to quit:
    

  28. Press Return. You will see messages similar to the following:

    196 
    Insert boot media, hit <Return> when ready:
    

  29. Press Return. You will see messages similar to the following:

    195 194 193 192 191 190 189 188 187 186 185 184 
    Insert media for kit 'dqa0:/540/usr/sys/hardware/base.kit' 
    hit <Return> when ready or 'q' to quit:
    

  30. Press Return. You will see messages similar to the following:

    183 
    Insert boot media, hit <Return> when ready:
    

  31. Press Return. You will see messages similar to the following:

    182 181 180 179 178 177 176 175 174 173 172 171 170 169 168 167
          
    .
    .
    .
    61 60 59 58 57 56 55 54 53 52 51 50 49 48 47 46 45 44 43 42 41 Insert media for kit 'dqb0:/540/usr/sys/hardware/base.kit' hit <Return> when ready or 'q' to quit:

  32. Press Return. You will see messages similar to the following:

    Insert boot media, hit <Return> when ready: 
     
    Sizes: 
    text =  8359680 
    data =  2151232 
    bss  =  2433568 
    Starting at 0xfffffc0000262430
    

    You will now see common system configuration messages and the installation of the NHD-7 kit and the Version 5.1B patch kit. When the system boots one more time, the installation is complete.

  33. Log in as root and configure your system from the System Setup Checklist. See the System Setup Checklist online help for more information.

4.2.2.2    Installing from RIS

Before you start this procedure, see the Installation Guide for information about the Full Installation process. You must have both the NHD-7 kit and the Tru64 UNIX Operating System distribution. See Section 1.2 for information about how to get the NHD-7 kit and how to prepare for RIS installation.

  1. If your system is already running a version of the operating system, log in as root and shut down the system.

  2. At the console prompt, type the following:

    >>> set bootdef_dev ""
    

  3. At the console prompt, boot from the RIS server. For example:

    >>> boot ewa0
    

    You will see the operating system boot and the Full Installation user interface start.

  4. Enter host information, select subsets and target disks, and continue the Full Installation process as described in the Installation Guide.

    You may see some differences in the installation procedure when you install NHD-7 from a RIS server.

  5. You will see messages similar to the following as the kernel is rebuilt before the final reboot:

    The system name assigned to your machine is 'sysname'.
    *** KERNEL CONFIGURATION AND BUILD PROCEDURE ***
     
     
           The system will now automatically build a kernel
           with all options and then reboot.  This can take
           up to 15 minutes, depending on the processor type.
     
           When  the login prompt appears after the system
           has rebooted, use 'root' as the  login name and
           the SUPERUSER  password that was entered during
           this procedure, to log into the system.
     
            *** PERFORMING KERNEL BUILD ***
    	Working....Thu Jun 20 14:06:34 EDT 2003
     
    The new version ID has been successfully set on this system.
    The entire set of new functionality has been enabled.
     
    This message is contained in the file /var/adm/smlogs/it.log for
    future reference.syncing disks... done
    rebooting.... (transferring to monitor)
    

    The system reboots with the custom kernel, and you see the login prompt.

  6. Log in as root and configure your system from the System Setup Checklist. See the System Setup Checklist online help for more information.

4.2.3    Installing on a Cluster Running Version 5.1B

Before you install NHD-7 on an existing cluster, see the Rolling Upgrade chapter in the TruCluster Server Cluster Installation manual. You must have the NHD-7 kit distribution, the Tru64 UNIX Operating System CD-ROM, and the Associated Products, Volume 2, CD-ROM, which includes the TruCluster Server software. See Section 1.2 for information about how to get the NHD-7 kit and, if necessary, create an NHD-7 kit CD image or prepare for RIS installation.

Caution

Before you install NHD-7 onto a system that includes SA5300A series RAID controllers, see the release notes in Section 3.5. Failure to follow these instructions can cause your NHD-7 installation to fail.

Perform a Rolling Upgrade as described in the following sections to install NHD-7 on an existing cluster. See the clu_upgrade Quick Reference Best Practice and the Rolling Upgrade chapter in the TruCluster Server Cluster Installation manual for more information.

Figure 4-1 shows a simplified flow chart of the tasks and stages that are part of an NHD Rolling Upgrade.

Figure 4-1:  NHD Rolling Upgrade

4.2.3.1    Preparation Stage

Perform the tasks in the Rolling Upgrade Preparation Stage. See the Rolling Upgrade chapter in the TruCluster Server Cluster Installation manual for more information.

4.2.3.2    Setup Stage

Perform the following steps in the Setup Stage:

  1. Use the clu_upgrade command to start the Setup Stage. For example, if the lead member has member ID 1:

    # clu_upgrade -v setup 1
    

    You will see the following messages:

    Retrieving cluster upgrade status.
     
    This is the cluster upgrade program.
    You have indicated that you want to perform the 'setup' stage of the
    upgrade.
     
    Do you want to continue to upgrade the cluster? [yes]:
     
     
    

  2. Press Return. You will see the following messages:

    What type of rolling upgrade will be performed?
     
        Selection   Type of Upgrade
    ----------------------------------------------------------------------
             1      An upgrade using the installupdate command
             2      A patch using the dupatch command
             3      A new hardware delivery using the nhd_install command
             4      All of the above
             5      None of the above
             6      Help
             7      Display all options again
    ----------------------------------------------------------------------
    Enter your Choices (for example, 1 2 2-3):
    

  3. Enter 2 3 and press Return. You will see the following messages:

    You selected the following rolling upgrade options: 2 3
    Is that correct? (y/n) [y]:
    

  4. Enter y and press Return. You will see the following messages:

    Enter the full pathname of the patch kit kit mount point ['???']:
     
    

  5. Enter the patch kit mount point, /mnt/540/Tru64UNIX_Patches, and press Return. You will see the following messages:

    A patch kit has been found in the following location:
     
    /mnt/nnn/Tru64UNIX_Patches
     
    This kit has the following version information:
     
    '"Tru64 UNIX V5.1x Patch Distribution"
    (T64V51BB24AS0003-20030627PatchTools540,27-Jun-2003:16:04:41)'
     
    Is this the correct nhd kit for the update being performed? [yes]:
     
     
    

  6. Press Return. You will see the following message:

    Enter the full pathname of the nhd kit mount point ['???']:
    

  7. Enter the NHD kit mount point, /mnt, and press Return. You will see the following messages:

    A nhd kit has been found in the following location:
     
    /mnt
     
    This kit has the following version information:
     
    'Tru64 UNIX New Hardware for V5.1x'
     
    Is this the correct nhd kit for the update being performed? [yes]:
     
     
    

  8. Enter yes and press Return. You will see the following messages:

    Checking inventory and available disk space.
    Marking stage 'setup' as 'started'.
    Copying NHD kit '/mnt' to '/var/adm/update/NHDKit/'.
    nhd_install -copy 540 /var/adm/update/NHDKit/
     
    Creating tagged files.
    ......
    The cluster upgrade 'setup' stage has completed successfully.
    Reboot all cluster members except member: '1'
    Marking stage 'setup' as 'completed'.
     
    The 'setup' stage of the upgrade has completed successfully.
    

    Note

    You may see the following message during this step:

    clubase: Entry not found in /cluster/admin/tmp/stanza.stdin.530756
     
    

    This is a known error and can be ignored.

  9. Reboot all your cluster members except the lead member. See the Rolling Upgrade chapter in the TruCluster Server Cluster Installation manual for more information.

4.2.3.3    Preinstall Stage

Perform the following steps in the Preinstall Stage:

  1. Use the clu_upgrade command to start the Preinstall Stage:

    # clu_upgrade -v preinstall
    

    You will see the following messages:

    Retrieving cluster upgrade status.
     
    This is the cluster upgrade program.
    You have indicated that you want to perform the 'preinstall' stage of the
    upgrade.
     
    Do you want to continue to upgrade the cluster? [yes]:
     
     
    

  2. Enter yes and press Return. You will see the following messages:

    clu_upgrade has previously created the required tagged files and would
    normally check and repair any tagged files which may have been modified
    since they where created. If you feel that the tagged files have not changed
    since they where created you may bypass these checks and continue with the
    rolling upgrade.
     
    Do you wish to skip tag file checking? [no]:
    

  3. If you perform the Preinstall Stage immediately after the Setup Stage, you can skip tagged file checking; enter yes at the prompt. If time has elapsed between the Setup Stage and the Preinstall Stage, enter no to check the tagged files.

    Note

    You may see the following message during this step:

    . find: bad starting directory
     
    

    This is a known error and can be ignored.

4.2.3.4    Install Stage

Perform the following steps in the Install Stage:

  1. Make sure that the NHD-7 distribution is still mounted.

  2. Change directory to the mounted NHD-7 kit. For example:

    # cd /mnt
    

  3. Use the nhd_install script to install the NHD-7 kit on the lead member:

    # ./nhd_install
    

  4. Answer yes to the following question and press Return:

    NHD installation also requires installing patches
    Would you like to install the NHD Kit, Tru64_UNIX_V5.1B and 
    TruCluster_V5.1B Patches: (y/n)[y]: y [Return]
    

  5. At the shell prompt, shut down the system:

    # shutdown -h now
    

  6. At the console prompt, boot the generic kernel. For example:

    >>> boot -fi genvmunix dqb0
    

  7. After the system boots, log in as root.

  8. At the shell prompt, use the doconfig utility to rebuild the custom kernel:

    # doconfig
    

    You will see messages similar to the following:

    *** KERNEL CONFIGURATION AND BUILD PROCEDURE ***
     
    Enter a name for the kernel configuration file. [SYSNAME]:
     
    

  9. Press Return to accept the default. You will see messages similar to the following:

    A configuration file with the name 'SYSNAME' already exists.
    Do you want to replace it? (y/n) [n]:
    

  10. Enter y and press Return. You will see messages similar to the following:

    Saving /sys/conf/SYSNAME as /sys/conf/SYSNAME.bck
     
    *** KERNEL OPTION SELECTION ***
     
    Selection   Kernel Option
    --------------------------------------------------------------
    1       System V Devices
    2       NTP V3 Kernel Phase Lock Loop (NTP_TIME)
    3       Kernel Breakpoint Debugger (KDEBUG)
    4       Packetfilter driver (PACKETFILTER)
    5       IP-in-IP Tunneling (IPTUNNEL)
    6       IP Version 6 (IPV6)
    7       Point-to-Point Protocol (PPP)
    8       STREAMS pckt module (PCKT)
    9       X/Open Transport Interface (XTISO, TIMOD, TIRDWR)
    10      Digital Versatile Disk File System (DVDFS)
    11      ISO 9660 Compact Disc File System (CDFS)
    12      Audit Subsystem
    13      ATM UNI 3.0/3.1 ILMI (ATMILMI3X)
    14      IP Switching over ATM (ATMIFMP)
    15      LAN Emulation over ATM (LANE)
    16      Classical IP over ATM (ATMIP)
    17      ATM UNI 3.0/3.1 Signalling for SVCs (UNI3X)
    18      Asynchronous Transfer Mode (ATM)
    19      All of the above
    20      None of the above
    21      Help
    22      Display all options again
    --------------------------------------------------------------
     
    Enter your choices.
     
    Choices (for example, 1 2 4-6) [20]:
    

  11. Select the kernel options you want built into your new custom kernel. Include the same options you were already running on your system. In this example, if you want to select all listed kernel options, enter 19 and press Return.

    You will see messages similar to the following:

    You selected the following kernel options:
    System V Devices
    NTP V3 Kernel Phase Lock Loop (NTP_TIME)
    Kernel Breakpoint Debugger (KDEBUG)
    Packetfilter driver (PACKETFILTER)
    IP-in-IP Tunneling (IPTUNNEL)
    IP Version 6 (IPV6)
    Point-to-Point Protocol (PPP)
    STREAMS pckt module (PCKT)
    X/Open Transport Interface (XTISO, TIMOD, TIRDWR)
    Digital Versatile Disk File System (DVDFS)
    ISO 9660 Compact Disc File System (CDFS)
    Audit Subsystem
    ATM UNI 3.0/3.1 ILMI (ATMILMI3X)
    IP Switching over ATM (ATMIFMP)
    LAN Emulation over ATM (LANE)
    Classical IP over ATM (ATMIP)
    ATM UNI 3.0/3.1 Signalling for SVCs (UNI3X)
    Asynchronous Transfer Mode (ATM)
     
    Is that correct? (y/n) [y]:
    

  12. Enter y to confirm your selection and press Return. You will see the following prompt:

    Do you want to edit the configuration file? (y/n) [n]: 
     
     
    

  13. Enter n and press Return. You will see messages similar to the following:

    *** PERFORMING KERNEL BUILD ***
     
    A log file listing special device files is located in /dev/MAKEDEV.log
    Working....Thu Jun 20 14:59:36 EDT 2003
    Working....Thu Jun 20 15:01:53 EDT 2003
    Working....Thu Jun 20 15:05:32 EDT 2003
     
    The new kernel is /sys/SYSNAME/vmunix
    

  14. Copy the new custom kernel to the member-specific directory on the lead member. In the following example, replace N with the lead member's member ID:

    # cp /sys/SYSNAME/vmunix /cluster/members/memberN/boot_partition
    

  15. Shut down the lead member:

    # shutdown -h now
    

  16. At the console prompt, boot the lead member with the new custom kernel. For example:

    >>> boot -fi "vmunix" dqb0
    

  17. Log in as root on the lead member.

  18. Use the clu_upgrade command to check the installation status:

    # clu_upgrade -v
     
     
    

    You will see messages similar to the following:

    Retrieving cluster upgrade status.
                 Upgrade Status        
     
    Stage        Status                Date
     
    setup        started:              Thu Jun 20 16:50:43 EDT 2003
                 lead member:          1
                 nhd kit source:       /mnt
                 completed:            Thu Jun 20 16:52:34 EDT 2003
     
    preinstall   started:              Thu Jun 20 16:54:46 EDT 2003
                 completed:            Thu Jun 20 16:55:16 EDT 2003
     
    nhd          started:              Thu Jun 20 16:55:57 EDT 2003
                 completed:            Thu Jun 20 16:57:42 EDT 2003
     
                 Member Status                       Tagged File Status
      ID Hostname                 State Rolled    Running with   On Next Boot
     
       1 member01.site.place.net  UP    Yes       No             No
     
    

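The Install Stage steps above can be summarized as the following dry-run sketch. It only prints the command sequence rather than executing it; the function name is illustrative, and SYSNAME, memberN, and the boot device dqb0 are the example values used in the steps above.

```shell
# Dry-run sketch of the Install Stage on the lead member.
# Nothing here runs the real commands; each line is printed for review.
install_stage_flow() {
    echo "cd /mnt                               # mounted NHD-7 kit"
    echo "./nhd_install                         # install the kit and patches"
    echo "shutdown -h now"
    echo ">>> boot -fi genvmunix dqb0           # console: boot the generic kernel"
    echo "doconfig                              # rebuild the custom kernel"
    echo "cp /sys/SYSNAME/vmunix /cluster/members/memberN/boot_partition"
    echo "shutdown -h now"
    echo ">>> boot -fi vmunix dqb0              # console: boot the custom kernel"
}

install_stage_flow
```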
4.2.3.5    Postinstall Stage

Perform the following steps in the Postinstall Stage:

  1. On the lead member, use the clu_upgrade command to start the Postinstall Stage:

    # clu_upgrade -v postinstall
    

    You will see the following messages:

    Retrieving cluster upgrade status.
     
    This is the cluster upgrade program.
    You have indicated that you want to perform the 'postinstall' stage of the
    upgrade.
     
    Do you want to continue to upgrade the cluster? [yes]:
    

  2. Enter yes and press Return. You will see the following messages:

    Marking stage 'postinstall' as 'started'.
    Marking stage 'postinstall' as 'completed'.
     
    The 'postinstall' stage of the upgrade has completed successfully.
    

  3. Use the clu_upgrade command to check the installation status:

    # clu_upgrade -v
     
     
    

    You will see messages similar to the following:

    Retrieving cluster upgrade status.
                 Upgrade Status
     
    Stage        Status                Date
     
    setup        started:              Thu Jun 20 16:50:43 EDT 2003
                 lead member:          1
                 nhd kit source:       /mnt
                 completed:            Thu Jun 20 16:52:34 EDT 2003
     
    preinstall   started:              Thu Jun 20 16:54:46 EDT 2003
                 completed:            Thu Jun 20 16:55:16 EDT 2003
     
    nhd          started:              Thu Jun 20 16:55:57 EDT 2003
                 completed:            Thu Jun 20 16:57:42 EDT 2003
     
    postinstall  started:              Thu Jun 20 16:58:28 EDT 2003
                 completed:            Thu Jun 20 16:58:28 EDT 2003
     
    roll         started:              Thu Jun 20 16:58:29 EDT 2003
                 members rolled:       1
                 completed:            Thu Jun 20 16:58:29 EDT 2003
     
                 Member Status                       Tagged File Status
      ID Hostname                 State Rolled    Running with   On Next Boot
     
       1 member01.site.place.net  UP    Yes       No             No
      10 member10.site.place.net  UP    No        Yes            Yes
     
    

4.2.3.6    Roll Stage

Before running the Roll Stage, see the Rolling Upgrade chapter in the TruCluster Server Cluster Installation manual.

Perform the following steps for each additional cluster member:

  1. Log in to the cluster member as root.

  2. Shut down the cluster member:

    # shutdown -h now
    

  3. At the console prompt, boot the cluster member to single-user mode:

    >>> boot -fl s
    

  4. Use the init s command to initialize process control:

    # init s
    

  5. Use the bcheckrc command to mount and check local file systems:

    # bcheckrc
    

    You will see output similar to the following:

    Checking device naming:
        Passed.
    CNX QDISK: Successfully claimed quorum disk, adding 1 vote.
    Checking local filesystems
    Mounting / (root)
    user_cfg_pt: reconfigured
    root_mounted_rw: reconfigured
    Mounting /cluster/members/member57/boot_partition (boot file system)
    user_cfg_pt: reconfigured
    root_mounted_rw: reconfigured
    user_cfg_pt: reconfigured
    dsfmgr: NOTE: updating kernel basenames for system at /
        scp kevm tty00 tty01 lp0 dsk3 dsk4 dsk5 dsk6 dsk7 dsk8 floppy1 cdrom1 dmapi
    Mounting local filesystems
    exec: /sbin/mount_advfs -F 0x14000 cluster_root#root /
    cluster_root#root on / type advfs (rw)
    exec: /sbin/mount_advfs -F 0x4000 cluster_usr#usr /usr
    cluster_usr#usr on /usr: Device busy
    exec: /sbin/mount_advfs -F 0x4000 cluster_var#var /var
    cluster_var#var on /var: Device busy
    /proc on /proc type procfs (rw)
    

  6. Use the lmf reset command to copy license information into the kernel cache:

    # lmf reset
    

  7. Use the clu_upgrade command to start the Roll Stage:

    # clu_upgrade -v roll
    

    You will see messages similar to the following:

    This is the cluster upgrade program.
    You have indicated that you want to perform the 'roll' stage of the
    upgrade.
     
    Do you want to continue to upgrade the cluster? [yes]:
    

  8. Enter yes and press Return.

    Note

    You may see the following message during this step:

    clubase: Entry not found in /cluster/admin/tmp/stanza.stdin.530756
     
    

    This is a known error and can be ignored.

    You also may see messages similar to the following:

    *** Warning ***
    The cluster upgrade command was unable to find or verify the configuration
    file used to build this member's kernel. clu_upgrade attempts to make a
    backup copy of the configuration file which it would restore as required
    during a clu_upgrade undo command. To use the default configuration file
    or to continue without backing up a configuration file hit return.
    Enter the name of the configuration file for this member [SYSNAME]:
     
     
    

    Press Return to use SYSNAME as the configuration file name.

    You will see messages similar to the following:

    Backing up member-specific data for member: 10
     
    The 'roll' stage has completed successfully.  This
    member must be rebooted in order to run with the newly installed software.
    Do you want to reboot this member at this time? []:
    

  9. Enter y and press Return. You will see the following message:

    You indicated that you want to reboot this member at this time.
    Is that correct? [yes]:
    

  10. Enter y and press Return. You will see messages similar to the following:

    The 'roll' stage of the upgrade has completed successfully.
    Terminated
    # syncing disks... done
    drd: Clean Shutdown
    rebooting.... (transferring to monitor)
     
     
    

    The cluster member reboots and reconfigures.

  11. Use the clu_upgrade command to check the installation status:

    # clu_upgrade -v
     
     
    

    You will see messages similar to the following:

    Retrieving cluster upgrade status.
                 Upgrade Status
     
    Stage        Status                Date
     
    setup        started:              Thu Jun 20 16:40:07 EDT 2003
                 lead member:          1
                 nhd kit source:       /mnt
                 tagged files list:    /cluster/admin/clu_upgrade/tag_files.list
                 tagged files missing: /cluster/admin/clu_upgrade/tag_files.miss
                 completed:            Thu Jun 20 16:42:48 EDT 2003
     
    preinstall   started:              Thu Jun 20 16:51:09 EDT 2003
                 completed:            Thu Jun 20 16:52:32 EDT 2003
     
    nhd          started:              Thu Jun 20 16:54:49 EDT 2003
                 completed:            Thu Jun 20 16:58:08 EDT 2003
     
    postinstall  started:              Thu Jun 20 17:18:12 EDT 2003
                 completed:            Thu Jun 20 17:18:12 EDT 2003
     
    roll         started:              Thu Jun 20 17:22:24 EDT 2003
                 members rolled:       1 10
                 completed:            Thu Jun 20 17:32:42 EDT 2003
     
                 Member Status                       Tagged File Status
      ID Hostname                 State Rolled    Running with   On Next Boot
     
       1 member01.site.place.net  UP    Yes       No             No
      10 member10.site.place.net  UP    Yes       No             No
     
    

Repeat this process for each remaining cluster member.
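The per-member Roll Stage sequence above can be sketched as a dry-run loop. The member IDs 10 and 12 are hypothetical examples, and the loop only prints each command for review; it does not shut down or upgrade anything.

```shell
# Dry-run sketch: print the Roll Stage command sequence for each
# remaining cluster member. Member IDs here are hypothetical.
roll_member() {
    m=$1
    echo "member ${m}: shutdown -h now"
    echo "member ${m}: >>> boot -fl s        # console: single-user mode"
    echo "member ${m}: init s                # initialize process control"
    echo "member ${m}: bcheckrc              # mount and check local file systems"
    echo "member ${m}: lmf reset             # copy license data into the kernel cache"
    echo "member ${m}: clu_upgrade -v roll"
    echo "member ${m}: reboot when prompted"
}

for m in 10 12; do
    roll_member "$m"
done
```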

4.2.3.7    Switch Stage

Perform the following steps in the Switch Stage:

  1. After the Roll Stage is complete, use the clu_upgrade command to start the Switch Stage on any cluster member:

    # clu_upgrade -v switch
    

    You will see the following messages:

    Retrieving cluster upgrade status.
     
    This is the cluster upgrade program.
    You have indicated that you want to perform the 'switch' stage of the
    upgrade.
     
    Do you want to continue to upgrade the cluster? [yes]:
    

  2. Enter yes and press Return. You will see the following messages:

    Initiating version switch on cluster members
    .Marking stage 'switch' as 'started'.
    Switch already switched
     
    Marking stage 'switch' as 'completed'.
    The cluster upgrade 'switch' stage has completed successfully.
    All cluster members must be rebooted before running the 'clean' command.
     
    The 'switch' stage of the upgrade has completed successfully.
     
     
    

  3. After you complete the Switch Stage, reboot all cluster members. After each member reboots, you see the login prompt.

  4. Log in to the system as root.

  5. Use the clu_upgrade command to check the installation status:

    # clu_upgrade -v
     
     
    

    You will see messages similar to the following:

    Retrieving cluster upgrade status.
                 Upgrade Status
     
    Stage        Status                Date
     
    setup        started:              Thu Jun 20 16:40:07 EDT 2003
                 lead member:          1
                 nhd kit source:       /mnt
                 tagged files list:    /cluster/admin/clu_upgrade/tag_files.list
                 tagged files missing: /cluster/admin/clu_upgrade/tag_files.miss
                 completed:            Thu Jun 20 16:42:48 EDT 2003
     
    preinstall   started:              Thu Jun 20 16:51:09 EDT 2003
                 completed:            Thu Jun 20 16:52:32 EDT 2003
     
    nhd          started:              Thu Jun 20 16:54:49 EDT 2003
                 completed:            Thu Jun 20 16:58:08 EDT 2003
     
    postinstall  started:              Thu Jun 20 17:18:12 EDT 2003
                 completed:            Thu Jun 20 17:18:12 EDT 2003
     
    roll         started:              Thu Jun 20 17:22:24 EDT 2003
                 members rolled:       1 10
                 completed:            Thu Jun 20 17:32:42 EDT 2003
     
    switch       started:              Thu Jun 20 16:37:50 EDT 2003
                 completed:            Thu Jun 20 16:38:20 EDT 2003
     
                 Member Status                       Tagged File Status
      ID Hostname                 State Rolled    Running with   On Next Boot
     
       1 member01.site.place.net  UP    Yes       No             No
     
     
    

4.2.3.8    Clean Stage

Perform the following steps in the Clean Stage:

  1. After the Switch Stage is complete, use the clu_upgrade command to start the Clean Stage on any cluster member:

    # clu_upgrade -v clean
    

    You will see the following messages:

    Retrieving cluster upgrade status.
     
    This is the cluster upgrade program.
    You have indicated that you want to perform the 'clean' stage of the
    upgrade.
     
    Do you want to continue to upgrade the cluster? [yes]:
     
     
    

  2. Enter yes and press Return. You will see the following messages:

    .Marking stage 'clean' as 'started'.
     
    Deleting tagged files.
    ....
    Removing back-up and kit files
     
    Marking stage 'clean' as 'completed'.
     
    The 'clean' stage of the upgrade has completed successfully.
    

  3. Use the clu_upgrade command to check the installation status:

    # clu_upgrade -v
     
     
    

    You will see messages similar to the following:

    Retrieving cluster upgrade status.
    There is currently no cluster upgrade in progress.
     
    The last cluster upgrade completed successfully on:
      Thu Jun 20 17:05:25 EDT 2003
    History for this upgrade can be found in the directory:
      /cluster/admin/clu_upgrade/history/Compaq.Tru64.UNIX.V5.1x.Rev.1885-1
    

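The rolling-upgrade procedure in Section 4.2.3 always moves through its stages in a fixed order, which clu_upgrade itself enforces. As a reminder only (note that the clu_upgrade status display reports the Install Stage as "nhd"), the order can be sketched as:

```shell
# Dry-run summary of the rolling-upgrade stage order in Section 4.2.3.
# clu_upgrade enforces this order itself; this list is only a reminder.
upgrade_stages() {
    for s in setup preinstall install postinstall roll switch clean; do
        echo "$s"
    done
}

upgrade_stages
```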
4.2.4    Installing on a Cluster During a Full Installation of Version 5.1B

Before you start this procedure, see the TruCluster Server Cluster Installation manual for information about creating a cluster. You must have the NHD-7 kit distribution, the Version 5.1B Tru64 UNIX Operating System CD-ROM, and the Associated Products, Volume 2, CD-ROM, which includes the Version 5.1B TruCluster Server software.

Caution

Before you install NHD-7 onto a system that includes SA5300A series RAID controllers, see the release notes in Section 3.5. Failure to follow these instructions can cause your NHD-7 installation to fail.

Follow these steps to install NHD-7 on a new cluster during a Full Installation:

  1. Install NHD-7 during a Full Installation on the system that will be the first cluster member, as described in Section 4.2.2.

  2. Load the Version 5.1B Associated Products, Volume 2, CD-ROM into the CD-ROM drive.

  3. Mount the Associated Products, Volume 2, CD-ROM. For example:

    # mount /dev/disk/cdrom0a /mnt
    

  4. Use the setld -l command to load the TruCluster Server software:

    # setld -l /mnt/TruCluster/kit
    

    You will see output similar to the following:

    *** Enter subset selections ***
     
    The following subsets are mandatory and will be installed automatically
    unless you choose to exit without installing any subsets:
     
          * TruCluster Base Components
     
    The subsets listed below are optional:
     
         There may be more optional subsets than can be presented on a single
         screen. If this is the case, you can choose subsets screen by screen
         or all at once on the last screen. All of the choices you make will
         be collected for your confirmation before any subsets are installed.
     
     - TruCluster(TM) Software :
         1) TruCluster Migration Components
         2) TruCluster Reference Pages
     
    Estimated free diskspace(MB) in root:269.2 usr:18175.4 var:18665.0
     
    Choices (for example, 1 2 4-6):
     
    Or you may choose one of the following options:
     
         3) ALL mandatory and all optional subsets
         4) MANDATORY subsets only
         5) CANCEL selections and redisplay menus
         6) EXIT without installing any subsets
     
    Estimated free diskspace(MB) in root:269.2 usr:18175.4 var:18665.0
     
    Enter your choices or press RETURN to redisplay menus.
     
    Choices (for example, 1 2 4-6):
    

  5. Enter 3 to select all mandatory and optional subsets. You will see output similar to the following:

    You are installing the following mandatory subsets:
     
            TruCluster Base Components
     
    You are installing the following optional subsets:
     
     - TruCluster(TM) Software :
            TruCluster Migration Components
            TruCluster Reference Pages
     
    Estimated free diskspace(MB) in root:269.2 usr:18173.6 var:18665.0
     
    Is this correct? (y/n):
    

  6. Enter y to confirm your selection. You will see output similar to the following:

    Checking file system space required to install selected subsets:
     
    File system space checked OK.
     
    3 subsets will be installed.
     
    Loading subset 1 of 3 ...
     
    TruCluster Migration Components
       Copying from /mnt/TruCluster/kit (disk)
       Verifying
     
    Loading subset 2 of 3 ...
     
    TruCluster Reference Pages
       Copying from /mnt/TruCluster/kit (disk)
       Verifying
     
    Loading subset 3 of 3 ...
     
    TruCluster Base Components
       Copying from /mnt/TruCluster/kit (disk)
       Verifying
     
    3 of 3 subsets installed successfully.
     
    Configuring "TruCluster Migration Components" (TCRMIGRATE540)
     
    Configuring "TruCluster Reference Pages" (TCRMAN540)
    Running : /usr/lbin/mkwhatis : in the background...
     
    Configuring "TruCluster Base Components" (TCRBASE540)
     
    Use /usr/sbin/clu_create to create a cluster.
    

  7. Change to the root directory and unmount the Associated Products, Volume 2, CD-ROM:

    # cd /
    # umount /mnt
    

  8. Remove the Associated Products, Volume 2, CD-ROM and load the New Hardware Delivery CD-ROM.

  9. Mount the NHD-7 kit. For example:

    # mount /dev/disk/cdrom0a /mnt
    

  10. Change directory to the mounted NHD-7 kit:

    # cd /mnt
    

  11. Enter the following command to install the NHD cluster kit:

    # ./nhd_install
    

    Note

    The nhd_install script checks your system before installing the NHD cluster kit. You do not have to use the -install_cluster argument.

    You will see output similar to the following:

    Checking file system space required to install specified subsets:
     
    File system space checked OK.
     
    1 subsets will be installed.
     
    Loading subset 1 of 1 ...
     
    New Hardware TruCluster(TM) Support V6.0
       Copying from /mnt/540/kit (disk)
            Working....Thu Jun 20 18:16:41 EDT 2003
       Verifying
     
    1 of 1 subsets installed successfully.
     
    Configuring "New Hardware TruCluster(TM) Support V6.0" (OSHTCRBASE540)
     
    The installation of the New Hardware TruCluster(TM) Support V6.0 (OSHTCRBASE540)
    software subset is complete.
    

  12. After installing the NHD cluster kit, use the clu_create command to create a single-member cluster, as described in the TruCluster Server Cluster Installation manual.

  13. Add additional cluster members as needed. See the TruCluster Server Cluster Installation manual for more information.
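The Full Installation flow for a new cluster can be summarized as the following dry-run sketch. The function name is illustrative, the device name cdrom0a is the example used in the steps above, and each line is only printed, not executed.

```shell
# Dry-run sketch of the Section 4.2.4 flow: load TruCluster software,
# install the NHD cluster kit, then create the cluster.
full_install_flow() {
    echo "mount /dev/disk/cdrom0a /mnt     # Associated Products, Volume 2"
    echo "setld -l /mnt/TruCluster/kit     # load TruCluster software"
    echo "cd / ; umount /mnt               # unmount, swap CD-ROMs"
    echo "mount /dev/disk/cdrom0a /mnt     # NHD-7 kit"
    echo "cd /mnt ; ./nhd_install          # install the NHD cluster kit"
    echo "clu_create                       # create a single-member cluster"
}

full_install_flow
```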

4.2.5    Rebuilding the Kernel After Adding Supported Hardware

The preceding instructions tell you to install the supported hardware before you install the NHD-7 kit. There may be circumstances where you must add supported hardware after NHD-7 is already installed on your system. For example, you may add a Smart Array 5304 RAID controller to an existing AlphaServer DS25 system.

Follow these instructions to include support for the new hardware in your custom kernel on either the single system or the cluster member where you install the new hardware:

  1. At the shell prompt, shut down the system:

    # shutdown -h now
    

  2. Make sure that the value of the auto_action console variable is set to halt:

    >>> set auto_action halt
    

  3. Power down the system, install the new hardware, and power up the system.

  4. At the console prompt, boot the generic kernel:

    >>> boot -fi genvmunix dqb0
    

  5. After the system boots, log in as root.

  6. At the shell prompt, use the doconfig utility to rebuild the custom kernel:

    # doconfig
    

    You will see messages similar to the following:

    *** KERNEL CONFIGURATION AND BUILD PROCEDURE ***
     
    Enter a name for the kernel configuration file. [SYSNAME]:
     
    

  7. Press Return to accept the default. You will see messages similar to the following:

    A configuration file with the name 'SYSNAME' already exists.
    Do you want to replace it? (y/n) [n]:
    

  8. Enter y and press Return. You will see messages similar to the following:

    Saving /sys/conf/SYSNAME as /sys/conf/SYSNAME.bck
     
    *** KERNEL OPTION SELECTION ***
     
    Selection   Kernel Option
    --------------------------------------------------------------
    1       System V Devices
    2       NTP V3 Kernel Phase Lock Loop (NTP_TIME)
    3       Kernel Breakpoint Debugger (KDEBUG)
    4       Packetfilter driver (PACKETFILTER)
    5       IP-in-IP Tunneling (IPTUNNEL)
    6       IP Version 6 (IPV6)
    7       Point-to-Point Protocol (PPP)
    8       STREAMS pckt module (PCKT)
    9       X/Open Transport Interface (XTISO, TIMOD, TIRDWR)
    10      Digital Versatile Disk File System (DVDFS)
    11      ISO 9660 Compact Disc File System (CDFS)
    12      Audit Subsystem
    13      ATM UNI 3.0/3.1 ILMI (ATMILMI3X)
    14      IP Switching over ATM (ATMIFMP)
    15      LAN Emulation over ATM (LANE)
    16      Classical IP over ATM (ATMIP)
    17      ATM UNI 3.0/3.1 Signalling for SVCs (UNI3X)
    18      Asynchronous Transfer Mode (ATM)
    19      All of the above
    20      None of the above
    21      Help
    22      Display all options again
    --------------------------------------------------------------
     
    Enter your choices.
     
    Choices (for example, 1 2 4-6) [20]:
    

  9. Select the kernel options you want built into your new custom kernel. Include the same options you were already running on your system. In this example, if you want to select all listed kernel options, enter 19 and press Return.

    You will see messages similar to the following:

    You selected the following kernel options:
    System V Devices
    NTP V3 Kernel Phase Lock Loop (NTP_TIME)
    Kernel Breakpoint Debugger (KDEBUG)
    Packetfilter driver (PACKETFILTER)
    IP-in-IP Tunneling (IPTUNNEL)
    IP Version 6 (IPV6)
    Point-to-Point Protocol (PPP)
    STREAMS pckt module (PCKT)
    X/Open Transport Interface (XTISO, TIMOD, TIRDWR)
    Digital Versatile Disk File System (DVDFS)
    ISO 9660 Compact Disc File System (CDFS)
    Audit Subsystem
    ATM UNI 3.0/3.1 ILMI (ATMILMI3X)
    IP Switching over ATM (ATMIFMP)
    LAN Emulation over ATM (LANE)
    Classical IP over ATM (ATMIP)
    ATM UNI 3.0/3.1 Signalling for SVCs (UNI3X)
    Asynchronous Transfer Mode (ATM)
     
    Is that correct? (y/n) [y]:
    

  10. Enter y to confirm your selection and press Return. You will see the following prompt:

    Do you want to edit the configuration file? (y/n) [n]: 
     
     
    

  11. Enter n and press Return. You will see messages similar to the following:

    *** PERFORMING KERNEL BUILD ***
     
    A log file listing special device files is located in /dev/MAKEDEV.log
    Working....Thu Jun 20 14:59:36 EDT 2003
    Working....Thu Jun 20 15:01:53 EDT 2003
    Working....Thu Jun 20 15:05:32 EDT 2003
     
    The new kernel is /sys/SYSNAME/vmunix
    

  12. Copy the new custom kernel. On a single system, copy it to /vmunix; on a cluster member, copy it to the member-specific boot partition (see Section 4.2.3.4).

  13. Shut down your system:

    # shutdown -h now
    

  14. At the console prompt, boot the system with the new custom kernel. For example:

    >>> boot -fi "vmunix" dqb0
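The kernel rebuild in Section 4.2.5 can be summarized as the following dry-run sketch. It prints the console and shell commands in order without running them; SYSNAME and the boot device dqb0 are the example values from the steps above.

```shell
# Dry-run sketch of the Section 4.2.5 rebuild after adding hardware.
# Lines beginning with ">>>" are console commands; the rest are shell.
rebuild_flow() {
    echo "shutdown -h now"
    echo ">>> set auto_action halt"
    echo "# power down, install the new hardware, power up"
    echo ">>> boot -fi genvmunix dqb0   # boot the generic kernel"
    echo "doconfig                      # rebuild the custom kernel"
    echo "# copy /sys/SYSNAME/vmunix to /vmunix (single system)"
    echo "# or to the member boot partition (cluster member)"
    echo "shutdown -h now"
    echo ">>> boot -fi vmunix dqb0      # boot the new custom kernel"
}

rebuild_flow
```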