
HP Tru64 UNIX and TruCluster Server Version 5.1B-6


Important Kit Installation and Removal Release Notes

The release notes in this section provide information you need to be aware of when installing or removing this kit. Notes indicated as “(new)” do not appear in the release notes that shipped with prior 5.1B versions. Notes indicated as “(revised)” were included in the previous kit but revised for this release.

Also be aware of any Special Instructions that may appear on screen when running the dupatch program.


Installation Release Notes

The following notes provide important information you need to be aware of before installing the Version 5.1B-6 kit.

Tru64 UNIX NHD-7 Installation Requires a Serial Console Connection

HP recommends the following procedure for installing the DS-A5134-AA in an AlphaServer GS1280 system:

  1. Install Tru64 UNIX NHD-7. This kit must be installed using a serial console connection.

  2. Install and configure the DS-A5134-AA host bus adapter (HBA) and/or reconfigure boot disk partitions. This step can be done using either a graphics console or a serial console connection.

Presence of Some Insight Management Agents Kits May Require Additional Steps (revised)

The following installation-related release notes pertain to the Insight Management Agents.

It is strongly recommended that any existing version of the Tru64 UNIX Insight Management Agents kit (a CPQIMxxx kit, where xxx is 310, 320, 370, and so on) be uninstalled before updating to Version 5.1B-5 or higher, and that the CPQIM370 kit with the latest available CPQIM370 patches be reinstalled after the update. The CPQIM kit is available at: http://h30097.www3.hp.com/cma/

The CPQIM patches are available at:

http://h30097.www3.hp.com/cma/patches.html
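
Before uninstalling, you can check which Insight Management Agents subsets are currently installed by querying the setld subset inventory; a minimal check:

# setld -i | grep CPQIM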

NOTE: See the section “Insight Manager Components DUMP Core” for information about a potential problem with Insight Management Agents that can occur after Version 5.1B-5 or higher is installed.

Some Insight Management Agents Kits May Prevent V5.1B-5 or Higher Installation (revised)

Under certain conditions, you will be prevented from installing Version 5.1B-5 or higher if you are running HP Tru64 UNIX Insight Management Agents Version 3.1 or higher, or if a version of the kit was previously installed. Those conditions are as follows:

  • Your system contains a patch kit earlier than Version 5.1B-5 and the Insight Management Agents kit.

    In this case, upgrading to this kit gives the following error message:

    Patch 29020.00 - SP07 OSFCLINET540 (SSRT5971 SSRT3653 SSRT2384 ...)
    ./sbin/init.d/snmpd: its origin can not be identified.
    This patch will not be installed.
  • Your system contains Patch Kit 2, Patch Kit 3, or Patch Kit 4 and the Insight Management Agents kit was once installed but has since been removed.

    In this case, upgrading to Version 5.1B-5 or higher gives the following error message:

    Patch 29020.00 - SP07 OSFCLINET540 (SSRT5971 SSRT3653 SSRT2384 ...)
    ./etc/pmgrd_iorate.config: does not exist on your system,
    however, it is in the inventory of installed subsets.
    This patch will not be installed.

To work around this problem, you will need to run the dupatch baseline process before installing Version 5.1B-5 or higher. The following steps will guide you through the process:

  1. Create a backup copy of the /sbin/init.d/snmpd script. For example:

    # cp /sbin/init.d/snmpd /tmp 

    An alternative to backing up this file is to modify it manually after the installation; that procedure is provided following step 7.

  2. Run the Version 5.1B-5 or higher dupatch utility and select Option 5, Patch Baseline Analysis/Adjustment. For detailed instructions, see the Patch Kit Installation Instructions.

  3. After Phase 5 of the baseline procedure, answer y to the following question:

     Do you want to enable the installation of any of
    these patches? [y/n]: y

    Phase 5 reports patches that do not pass installation applicability tests due to the current state of your system. The installation of Patch 29020.00 was prevented because of changed system files. The dupatch utility reports the known information about the files contained in each patch and asks if you want to enable the installation. Answering yes enables dupatch to install patches that were prevented from being installed because of unrecognized files.

  4. Install Version 5.1B-5 or higher.

  5. After the system is running with 5.1B-5 or higher installed, stop the snmpd and insightd daemons as follows:

    # /sbin/init.d/snmpd stop
    # /sbin/init.d/insightd stop
  6. Replace the /sbin/init.d/snmpd script with the one you copied in step 1; for example:

    # cp /tmp/snmpd /sbin/init.d/snmpd
  7. Start the snmpd and insightd daemons as follows:

    # /sbin/init.d/snmpd start
    # /sbin/init.d/insightd start

If you did not back up the /sbin/init.d/snmpd file in step 1, you can modify it after you install Version 5.1B-5 or higher (step 4) and stop the snmpd and insightd daemons (step 5) as follows (the XXX represents the revision, such as CPQIM360):

  1. Edit the line that reads CPQMIBS=/usr/sbin/cpq_mibs as follows:

    CPQMIBS=/var/opt/CPQIMXXX/bin/cpq_mibs
  2. Edit the line that reads PMGRD=/usr/sbin/pmgrd as follows:

    PMGRD=/var/opt/CPQIMXXX/bin/pmgrd
  3. Edit the line that reads $PMGRD > /dev/console 2>&1 & as follows:

    $PMGRD `$RCMGR get PMGRD_FLAGS`  > /dev/console 2>&1 &
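
The first two of these edits can also be scripted. The following is a minimal sketch, assuming the CPQIM370 revision (substitute your kit's revision) and writing to a temporary copy first; the third edit is easiest to make by hand:

# sed -e 's|^CPQMIBS=/usr/sbin/cpq_mibs|CPQMIBS=/var/opt/CPQIM370/bin/cpq_mibs|' \
      -e 's|^PMGRD=/usr/sbin/pmgrd|PMGRD=/var/opt/CPQIM370/bin/pmgrd|' \
      /sbin/init.d/snmpd > /tmp/snmpd.new
# cp /tmp/snmpd.new /sbin/init.d/snmpd
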
V5.1B-5 or Higher Installation May Overwrite snmpd File (revised)

When you install a newer version of the Insight Management kit, the paths to the cpq_mibs and pmgrd subagents are changed in the snmpd script. When you then install Version 5.1B-5 or higher, the snmpd script is replaced by the original version provided in the base version of the Insight Management kit.

Because the use of that snmpd script will cause problems when using Insight Manager, you must restore the script to the latest version. To do this, create a backup file of the snmpd script and restore the backup version after installing V5.1B-5 or higher. (See step 1 of the workaround described in “Some Insight Management Agents Kits May Prevent V5.1B-5 or Higher Installation (revised).”)

If you did not back up the snmpd file before installing V5.1B-5 or higher, you can modify the file after the installation, as described in “Some Insight Management Agents Kits May Prevent V5.1B-5 or Higher Installation (revised).”

Stop sendmail Before Installing Kit

It is important that you stop the sendmail mailer daemon before installing this kit. Failing to do so can lead to the loss of queued mail. Lost mail cannot be recovered.

To stop the daemon, enter the following command:

# /sbin/init.d/sendmail stop
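
After the kit installation completes, restart the mailer daemon so that delivery resumes; presumably the same init script's start action applies:

# /sbin/init.d/sendmail start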

Commands Must Be Run on BIND Systems After Kit Installation

After installing this kit on a system configured as a BIND server, run the following command:

# rcmgr set BIND_SERVERARGS "-c /etc/namedb/named.conf"

On a cluster configured as a BIND server, run the following command:

# rcmgr -c set BIND_SERVERARGS "-c /etc/namedb/named.conf"
NOTE: In BIND 9, the named daemon uses the -c option to pass a configuration file parameter, instead of the -b option that was used in previous versions of BIND.
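
To confirm the new setting, you can read the variable back with the rcmgr get operation (add the -c flag on a cluster, as above):

# rcmgr get BIND_SERVERARGS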

Stop the named daemon and restart it so that the new named daemon takes effect:

  • For standalone systems:

    # /sbin/init.d/named stop
    # /sbin/init.d/named start
  • For clusters:

    # /sbin/init.d/named cluster_stop
    # /sbin/init.d/named start

To verify that your configuration files are compatible with BIND 9, run the following commands:

# named-checkconf /etc/namedb/named.conf
# named-checkzone example.com /etc/namedb/hosts.db
NOTE: With BIND 9, CNAME entries no longer accept quotes. For example, "hosts-1" IN CNAME A needs to be changed to hosts-1 IN CNAME A.

See “BIND Updated to Version 9.2.8” for information about BIND 9.

inetd Daemon Restart Required

Because of changes made to the Internet services daemon introduced in this release, you need to stop and then restart inetd after installing or removing this kit. You can do this from the command line or by using the sysman application. From the command line, enter the following commands:

# /sbin/init.d/inetd stop
# /sbin/init.d/inetd start

Failure to do this will result in an older version of inetd running on your system.
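
To confirm that a fresh inetd process is running after the restart, you can check the process list; a quick check (the exact ps flags may vary with your environment):

# ps -e | grep inetd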

Kit Installation Causes Configuration File Restoration Failure

After installing this kit, attempts to restore the configuration file (config.cdf) saved prior to the installation of this patch will fail due to a checksum error. You can, however, force the operation by using the following sysman command:

# sysman -clone -apply -force config.cdf

For more information, see the note titled “Correction to Configuration Cloning Restrictions” in the “Corrections to Manuals” section of the online Technical Updates document for Version 5.1B. The following link will take you to the Technical Updates document:

http://h30097.www3.hp.com/docs/updates/V51B/html/index.html

Run ipsec Command After Installing Kit

If you are running IP Security (ipsec) on your system, run the following command after installing this kit to determine if any unsafe connections exist:

# /usr/sbin/sysman ipsec

A warning message will alert you to any potential problems.

Procedure to Update lprsetup.dat File

If you use the /usr/sbin/printconfig application to configure printer queues, run the following command as root to update the /etc/lprsetup.dat file:

# /usr/sbin/lprsetup -c update

AdvFS Domain Differences May Affect Version Upgrades

A difference in the structure of AdvFS domains created under Version 5.1A and early Version 5.1B releases versus those created under later 5.1B releases can cause a problem when upgrading to Version 5.1B-4 or higher.

This potential problem involves a metadata file called the RBMT that exists on each volume of a version 4 domain.

Although an RBMT is generally only one page long, it can be larger on large volumes or in domains that have many files. If an RBMT file grew beyond one page under 5.1A or an early 5.1B version and then grows again after a system upgrade to Version 5.1B-4 or higher, any command that tries to activate that domain will fail. This includes mounting filesets from the affected domain.

Following a system upgrade to Version 5.1B-4 or higher, the problem can occur after all the filesets in a domain are unmounted. (The problem will not occur as long as the filesets remain mounted.)

The solution is to use the fixfdmn utility to correct the problem. For example:

# /sbin/advfs/fixfdmn domain_name
fixfdmn: Checking the RBMT.
fixfdmn: Clearing the log on volume /dev/disk/dsk10c.
fixfdmn: Checking the BMT mcell data.
fixfdmn: Checking the deferred delete list.
fixfdmn: Checking the root tag file.
fixfdmn: Checking the tag file(s).
fixfdmn: Checking the mcell nodes.
fixfdmn: Checking the BMT chains.
fixfdmn: Checking the frag file group headers.
fixfdmn: Checking for frag overlaps.
fixfdmn: Checking for BMT mcell orphans.
fixfdmn: Checking for file overlaps.
fixfdmn: Checking the directories.
fixfdmn: Checking the frag file(s).
fixfdmn: Checking the quota files.
fixfdmn: Checking the SBM.
fixfdmn: Completed.

You can use this command proactively before the RBMT grows to prevent the problem from occurring or you can use it after the problem occurs.

In summary, the following domains are not in danger:

  • Version 3 domains

  • Domains created under Version 5.1B-4 or higher

  • Domains whose RBMT files are no longer than one page

The showfile and showfdmn commands can provide information about your domains.

Use the showfdmn command to find out what volumes a domain has. For example:

# /sbin/showfdmn domain_name
               Id              Date Created  LogPgs  Version  Domain Name
447350cd.000eba90  Tue May 23 11:13:33 2006     512        4  domain_name
Vol  512-Blks        Free  % Used  Cmode  Rblks  Wblks  Vol Name
 1L  71132000    71121632      0%     on    256    256  /dev/disk/dsk4c

Use the showfile command to determine if an RBMT file has more than one page. To do this, select any mounted fileset from the domain in question, find the mount point for the fileset, and enter the following command. (Note that .tags/M-6 represents volume 1. The tag number increases by six for each subsequent volume, so volume 2 uses .tags/M-12, volume 3 uses .tags/M-18, and so on.) For example:

# /usr/sbin/showfile mountpoint/.tags/M-6
           Id  Vol  PgSz  Pages  XtntType  Segs  SegSz  I/O   Perf  File
fffffffa.0000    1    16      1    simple    **     **  ftx   100%  M-6
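
For a multivolume domain, repeat the check for each volume by adjusting the tag number as described above; for example, to examine volume 2 of the same domain:

# /usr/sbin/showfile mountpoint/.tags/M-12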

See the fixfdmn(8), showfile(8), and showfdmn(8) reference pages for information about using these commands.

Possible Errors Seen After Kit Installation

The following problems have been known to occur after Version 5.1B-4 or higher has been installed:

  • The Common Data Security Architecture (CDSA), IP Security Protocol (IPsec), or Single Sign-On (SSO) does not work.

  • The following error message is displayed during boot time:

    CSSM_ModuleLoad: CSSM error 4107 

If you experience these problems, make sure that the following command line has been executed:

# /usr/sbin/cdsa/mod_install -f -i -s \ 
/usr/lib/cdsa/libt64csp.so -d /usr/lib/cdsa/

Message Seen During Reboot Can Be Ignored

The following error message will be displayed the first time you reboot your system after installing Version 5.1B-4 or higher:

AllowCshrcSourcingWithSubsystems is not valid
ForcePTTYAllocation is not valid
IdentityFile is not valid
AuthorizationFile is not valid

These messages are caused by a new version of SSH included in Version 5.1B-4 or higher. They do not pose a problem and can be ignored.

Kit Removal Release Notes

The following sections describe actions you have to take if you decide to uninstall Version 5.1B-4 or higher. Read each section before running the patch deletion procedure.

Some Patch Kits Cannot Be Removed

You cannot remove a patch kit on systems that have the New Hardware Delivery 7 (NHD-7) kit installed when either of the following conditions exists:

  • The patch kit you want to remove was installed before the NHD-7 kit.

    For example, if you installed Patch Kit 2 and then installed NHD-7, you cannot remove that patch kit. However, if you later installed Patch Kit 4, you can remove that patch kit.

  • The patch kit was installed with NHD-7.

    Beginning with the release of Patch Kit 3, patch kits were incorporated into the NHD-7 kits. As a result, when you installed NHD-7, you automatically installed the current patch kit. These patch kits cannot be removed. However, you can remove any subsequent patch kits. For example, if you installed NHD-7 with Patch Kit 4 and later installed Patch Kit 5, you cannot remove Patch Kit 4, but can remove Patch Kit 5.

If you must remove the patch kit, the only solution is to rebuild your system environment by reinstalling the Version 5.1B operating system and then restoring your system to its state before you installed NHD-7 with the unwanted patch kit.

Changes to System May Need to Be Reversed

If you made the following changes to your system after installing this patch kit, you will have to undo those changes before you can uninstall it:

  • If you changed your hardware configuration (for example, by adding a new disk), the system configuration that existed prior to installing this patch kit might not recognize the new devices or may not provide the necessary support for them.

  • If you added new cluster members, the new members will not have an older state to revert to if you attempt to uninstall this patch kit.

To uninstall this kit, do the following:

  1. Remove all new hardware and new cluster members that you added after installing this kit.

  2. Run dupatch to uninstall the patch kit.

  3. Verify that the patch kit was successfully uninstalled, as shown in the example following this list.
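
One way to verify the removal is to list the patch kits that dupatch knows about; the following invocation is a sketch based on the documented tracking option (see dupatch(8) to confirm the exact syntax):

# /usr/sbin/dupatch -track -type kit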

You can now add the cluster members you removed and reinstall the hardware you removed, as long as the support for it existed in the pre-patched system. You can also reinstall the patch kit.

Script Must Be Run When Returning to Pre-Patched System

If removing this patch kit restores your system to a pre-patched state, you must run the script /etc/dn_fix_dat.sh before rebooting your system during the patch-deletion process.

This situation would occur if Version 5.1B-2 or higher is the only Tru64 UNIX patch kit installed on your 5.1B system.

NOTE: Because the no-roll procedure automatically reboots the system after deleting the patches, you cannot use this method to delete this patch kit if doing so returns your system to a pre-patched state.

Failing to run this script will result in your system being unable to boot normally. If this occurs, do the following:

  1. Boot your system in single-user mode:

      >>> boot -fl s 
  2. Run the script:

    # /etc/dn_fix_dat.sh
  3. Reboot normally.

If you also need to reverse the version switch as described in “Script Required to Reverse Version Switch”, run the /etc/dn_fix_dat.sh script after step 5 in that process.

Because the no-roll procedure automatically boots your system, you cannot use that patch kit removal method if doing so would restore your system to a pre-patched state.

NOTE: If you see a Special Instruction about running this script during the dupatch installation and deletion processes, ignore that instruction unless your system meets the requirement described here.

Cluster-Specific Installation and Removal Release Notes

This section provides information you need to be aware of if you are installing or removing patch kits in a TruCluster Server environment.

dupclone Error Message Can Be Ignored

Installing this kit using the dupclone process on systems that do not have all of the operating system and TruCluster Server base subsets installed may result in messages similar to the following being displayed:

Problem installing:

  - Tru64_UNIX_V5.1B:
         Patch 29034.00

requires the existence of the following un-installed/un-selected subset(s):

  - Tru64_UNIX_V5.1B:
         Patch 29023.00

  - Tru64_UNIX_V5.1B:
         Patch 29050.00

You can ignore this message. In all cases, the subsets will be installed correctly.

See “Cluster Cloning Offers Alternative to No-Roll Patching” for an introduction to dupclone.

Installed CSP Could Affect dupatch Cloning Process

If you have installed customer-specific patches (CSPs) on your system, you may see a message similar to the following when installing this kit using the dupatch cloning process, at which time the cloning process will be terminated:

Inspecting 69 patches for possible system conflicts ...  
      ./usr/bin/ls:
               is installed by Customer Specific Patch (CSP):   

  - Tru64_UNIX_V5.1B / Installation Patches:
         Patch 01682.00 - Fix for dupatch command and can not be replaced by this patch.
                          To install this patch, ideally, you must first remove the
                          CSP using dupatch. Before performing this action, you should
                          contact your HP Service Representative to determine if this
                          patch kit contains the CSP. If it does not, you may need to 
                          obtain a new CSP from HP in order to install the patch kit 
                          and retain the CSP fix. Or, you may use dupatch baselining to
                          enable the patch installation.

The recommended action is to perform dupatch baselining on your existing system to enable the patch installation process and retain the CSP on your system. Removing the CSP (as mentioned in the message) could eliminate the fixes made by that CSP.

After running the baselining process on your existing system, you will need to begin the cloning process again from the beginning by duplicating your system on an alternate set of disks and rerunning the dupatch cloning process. See the Patch Kit Installation Instructions for information about performing baselining and about the patch cloning process.

Migrating a Patched Standalone System to a Cluster

Installing only the base patches on a non-cluster system omits various patches (including some security patches) because of dependencies on TruCluster Server patches. Such patches are not needed on standalone systems. However, if the standalone system is then clustered using the clu_create command and you attempt to apply the cluster patches, many patches will fail with errors because various prerequisite patches are missing.

These errors do not necessarily indicate that the patch process has failed, but they are numerous, can be confusing, and might obscure genuine errors.

The preferred procedure for adding a standalone system into a cluster is as follows:

  1. Reinstall the operating system on the standalone system.

  2. Run the clu_create command to bring the standalone system in as a cluster member (a sample invocation follows this list).

  3. Apply all base and cluster patches.
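
The clu_create command in step 2 is interactive and prompts for the cluster configuration details. A typical invocation, run as root on the standalone system:

# /usr/sbin/clu_create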

Disable vfast Utility if Running on Cluster Domains

If the vfast utility is running on the TruCluster domains cluster_root and cluster_var, deactivate it on those domains before installing or removing this kit. To deactivate vfast on the two domains, use the following commands:

# vfast deactivate cluster_root
# vfast deactivate cluster_var
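
After the installation or removal completes, you can reenable vfast on the two domains, presumably with the corresponding activate operation (see vfast(8) to confirm the exact subcommand):

# vfast activate cluster_root
# vfast activate cluster_var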

See the vfast(8) reference page for more information.

Creation of Some MFS File Systems Depends on Version Switch

During the installation of this kit, MFS file systems that are 4 GB and larger (or 2 GB and larger if a 1024-byte sector size is used) cannot be created until after the version switch is thrown. (See the Patch Kit Installation Instructions for information about the version switch.)

Workaround Saves Files to Allow Patch Kit Removal

If you upgrade the operating system and install a patch kit within the same roll, the contents of the patch backups are inadvertently removed. The result is that the patches most recently installed cannot be removed because the backups are missing.

The following procedure saves and then restores the backups so they will be available if you later decide to remove the patch kit:

  1. Create tar archives of the /var/adm/patch/backup and /var/adm/patch/doc directories after the postinstall step (clu_upgrade postinstall) as follows:

    # cd /var/adm/patch/backup
    # tar cvf /var/adm/patch/BACKUP.tar *
    # cd /var/adm/patch/doc
    # tar cvf /var/adm/patch/DOC.tar *
  2. After the switch step (clu_upgrade switch), extract the files you created in step 1:

    # cd /var/adm/patch/backup
    # tar xvf /var/adm/patch/BACKUP.tar
    # cd /var/adm/patch/doc
    # tar xvf /var/adm/patch/DOC.tar 

    This will restore the files under the following directories:

    • /var/adm/patch/backup

    • /var/adm/patch/doc

Enabling the Version Switch After Installation

Some patches require you to run the versw -switch command to enable the new functions delivered in those patches. (See the Patch Kit Installation Instructions for information about version switches.) Enter the command as follows after dupatch has completed the installation process:

# versw -switch

The new functionality will not be available until after you reboot your system. You do not have to run the versw -switch command, but if you do not, your system will not be able to access the functionality provided in the version-switch patches.
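
For example, on a standalone system you might apply the switch and then reboot immediately, scheduling the reboot for a time when downtime is acceptable:

# versw -switch
# shutdown -r now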

Script Required to Reverse Version Switch

If you enabled version switches as described in the section titled “Enabling the Version Switch After Installation”, you must run the /usr/sbin/versw_enable_delete script before attempting to remove Version 5.1B-4 or higher. The steps for running this script require a complete cluster or single system shutdown, so choose a time when a shutdown will have the least impact on your operations. The following steps describe the procedure:

  1. Make sure that all phases of the installation process have been completed.

  2. Run the /usr/sbin/versw_enable_delete script:

    # /usr/sbin/versw_enable_delete
  3. Shut down the entire cluster or the single system.

  4. Reboot the entire cluster or the single system.

  5. Run dupatch on your single system or on a cluster using the rolling upgrade procedure to delete Version 5.1B-4 or higher (as described in the Patch Kit Installation Instructions), up to the point where the kernel is rebuilt and the system must be booted.

  6. Reboot the single system or each member of the cluster.

    NOTE: This step requires that you reboot each cluster member to remove Version 5.1B-4 or higher. Because the no-roll procedure automatically reboots the system after deleting the patches, you would not be able to perform this step as required.

Restriction on Using No-Roll Procedure to Remove Kit

The section titled “Script Must Be Run When Returning to Pre-Patched System” describes actions you need to take before rebooting your system if removing this kit would restore your system to a pre-patched state. Because the no-roll procedure automatically boots your system, you cannot use that patch kit removal method if doing so would restore your system to a pre-patched state.

Do Not Install Prior NHD Kits on a Patched System

Do not install the NHD-5 or NHD-6 kits on your TruCluster Server system if you have installed this patch kit or earlier patch kits. Doing so may cause an incorrect system configuration. The installation code for these new hardware delivery kits does not correctly preserve some cluster subset files.