HP Tru64 UNIX and TruCluster Server Version 5.1B-6

  Patch Summary and Release Notes   

Appendix A Setting Up an Enhanced Distance Cluster

An Enhanced Distance Cluster allows a cluster to be extended between two sites up to 100 km apart to assist recovery in the event of a disaster. It provides basic high-availability services in the event of the loss of a single component. Note, however, that an Enhanced Distance Cluster is neither designed nor able to handle simultaneous cascading failures, and therefore cannot provide a fully disaster-tolerant solution. The following topics are discussed:

Enhanced Distance Cluster Configuration Requirements

This section describes the hardware and configurations supported for Enhanced Distance Clusters. Configurations that deviate from those described here require a custom support statement from Hewlett-Packard in order to be supported. See the TruCluster Server QuickSpecs for information on the hardware components supported by the TruCluster Server product.

The following table lists supported hardware.

Table A-1 Hardware Configuration

Component                      Requirement
Number of Nodes                2 to 4 nodes
CPU Type                       Any node supported in a TruCluster configuration.
Fibre Channel Adapter (HBA)    Any Fibre Channel HBA supported by TruCluster. (Parallel SCSI is not supported for any storage that can potentially fail over between locations.)
Interconnect                   Gigabit Ethernet LAN with up to three switches. The cumulative distance must not exceed 100 km.
Storage                        TruCluster Fibre Channel connected storage (such as HP StorageWorks XP and HP StorageWorks EVA).


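Because every shared Fibre Channel LUN must be visible from every node, it is useful to compare the device view on each member. The following is an illustrative command sketch only; device names and output vary by configuration:

```shell
# List all hardware devices known to this member; every shared
# Fibre Channel LUN should appear in this view on every node at
# both sites.
hwmgr -view devices

# Show the SCSI/Fibre Channel component view, including bus and
# target paths, to confirm SAN connectivity.
hwmgr -show scsi
```

Run the same commands on a member at each site and confirm that the shared storage appears in both views.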
The following list describes configuration requirements for the cluster:

  • A two- to four-node cluster with up to three nodes at one site and the remaining nodes at the other site.

  • The cluster must be configured as a LAN cluster.

  • At least one shared storage array must be present at each site.

  • The cluster root (/), /usr, and /var file systems must be located within the same site.

  • All SAN-attached storage must be shared and directly accessible from all nodes at both sites via the SAN, not over the cluster inter-site connection.

  • The storage must be configured with remote data replication software (such as XP Continuous Access). Data replication is required so that the site that does not contain the cluster file systems can be booted following a disaster event.

  • A reboot of all nodes at the surviving site is required following any disaster event that requires activation of the secondary replicated volumes. You must shut down the systems, reconfigure the storage as necessary (perform an XP takeover, for example), and reboot. The expected quorum votes and other parameters may need to be modified before the systems can boot successfully.

  • If a site disaster involves multiple failures, high availability is lost. Procedures must therefore be in place for manually rebooting the surviving site. The surviving site will then operate as a normal cluster with minimal or no data loss.

  • The inter-site link is a single, combined span of up to 100 km, using up to three switches and two segments of up to 50 km each.

  • The configuration must have at least one physical subnet to which all cluster members and the default cluster alias belong.

  • The cluster must have an extended, dedicated cluster interconnect to which all cluster members are connected to serve as a private communication channel between cluster members. The interconnect must be shielded from any traffic that is not part of the cluster communication, according to the requirements for the LAN-based cluster interconnect in the TruCluster Cluster Hardware Configuration manual.
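The recovery sequence described above (shut down, storage takeover, quorum adjustment, reboot) might look like the following hedged sketch. The vote count shown is an assumption for illustration only; the correct values depend on your configuration, so consult the Cluster Administration manual before changing quorum settings:

```shell
# Illustrative post-disaster recovery sketch for the surviving site.
# All values here are assumptions, not prescriptions.

# 1. Halt the surviving members before reconfiguring storage.
shutdown -h now

# 2. On the storage array, promote the secondary replicated volumes
#    (for XP Continuous Access, an XP takeover operation performed
#    with the array management tools).

# 3. After booting one surviving member, lower the expected votes so
#    the remaining members alone can attain quorum; for example, from
#    4 down to 2 in a four-node cluster that lost one site:
clu_quorum -f -e 2

# 4. Display the resulting quorum configuration to verify the change.
clu_quorum
```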

Figure A-1 provides an example of an Enhanced Distance Cluster cluster interconnect configuration. The nodes at Data Center 1 are connected to a switch that is connected to an intermediate switch using a fiber link of up to 50 km. From the intermediate switch, another fiber link of up to 50 km connects to a third switch, to which the remaining nodes at Data Center 2 are attached, thereby establishing an overall distance between the sites of up to 100 km.
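Once the extended cluster is up, basic membership and quorum sanity can be checked from any member. A minimal sketch (the attribute name assumes the clubase kernel subsystem):

```shell
# Show cluster membership, member IDs, and vote information as seen
# from this member; all members at both sites should be listed.
clu_get_info -full

# Query the expected-votes attribute of the clubase kernel subsystem
# to confirm it matches the intended configuration.
sysconfig -q clubase cluster_expected_votes
```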

Figure A-1 Enhanced Distance Cluster Configuration
