How to Install and Configure a Two-Node Cluster

Using Oracle Solaris Cluster 4.0 on Oracle Solaris 11

by Subarna Ganguly and Jonathan Mellors, December 2011

How to quickly and easily install and configure Oracle Solaris Cluster software for two nodes, including configuring a quorum device.

Introduction

This article provides a step-by-step process for using the interactive scinstall utility to install and configure Oracle Solaris Cluster software for two nodes, including the configuration of a quorum device. It does not cover the configuration of highly available services.

Note: For more details on how to install and configure other Oracle Solaris Cluster software configurations, see the Oracle Solaris Cluster Software Installation Guide.

The interactive scinstall utility is menu-driven. The menus help reduce the chance of mistakes and promote best practices by using default values, prompting you for information specific to your cluster, and identifying invalid entries.

The scinstall utility also eliminates the need to manually set up a quorum device by automating the configuration of a quorum device for the new cluster.

Note: This article refers to the Oracle Solaris Cluster 4.0 release. For more information about the latest Oracle Solaris Cluster release, see the release notes.

Prerequisites, Assumptions, and Defaults

This section discusses several prerequisites, assumptions, and defaults for two-node clusters.

Configuration Assumptions

This article assumes the following conditions are met:

  • You are installing on Oracle Solaris 11 and you have basic system administration skills.
  • You are installing Oracle Solaris Cluster 4.0 software.
  • The cluster hardware is supported with Oracle Solaris Cluster 4.0 software. (See the Oracle Solaris Cluster System Requirements document [PDF].)
  • The example installs a two-node x86 cluster; however, the installation procedure applies equally to SPARC clusters.
  • Each node has two spare network interfaces to be used as private interconnects, also known as transports, and at least one network interface that is connected to the public network.
  • SCSI shared storage is connected to the two nodes.
  • Your setup looks like Figure 1, although you might have fewer or more devices, depending on your system or network configuration.

Note: It is recommended, but not required, that you have console access to the nodes during cluster installation.

Figure 1. Oracle Solaris Cluster Hardware Configuration

Prerequisites for Each System

This article assumes that Oracle Solaris 11 has been installed on both systems.

Initial Preparation of Public IP Addresses and Logical Host Names

You must have the logical names (host names) and IP addresses of the nodes that are to be configured as a cluster. Add those entries to each node's /etc/inet/hosts file or, if you use a naming service such as DNS, NIS, or NIS+ maps, to that naming service.

The example in this article uses the NIS service and the configuration shown in Table 1.

Table 1. Configuration

COMPONENT       NAME            INTERFACE   IP ADDRESS
Cluster name    phys-schost     -           -
Node 1          phys-schost-1   nge0        1.2.3.4
Node 2          phys-schost-2   nge0        1.2.3.5
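
For example, with the addresses shown in Table 1, the entries you would add to each node's /etc/inet/hosts file might look like the following (the loopback entries that are already present are not shown):

1.2.3.4   phys-schost-1
1.2.3.5   phys-schost-2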

Defaults

The scinstall interactive utility in Typical mode installs the Oracle Solaris Cluster software with the following defaults:

  • Private-network address 172.16.0.0
  • Private-network netmask 255.255.248.0
  • Cluster-transport switches switch1 and switch2

The example in this article does not use cluster-transport switches. Instead, the private networks are formed by connecting the two nodes' private interfaces directly with back-to-back cables.

In the example in this article, the interfaces of the private interconnects are nge1 and e1000g1 on both cluster nodes.
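
After the cluster has been established (later in this article), you can check which private-network settings were actually applied by using the cluster command. The following is a minimal sketch; the output is abbreviated and illustrative:

# /usr/cluster/bin/cluster show-netprops

=== Private Network ===

private_netaddr:                                172.16.0.0
private_netmask:                                255.255.248.0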

Preinstallation Checks

Perform the following steps.

  1. Temporarily enable rsh or ssh access for root on the cluster nodes.

  2. Log in to the cluster nodes on which you are installing Oracle Solaris Cluster software and become superuser.

  3. On each node, verify the /etc/inet/hosts file entries. If no other name resolution service is available, add the name and IP address of the other node to this file.

In our example (with the NIS service), the /etc/inet/hosts files are as follows.

On node 1:

# Internet host table
#
::1 phys-schost-1 localhost
127.0.0.1 phys-schost-1 localhost loghost

On node 2:

# Internet host table
#
::1 phys-schost-2 localhost
127.0.0.1 phys-schost-2 localhost loghost

  4. On each node, verify that at least one shared storage disk is available, as shown in Listing 1. In our example, the following two disks are shared between the two nodes:

      • c0t600A0B800026FD7C000019B149CCCFAEd0
      • c0t600A0B800026FD7C000019D549D0A500d0

Listing 1. Verifying Shared Storage Is Available

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
   0. c4t0d0 
          /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@0,0
          /dev/chassis/SYS/HD0/disk
   1. c4t1d0 
          /pci@7b,0/pci1022,7458@11/pci1000,3060@2/sd@1,0
          /dev/chassis/SYS/HD1/disk
   2. c0t600A0B800026FD7C000019B149CCCFAEd0 
          /scsi_vhci/disk@g600a0b800026fd7c000019b149cccfae
   3. c0t600A0B800026FD7C000019D549D0A500d0 
          /scsi_vhci/disk@g600a0b800026fdb600001a0449d0a6d3

  5. On each node, ensure that the right OS version is installed:

# more /etc/release
                           Oracle Solaris 11 11/11 X86
  Copyright (c) 1983, 2011, Oracle and/or its affiliates.  All rights reserved.
                           Assembled 26 September 2011

  6. Ensure that the network interfaces are configured as static IP addresses (not DHCP or of type addrconf, as displayed by the command ipadm show-addr -o all; see the example after this procedure).

If the nodes are already configured with static IP addresses, proceed to the section Configuring the Oracle Solaris Cluster Publisher. Otherwise, continue with this procedure and do the following:

    a. On each node, run the command shown in Listing 2 to unconfigure all network interfaces and services.

Listing 2. Unconfiguring the Network Interfaces and Services

# netadm enable -p ncp defaultfixed 

Enabling ncp 'DefaultFixed'
phys-schost-1: Sep 27 08:19:19 phys-schost-1 in.ndpd[1038]: Interface net0 has been removed from kernel. in.ndpd will no longer use it
Sep 27 08:19:19 phys-schost-1 in.ndpd[1038]: Interface net1 has been removed from kernel. in.ndpd will no longer use it
Sep 27 08:19:19 phys-schost-1 in.ndpd[1038]: Interface net2 has been removed from kernel. in.ndpd will no longer use it
Sep 27 08:19:20 phys-schost-1 in.ndpd[1038]: Interface net3 has been removed from kernel. in.ndpd will no longer use it
Sep 27 08:19:20 phys-schost-1 in.ndpd[1038]: Interface net4 has been removed from kernel. in.ndpd will no longer use it
Sep 27 08:19:20 phys-schost-1 in.ndpd[1038]: Interface net5 has been removed from kernel. in.ndpd will no longer use it

    b. Then, on each node, run the commands shown in Listing 3 to configure the NIS domain and the name-service switch.

Listing 3. Commands to Run on Both Nodes

# svccfg -s svc:/network/nis/domain setprop config/domainname = hostname: nisdomain.example.com
# svccfg -s svc:/network/nis/domain:default refresh
# svcadm enable svc:/network/nis/domain:default
# svcadm enable svc:/network/nis/client:default
# /usr/sbin/svccfg -s svc:/system/name-service/switch setprop config/host = astring: \"files nis\"
# /usr/sbin/svccfg -s svc:/system/name-service/switch setprop config/netmask = astring: \"files nis\"
# /usr/sbin/svccfg -s svc:/system/name-service/switch setprop config/automount = astring: \"files nis\"
# /usr/sbin/svcadm refresh svc:/system/name-service/switch

    c. On each node, bind back to the NIS server:

# ypinit -c

    d. Reboot each node to make sure the new network setup is working correctly.

  7. (Optional) On each node, create a boot environment (BE), without the cluster software, as a pre-cluster backup BE, for example:

# beadm create Pre-Cluster-s11

# beadm list

BE                     Active Mountpoint Space   Policy  Created          
--                     ------ ---------- -----   ------  -------          
Pre-Cluster-s11         -      -         179.0K  static  2011-09-27 08:51 
s11                     NR     /          4.06G  static  2011-09-26 08:50 
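
For step 6, a quick way to check the address types is the ipadm command mentioned above. The following sketch shows what the output might look like on a node whose public interface is configured with a static address; the address objects and prefix lengths are illustrative:

# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
net0/v4           static   ok           1.2.3.4/24
lo0/v6            static   ok           ::1/128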

Configuring the Oracle Solaris Cluster Publisher

There are two main ways to access the Oracle Solaris Cluster package repository, depending on whether the cluster nodes have direct access (or access through a Web proxy) to the Internet:

Using a Repository Hosted on pkg.oracle.com

To access either the Oracle Solaris Cluster Release or Support repository, obtain the required SSL key and certificate, as follows:

  1. Go to http://pkg-register.oracle.com/register/repos/ (login required).
  2. Choose the Oracle Solaris Cluster Release or Support repository.
  3. Accept the license.
  4. Request a new certificate by choosing the Oracle Solaris Cluster software and submitting a request. (A certification page is displayed with download buttons for the key and certificate.)
  5. Download the key and certificate and install them, as described on the certification page.
  6. Configure the ha-cluster publisher with the downloaded SSL keys to point to the selected repository URL on pkg.oracle.com. The following example uses the release repository:

						
# pkg set-publisher \
  -k /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.key.pem \
  -c /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.certificate.pem \
  -g https://pkg.oracle.com/ha-cluster/release/ ha-cluster
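
After setting the publisher, it is worth confirming that it is configured as expected before installing anything. For example, run the following on each node and check that the origin URI and the SSL key and certificate paths match what you configured:

# pkg publisher ha-cluster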

Using a Local Copy of the Repository

To access a local copy of the Oracle Solaris Cluster Release or Support repository, download the repository image and configure the ha-cluster publisher, as follows.

  1. Download the repository image from one of the Oracle download sites.

  2. On the Media Pack Search page, select Oracle Solaris as the Product Pack and click Go.

  3. Choose Oracle Solaris Cluster 4.0 Media Pack and download the file.

  4. Mount the repository image and copy the data to a shared file system that all the cluster nodes can access. For example:

# lofiadm -a /tmp/osc4.0-repo-full.iso
/dev/lofi/1
# mount -F hsfs /dev/lofi/1 /mnt
# rsync -aP /mnt/repo /export
# share /export/repo

  5. Configure the ha-cluster publisher. The following example uses node 1 as the system that shares the local copy of the repository:

# pkg set-publisher -g file:///net/phys-schost-1/export/repo ha-cluster
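
Optionally, before configuring the publisher on the nodes, you can sanity-check the local repository copy with the pkgrepo command. This is a minimal sketch that assumes the repository was copied to /export/repo as in step 4:

# pkgrepo info -s /export/repo
# pkgrepo list -s /export/repo | head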

Installing the Oracle Solaris Cluster Software Packages


											
  1. On each node, ensure that the correct Oracle Solaris package repositories are published. If they are not, unset the incorrect publishers and set the correct ones. The installation of the ha-cluster packages is likely to fail if it cannot access the Oracle Solaris publisher.

# pkg publisher

PUBLISHER          TYPE     STATUS   URI
solaris            origin   online   
ha-cluster         origin   online   

  2. On each cluster node, install the ha-cluster-full package group, as shown in Listing 4.

Listing 4. Installing the Package Group


						
# pkg install ha-cluster-full

           Packages to install:  68
       Create boot environment:  No
Create backup boot environment: Yes
            Services to change:   1

DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                                68/68   6456/6456    48.5/48.5

PHASE                                        ACTIONS
Install Phase                              8928/8928 

PHASE                                          ITEMS
Package State Update Phase                     68/68 
Image State Update Phase                         2/2 
Loading smf(5) service descriptions: 9/9
Loading smf(5) service descriptions: 57/57
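
After the installation completes, you can confirm on each node that the package group is reported as installed, for example:

# pkg list ha-cluster-full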

Configuring the Oracle Solaris Cluster Software

Listing 12. Verifying that Both Nodes Joined the Cluster


						
# cluster status
=== Cluster Nodes ===

--- Node Status ---

Node Name                                 Status
---------                                 ------
phys-schost-1                             Online
phys-schost-2                             Online

=== Cluster Transport Paths ===
Endpoint1                             Endpoint2                 Status
---------                             --------                  ------
phys-schost-1:net3              phys-schost-2:net3              Path online
phys-schost-1:net1              phys-schost-2:net1              Path online

=== Cluster Quorum ===

--- Quorum Votes Summary from (latest node reconfiguration) ---


            Needed   Present   Possible
            ------   -------   --------
            2        3         3

--- Quorum Votes by Node (current status) ---

Node Name         Present     Possible   Status
---------         -------     --------    ------
phys-schost-1     1           1           Online
phys-schost-2     1           1           Online

--- Quorum Votes by Device (current status) ---

Device Name       Present      Possible      Status
-----------       -------      --------      ------
d1                1            1             Online

=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name     Primary     Secondary     Status
-----------------     -------     ---------     ------

--- Spare, Inactive, and In Transition Nodes ---
Device Group Name   Spare Nodes   Inactive Nodes   In Transition Nodes
-----------------   -----------   --------------   --------------------

--- Multi-owner Device Group Status ---

Device Group Name           Node Name           Status
-----------------           ---------           ------

=== Cluster Resource Groups ===

Group Name       Node Name       Suspended      State
----------       ---------       ---------      -----

=== Cluster Resources ===

Resource Name       Node Name       State       Status Message
-------------       ---------       -----       --------------

=== Cluster DID Devices ===
Device Instance               Node                      Status
---------------               ----                     ------
/dev/did/rdsk/d1              phys-schost-1             Ok
                              phys-schost-2             Ok
/dev/did/rdsk/d2              phys-schost-1             Ok
                              phys-schost-2             Ok
/dev/did/rdsk/d3              phys-schost-1             Ok
/dev/did/rdsk/d4              phys-schost-1             Ok
/dev/did/rdsk/d5              phys-schost-2             Ok
/dev/did/rdsk/d6              phys-schost-2             Ok
                              
=== Zone Clusters ===

--- Zone Cluster Status ---

Name    Node Name    Zone HostName    Status    Zone Status
----    ---------    -------------    ------    -----------
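
If you want to look at just the quorum configuration rather than the full cluster status, the clquorum command gives a shorter view. For example, the following should report output similar to the quorum sections of Listing 12:

# /usr/cluster/bin/clquorum status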

Verification (Optional)

Now we will create a failover resource group that contains a LogicalHostname resource (a highly available network address) and an HAStoragePlus resource (a highly available ZFS storage pool and its file system).

Listing 14. Switching the Resource Group to Node 2


						

# /usr/cluster/bin/clrg switch -n phys-schost-2 test-rg

# /usr/cluster/bin/clrg status

=== Cluster Resource Groups ===

Group Name       Node Name                Suspended      Status
----------       ---------                ---------      ------
test-rg          phys-schost-1           No              Offline
                 phys-schost-2           No              Online



# /usr/cluster/bin/clrs status

=== Cluster Resources ===

Resource Name       Node Name              State       Status Message
-------------       ---------              -----       --------------
hasp-res            phys-schost-1          Offline     Offline
                    phys-schost-2          Online      Online

schost-lhres        phys-schost-1          Offline     Offline - LogicalHostname offline.
                    phys-schost-2          Online      Online - LogicalHostname online.
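
To complete the availability check, you might switch the resource group back to node 1 and confirm that the resources come online there again, using the same commands:

# /usr/cluster/bin/clrg switch -n phys-schost-1 test-rg
# /usr/cluster/bin/clrg status
# /usr/cluster/bin/clrs status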

Summary

This article described how to install and configure a two-node cluster with Oracle Solaris Cluster 4.0 on Oracle Solaris 11. It also explained how to verify that the cluster is behaving correctly by creating and running two resources on one node and then switching over those resources to the secondary node.

For More Information


    On node 1, run this command.

    
    						
    # dladm show-phys
    
    LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
    net3              Ethernet             unknown    0      unknown   e1000g1
    net0              Ethernet             up         1000   full      nge0
    net4              Ethernet             unknown    0      unknown   e1000g2
    net2              Ethernet             unknown    0      unknown   e1000g0
    net1              Ethernet             unknown    0      unknown   nge1
    net5              Ethernet             unknown    0      unknown   e1000g3
    	

    On node 2, run this command.

    
    						
    # dladm show-phys
    
    LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
    net3              Ethernet             unknown    0      unknown   e1000g1
    net0              Ethernet             up         1000   full      nge0
    net4              Ethernet             unknown    0      unknown   e1000g2
    net2              Ethernet             unknown    0      unknown   e1000g0
    net1              Ethernet             unknown    0      unknown   nge1
    net5              Ethernet             unknown    0      unknown   e1000g3						
    	

    In our example, we will be using net1 and net3 on each node as private interconnects.

    
    						
    # svcprop network/rpc/bind:default | grep local_only
    
    config/local_only boolean false
    
    If it is not set to false, set it as follows:
    
    # svccfg
    svc:> select network/rpc/bind
    svc:/network/rpc/bind> setprop config/local_only=false
    svc:/network/rpc/bind> quit
    
    # svcadm refresh network/rpc/bind:default 
    # svcprop network/rpc/bind:default | grep local_only 
    config/local_only boolean false
    

    In the example shown in Listing 5, the command is run on the second node, phys-schost-2.

    Listing 5. Running the scinstall Command

    
    						
    # /usr/cluster/bin/scinstall
    
      *** Main Menu ***
    
        Please select from one of the following (*) options:
    
          * 1) Create a new cluster or add a cluster node
          * 2) Print release information for this cluster node
          * ?) Help with menu options
          * q) Quit
    
        Option:  1
    

    Listing 6. Creating a New Cluster

    
    						
    *** Create a New Cluster ***
    
    
        This option creates and configures a new cluster.
        Press Control-D at any time to return to the Main Menu.
    
    
        Do you want to continue (yes/no) [yes]?  
    
        Checking the value of property "local_only" of service svc:/network/rpc/bind
     ...
        Property "local_only" of service svc:/network/rpc/bind is already correctly 
    set to "false" on this node.
        
    Press Enter to continue:
    	

    Listing 7. Selecting the Installation Mode

    
    						
    >>> Typical or Custom Mode <<<
    
        This tool supports two modes of operation, Typical mode and Custom 
        mode. For most clusters, you can use Typical mode. However, you might 
        need to select the Custom mode option if not all of the Typical mode 
        defaults can be applied to your cluster.
    
        For more information about the differences between Typical and Custom 
        modes, select the Help option from the menu.
        Please select from one of the following options:
    
            1) Typical
            2) Custom
    
            ?) Help
            q) Return to the Main Menu
    
        Option [1]:  1
    	
    
    						
     >>> Cluster Name <<<
    
        Each cluster has a name assigned to it. The name can be made up of any
        characters other than whitespace. Each cluster name should be unique 
        within the namespace of your enterprise.
    
        What is the name of the cluster you want to establish? phys-schost 						
    	

    Listing 8. Confirming the List of Nodes

    
    						
    >>> Cluster Nodes <<<
    
        This Oracle Solaris Cluster release supports a total of up to 16 
        nodes.
    
        List the names of the other nodes planned for the initial cluster 
        configuration. List one node name per line. When finished, type 
        Control-D:
    
        Node name (Control-D to finish):  phys-schost-1
        Node name (Control-D to finish):  
    
     ^D
    
    
        This is the complete list of nodes:
    
            phys-schost-2
            phys-schost-1
    
        Is it correct (yes/no) [yes]?
     
    

    Listing 9. Selecting the Transport Adapters

    
    						
    >>> Cluster Transport Adapters and Cables <<<
    
        You must identify the cluster transport adapters which attach this 
        node to the private cluster interconnect.
    
        Select the first cluster transport adapter:
    
            1) net1
            2) net2
            3) net3
            4) net4
            5) net5
            6) Other
    
        Option: 1
    
        Searching for any unexpected network traffic on "net1" ... done
    Unexpected network traffic was seen on "net1".
    "net1" may be cabled to a public network.
    
        Do you want to use "net1" anyway (yes/no) [no]?  yes
    
       Select the second cluster transport adapter:
    
            1) net1
            2) net2
            3) net3
            4) net4
            5) net5
            6) Other
    
        Option: 3
     
    Searching for any unexpected network traffic on "net3" ... done
    Unexpected network traffic was seen on "net3".
    "net3" may be cabled to a public network.
    
        Do you want to use "net3" anyway (yes/no) [no]?  yes
    

    Listing 10. Configuring the Quorum Device

    
    						
    >>> Quorum Configuration <<<
    
        Every two-node cluster requires at least one quorum device. By 
        default, scinstall selects and configures a shared disk quorum device 
        for you.
    
        This screen allows you to disable the automatic selection and 
        configuration of a quorum device.
    
        You have chosen to turn on the global fencing. If your shared storage 
        devices do not support SCSI, such as Serial Advanced Technology 
        Attachment (SATA) disks, or if your shared disks do not support 
        SCSI-2, you must disable this feature.
    
        If you disable automatic quorum device selection now, or if you intend
        to use a quorum device that is not a shared disk, you must instead use
        clsetup(1M) to manually configure quorum once both nodes have joined 
        the cluster for the first time.
    
        Do you want to disable automatic quorum device selection (yes/no) [no]?  
    
    Is it okay to create the new cluster (yes/no) [yes]?  
    During the cluster creation process, cluster check is run on each of the new cluster nodes. 
    If cluster check detects problems, you can either interrupt the process or check the log 
    files after the cluster has been established.
    
        Interrupt cluster creation for cluster check errors (yes/no) [no]?  
    

    Listing 11 shows the final output, which indicates the configuration of the nodes and the installation log file name. The utility then reboots each node in cluster mode.

    Listing 11. Details of the Node Configuration

    
    						
    
    Cluster Creation
    
        Log file - /var/cluster/logs/install/scinstall.log.3386
    
        Configuring global device using lofi on phys-schost-1: done
    
        Starting discovery of the cluster transport configuration.
    
        The following connections were discovered:
    
            phys-schost-2:net1  switch1  phys-schost-1:net1
            phys-schost-2:net3  switch2  phys-schost-1:net3
    
        Completed discovery of the cluster transport configuration.
    
        Started cluster check on "phys-schost-2".    
        Started cluster check on "phys-schost-1".
    
    .
    .
    .
    
    Refer to the log file for details.
    The name of the log file is /var/cluster/logs/install/scinstall.log.3386.
    
    
        Configuring "phys-schost-1" ... done
        Rebooting "phys-schost-1" ... 
        Configuring "phys-schost-2" ... 
        Rebooting "phys-schost-2" ... 
    
    Log file - /var/cluster/logs/install/scinstall.log.3386
    
    
    
    						
    # svcs -x
    # svcs multi-user-server
    STATE          STIME    FMRI
    online          9:58:44 svc:/milestone/multi-user-server:default
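
    In addition to the multi-user milestone, you might also want to confirm that the SMF services delivered by Oracle Solaris Cluster are online. A simple way to list them is to filter the full service list; the exact service names vary by configuration:

    # svcs -a | grep -i cluster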
    
    
    1. On each node of the cluster, identify the network interfaces that will be used for the private interconnects, for example, by running the dladm show-phys command shown above.
    2. On both nodes, ensure that SMF services are not disabled:

       # svcs -x

    3. On each node, ensure that the service network/rpc/bind:default has its local_only configuration set to false, as shown above.
    4. From one of the nodes, start the Oracle Solaris Cluster configuration utility by running the scinstall command, as shown in Listing 5; it configures the software on the other node as well. Then type 1 at the Main Menu to create a new cluster or add a cluster node.
    5. In the Create a New Cluster screen, shown in Listing 6, answer yes and then press Enter.
    6. In the installation mode selection screen, select the default option (Typical), as shown in Listing 7.
    7. Provide the name of the cluster (in our example, phys-schost).
    8. Provide the name of the other node (in our example, phys-schost-1), press Control-D to finish the node list, and answer yes to confirm the list of nodes, as shown in Listing 8.
    9. The next screen configures the cluster's private interconnects, also known as the transport adapters. In our example, we select interfaces net1 and net3, as determined previously. If the tool finds network traffic on those interfaces, it asks for confirmation to use them anyway. Ensure that those interfaces are not connected to any other network, and then confirm their use as transport adapters, as shown in Listing 9.
    10. Next, configure the quorum device by accepting the default answers, as shown in Listing 10:

      a. When asked whether to disable automatic quorum device selection, answer no (the default) so that scinstall configures a shared-disk quorum device for you. (If you disable it instead, see the sketch after this list for configuring quorum manually.)
      b. Confirm that it is okay to create the new cluster by answering yes.
      c. When asked whether to interrupt cluster creation for cluster check errors, answer no.

    11. When the scinstall utility finishes, the installation and configuration of the basic Oracle Solaris Cluster software is complete. The cluster is now ready for you to configure the components you will use to support highly available applications. These cluster components can include device groups, cluster file systems, highly available local file systems, individual data services, and zone clusters. To configure these components, consult the documentation library.
    12. On each node, verify that the multi-user services for the Oracle Solaris Service Management Facility (SMF) are online, as shown above. Also ensure that the new services added by Oracle Solaris Cluster are all online.
    13. From one of the nodes, verify that both nodes have joined the cluster, as shown in Listing 12.
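
    If you had instead disabled automatic quorum device selection in step 10, you would configure quorum manually after both nodes joined the cluster, either through the clsetup utility or directly with the clquorum command. The following is a minimal sketch that assumes you use the shared DID device d1 (the device that appears in Listing 12):

    # /usr/cluster/bin/clquorum add d1
    # /usr/cluster/bin/clquorum status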

    In the following example, schost-lh is used as the logical host name for the resource group. This resource is of the type SUNW.LogicalHostname, which is a preregistered resource type.

    On node 1:

    
    											
    # Internet host table
    #
    ::1 localhost
    127.0.0.1 localhost loghost
    1.2.3.4   phys-schost-1 # Cluster Node
    1.2.3.5   phys-schost-2 # Cluster Node
    1.2.3.6   schost-lh
    
    On node 2:
    
    # Internet host table
    #
    ::1 localhost 
    127.0.0.1 localhost loghost 
    1.2.3.4   phys-schost-1 # Cluster Node
    1.2.3.5   phys-schost-2 # Cluster Node
    1.2.3.6   schost-lh
    
    
    											
    # zpool create  -m /zfs1 pool1 mirror /dev/did/dsk/d1s0 /dev/did/dsk/d2s0
    
    # df -k /zfs1
    Filesystem           1024-blocks        Used   Available Capacity  Mounted on
    pool1                20514816            31    20514722        1%      /zfs1
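
    If you are not sure which DID instances correspond to the shared disks, you can list the device mappings before running the zpool create command shown above. The following is a sketch; the command is standard, but the mapping shown here is illustrative:

    # /usr/cluster/bin/cldevice list -v
    DID Device          Full Device Path
    ----------          ----------------
    d1                  phys-schost-1:/dev/rdsk/c0t600A0B800026FD7C000019B149CCCFAEd0
    d1                  phys-schost-2:/dev/rdsk/c0t600A0B800026FD7C000019B149CCCFAEd0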
    

    The zpool will now be placed in a highly available resource group as a resource of type SUNW.HAStoragePlus. This resource type has to be registered before it is used for the first time.

    Listing 13. Checking the Group and Resource Status

    
    						
    
    
    
    # /usr/cluster/bin/clrg  status
    
    === Cluster Resource Groups ===
    
    Group Name       Node Name         Suspended      Status
    ----------       --------          ---------      ------
    test-rg          phys-schost-1     No             Online
                     phys-schost-2     No             Offline
    
    
    
    # /usr/cluster/bin/clrs status
    
    === Cluster Resources ===
    
    Resource Name       Node Name             State       Status Message
    -------------       ---------             -----       --------------
    hasp-res            phys-schost-1         Online      Online
                        phys-schost-2         Offline     Offline
    
    schost-lhres        phys-schost-1         Online      Online - LogicalHostname online.
                        phys-schost-2         Offline     Offline						
    
    

    From the above status, we see that the resources and the group are online on node 1.

    1. Identify the network address that will be used for this purpose and add it to the /etc/inet/hosts file on both nodes.
    2. From one of the nodes, create a zpool with the two shared storage disks, /dev/did/dsk/d1s0 and /dev/did/dsk/d2s0. In our example, we assigned the entire disk to slice 0 of each disk by using the format utility.
    3. Create a highly available resource group to house the resources by running the following on one node:

       # /usr/cluster/bin/clrg create test-rg

    4. Add the network resource to the group:

       # /usr/cluster/bin/clrslh create -g test-rg -h schost-lh schost-lhres

    5. Register the storage resource type:

       # /usr/cluster/bin/clrt register SUNW.HAStoragePlus

    6. Add the zpool to the group:

       # /usr/cluster/bin/clrs create -g test-rg -t SUNW.HAStoragePlus -p zpools=pool1 hasp-res

    7. Bring the group online:

       # /usr/cluster/bin/clrg online -eM test-rg

    8. Check the status of the group and the resources, as shown in Listing 13.
    9. To verify availability, switch the resource group to node 2 and check the status of the resources and the group, as shown in Listing 14.
    For more information on configuring Oracle Solaris Cluster components, see the resources listed in Table 2.

    Table 2. Resources

    Resource URL
    Oracle Solaris Cluster 4.0 documentation library http://www.oracle.com/pls/topic/lookup?ctx=E23623
    Oracle Solaris Cluster Software Installation Guide http://www.oracle.com/pls/topic/lookup?ctx=E23623&id=CLIST
    Oracle Solaris Cluster Data Services Planning and Administration Guide http://www.oracle.com/pls/topic/lookup?ctx=E23623&id=CLDAG
    Oracle Solaris Cluster 4.0 Release Notes http://www.oracle.com/pls/topic/lookup?ctx=E23623&id=CLREL
    Oracle Solaris Cluster training http://www.oracle.com/solaris/technologies/cluster-overview.html
    Oracle Solaris Cluster downloads http://www.oracle.com/middleware/technologies/solaris-cluster-downloads.html
