How to Build a Secure Cluster

by Subarna Ganguly
Published February 2013 (updated March 2013)

Using Trusted Extensions with Oracle Solaris Cluster 4.1

This article discusses how to enable Trusted Extensions on an Oracle Solaris Cluster 4.1 cluster and configure a labeled-branded Exclusive-IP type of zone cluster.

About Trusted Extensions

Support for Trusted Extensions on Oracle Solaris Cluster 4.1 extends the Trusted Extensions concept of security containers in Oracle Solaris 11 (also known as non-global zones) to zone clusters. These special zone clusters, or Trusted Zone Clusters, are cluster-wide security containers.

Oracle Solaris Trusted Extensions confine applications and data to a specific security label within a non-global zone. To provide high availability, Oracle Solaris Cluster extends that feature to a clustered set of systems in the form of labeled zone clusters. The zones (or nodes) in the zone clusters are a brand of their own and are known as labeled-branded zones.

Oracle Solaris Cluster 4.1 supports both Shared-IP and Exclusive-IP types of labeled-branded zone clusters.

Configuration Assumptions

To enable Trusted Extensions on an Oracle Solaris Cluster 4.1 cluster and configure a labeled-branded Exclusive-IP type of zone cluster using the procedure provided in this article, you must have the following:

  • A two-node cluster must already be installed and configured with Oracle Solaris 11.1 and Oracle Solaris Cluster 4.1. For instructions about installing a two-node cluster, see "How to Install and Configure a Two-Node Cluster." For more details, see the Oracle Solaris Cluster Software Installation Guide.
  • All repositories that are needed for Oracle Solaris and Oracle Solaris Cluster must be configured on the cluster nodes (a hypothetical publisher setup is sketched after this list).
  • The cluster hardware must be a supported configuration for the Oracle Solaris Cluster 4.1 software.
  • Each node must have two spare network interfaces or virtual interfaces that are used as private interconnects (also known as transports) and at least one other network interface or virtual interface that is connected to the public network subnet. These interfaces are used by the zone cluster.
  • Shared disk storage must be connected to the two nodes.
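
If the package publishers are not yet configured, the repositories can be set with pkg set-publisher. The following is a minimal sketch that assumes the repository URLs shown later in this article in the pkg publisher output; substitute the locations of your own Oracle Solaris and ha-cluster repositories:

    # pkg set-publisher -G '*' -g http://solaris-server.xyz.com/solaris11/dev/ solaris
    # pkg set-publisher -g http://cluster-server.xyz.com:1234/ ha-cluster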

Figure 1 illustrates the configuration discussed in this article.

Note: Although not required, it is recommended that you have console access to the nodes during installation, configuration, and administration.

Figure 1. Cluster configuration discussed in this article

Nomenclature

  • Global cluster name: test
  • Global cluster node names: ptest1 and ptest2
  • Global cluster private interconnects: vnic11 (on net1) and vnic55 (on net5) on each node
  • Global cluster public subnet: 10.134.98.0
  • Global cluster public interface: net0
  • Zone cluster name: TX-zc-xip
  • Zone cluster node names: vztest1d and vztest2d
  • Zone cluster private interconnects: vnic1 (on net1) and vnic5 (on net5) on each node
  • Zone cluster public subnet: 10.134.99.0
  • Zone cluster public interface: net3

Prerequisites

To create a cluster-wide security container (that is, a labeled-branded zone cluster), ensure that you meet the following prerequisites:

  1. Ensure that the cluster nodes are configured and healthy.
     The command in Listing 1 displays the nodes, quorum status, transport information, and other data that reflect the health of the cluster. Per the configuration shown in Figure 1, the node names are ptest1 and ptest2.

    # cluster status
    
    
    === Cluster Nodes ===
    
    --- Node Status ---
    
    Node Name                                       Status
    ---------                                       ------
    ptest1                                          Online
    ptest2                                          Online
    
    === Cluster Transport Paths ===
    
    Endpoint1               Endpoint2               Status
    ---------               ---------               ------
    ptest1:vnic55           ptest2:vnic55           Path online
    ptest1:vnic11           ptest2:vnic11           Path online
    
    === Cluster Quorum ===
    
    --- Quorum Votes Summary from (latest node reconfiguration) ---
    
                Needed   Present   Possible
                ------   -------   --------
                2        3         3
    
    --- Quorum Votes by Node (current status) ---
    
    Node Name       Present       Possible       Status
    ---------       -------       --------       ------
    ptest1          1             1              Online
    ptest2          1             1              Online
    
    --- Quorum Votes by Device (current status) ---
    
    Device Name       Present      Possible      Status
    -----------       -------      --------      ------
    d1                1            1             Online
    

    Listing 1

  2. Ensure that the appropriate Oracle Solaris and Oracle Solaris Cluster versions are installed on each node.

    # more /etc/release
    
                               
                                Oracle Solaris 11.1 SPARC
      Copyright (c) 1983, 2012, Oracle and/or its affiliates.  All rights reserved.
                               Assembled 19 September 2012
    
    # more /etc/cluster/release
     
                    Oracle Solaris Cluster 4.1 0.18.2 for Solaris 11 sparc
      Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.
    
  3. Ensure that the correct Oracle Solaris and Oracle Solaris Cluster publishers are set.
     You can verify how the publishers are set by using the command shown in the following example.

    # pkg publisher
    
    PUBLISHER             TYPE     STATUS P LOCATION
    solaris               origin   online F http://solaris-server.xyz.com/solaris11/dev/
    ha-cluster            origin   online F http://cluster-server.xyz.com:1234/
    
  4. Ensure that the name-service switch does not use Network Information Service (NIS) and that the NIS services are disabled. If a Trusted Extensions (TX) LDAP server is available, add ldap after files. If there are no Trusted Extensions LDAP servers on the network, set the switch properties as shown in Listing 2 (a sketch for setting these properties follows the listing).

    # svcs -a | grep nis
    
    disabled       11:36:39 svc:/network/nis/domain:default
    disabled       11:37:11 svc:/network/nis/client:default
    
    # /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/netmask
    config/netmask astring     "cluster files"
    # /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/host
    config/host astring     "cluster files"
    # /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/automount
    config/automount astring     "files"
    # /usr/sbin/svcadm refresh svc:/system/name-service/switch
    # nscfg import -f name-service/switch
    

    Listing 2
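
    If any of these properties differ on your nodes (for example, if config/host still references nis), they can be adjusted with svccfg setprop. This is a minimal sketch, assuming no Trusted Extensions LDAP server is in use; the quoting shown is the standard svccfg syntax for string values that contain spaces:

    # /usr/sbin/svccfg -s svc:/system/name-service/switch setprop config/host = astring: '"cluster files"'
    # /usr/sbin/svccfg -s svc:/system/name-service/switch setprop config/netmask = astring: '"cluster files"'
    # /usr/sbin/svccfg -s svc:/system/name-service/switch setprop config/automount = astring: '"files"'
    # /usr/sbin/svcadm refresh svc:/system/name-service/switch
    # /usr/sbin/svcadm disable svc:/network/nis/domain:default svc:/network/nis/client:default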

  5. Ensure that the /etc/hosts file contains the names and addresses of all hosts that the cluster nodes are going to access (a hypothetical example appears at the end of this section), including the following:
     • Package publishers
     • Default routers
     • Any required NFS or application servers
  6. Add the cluster private host names to the /etc/hosts file.
    1. First, check the private host name and IP address of each cluster node, as shown in Listing 3 and Listing 4.
       On the cluster node ptest1, the node ID is 1. Therefore, the private host name is clusternode1-priv and the IP address is 172.16.2.1.

      # more /etc/cluster/nodeid
      
      1
      
      # ipadm show-addr
      ADDROBJ               TYPE     STATE        ADDR
      lo0/v4                static   ok           127.0.0.1/8
      sc_ipmp0/static1      static   ok           10.134.98.214/24
      sc_ipmp0/zoneadmd.v4  static   ok           10.134.98.219/8
      sc_ipmp0/zoneadmd.v4a static   ok           10.134.98.218/24
      vnic11/?              static   ok           172.16.0.65/26
      vnic55/?              static   ok           172.16.0.129/26
      clprivnet0/?          static   ok           172.16.2.1/24
      lo0/v6                static   ok           ::1/128
      

      Listing 3

      On the cluster node ptest2, the node ID is 2. Therefore, the private host name is clusternode2-priv and the IP address is 172.16.2.2.

      # more /etc/cluster/nodeid
      
      2
      
      # ipadm show-addr
      ADDROBJ               TYPE     STATE        ADDR
      lo0/v4                static   ok           127.0.0.1/8
      sc_ipmp0/static1      static   ok           10.134.98.215/24
      sc_ipmp0/zoneadmd.v4  static   ok           10.134.98.221/24
      sc_ipmp0/zoneadmd.v4a static   ok           10.134.98.222/8
      vnic11/?              static   ok           172.16.0.66/26
      vnic55/?              static   ok           172.16.0.130/26
      clprivnet0/?          static   ok           172.16.2.2/24
      lo0/v6                static   ok           ::1/128
      

      Listing 4

    2. Then add the following lines to the /etc/hosts file of each node.

      # vi /etc/hosts
      
      
      172.16.2.1  clusternode1-priv
      172.16.2.2  clusternode2-priv
      
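
The following is a hypothetical example of /etc/hosts entries for external hosts such as the default router and the package publishers. The short host names and the publisher addresses shown here are placeholders; substitute the actual names and addresses used at your site:

    # vi /etc/hosts

    10.134.98.1    defrouter-98              # default router (hypothetical host name)
    10.134.98.50   solaris-server.xyz.com    # Oracle Solaris package publisher (hypothetical address)
    10.134.98.51   cluster-server.xyz.com    # Oracle Solaris Cluster package publisher (hypothetical address)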

Installing and Enabling Trusted Extensions

  1. On each cluster node, install the Trusted Extensions package.

    # pkg install system/trusted/trusted-extensions

  2. Verify the installation of the Trusted Extensions package, as shown in Listing 5.

    # pkg info trusted-extensions
    
    Name: system/trusted/trusted-extensions
    Summary: Trusted Extensions
    Category: Desktop (GNOME)/Trusted Extensions
    State: Installed
    Publisher: solaris
    Version: 0.5.11
    Build Release: 5.11
    Branch: 0.175.0.0.0.1.0
    Packaging Date: Wed Oct 12 14:36:05 2011
    Size: 5.45 kB
    FMRI: pkg://solaris/system/trusted/trusted-extensions@0.5.11,5.11-0.175.0.0.0.1.0:20111012T143605Z
    

    Listing 5

  3. To enable permission-based access to untrusted systems, such as default routers or NFS servers, allow connections between the cluster nodes and untrusted hosts.
    1. Make a copy of the /etc/pam.d/other file before making any changes.

      # cp /etc/pam.d/other /etc/pam.d/other.orig

    2. Modify the following entries in the /etc/pam.d/other file:
       • pam_roles: Allows remote login by roles
       • pam_tsol_account: Allows unlabeled hosts to contact Trusted Extensions systems
      
      
      # pfedit /etc/pam.d/other
      ...
      account requisite pam_roles.so.1 allow_remote
      ...
      account required pam_tsol_account.so.1 allow_unlabeled
      
  4. Enable Trusted Extensions on each node and reboot.
    
    # svcadm enable -s labeld
    # svcs -a | grep labeld
    online 11:53:49 svc:/system/labeld:default
    # init 6
    
  5. Change the hostmodel property to weak on each node, as shown in Listing 6.
     When Trusted Extensions is enabled, the hostmodel property for both IPv4 and IPv6 is set to strong.

    
    
    # ipadm show-prop | more
    PROTO PROPERTY  PERM CURRENT PERSISTENT DEFAULT POSSIBLE
    ...
    ipv4  hostmodel rw   strong  strong     weak    strong,src-prio,weak
    ...
    ipv6  hostmodel rw   strong  strong     weak    strong,src-prio,weak
    ...
    # ipadm set-prop -p hostmodel=weak ipv4
    # ipadm set-prop -p hostmodel=weak ipv6
    

    Listing 6
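
    After setting the property, you can confirm the change on each node by querying just the hostmodel property; the CURRENT column should now show weak. For example:

    # ipadm show-prop -p hostmodel ipv4
    # ipadm show-prop -p hostmodel ipv6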

  6. Add the external hosts that the cluster nodes require as admin_low host types, as shown in Listing 7.
     The cluster nodes use these external hosts when configuring zone clusters; they include package publishers, default routers, and other servers such as NFS or application servers. Access to these hosts is required even though they do not run Trusted Extensions.

    On each node, type the following commands. The following example shows the command for the default router.

    
    
    # netstat -rn
    Routing Table: IPv4
    Destination   Gateway       Flags Ref   Use        Interface
    ------------- ------------- ----- ----- ---------- ---------
    default       10.134.98.1   UG    5     2810       sc_ipmp0
    10.134.98.0   10.134.98.214 U     7     97         sc_ipmp0
    127.0.0.1     127.0.0.1     UH    2     2058       lo0
    172.16.0.64   172.16.0.65   U     3     26390      vnic11
    172.16.0.128  172.16.0.129  U     3     25821      vnic55
    172.16.2.0    172.16.2.1    U     3     173        clprivnet0
    
    Routing Table: IPv6
    Destination/Mask  Gateway          Flags Ref Use      If
    ---------------- ----------------- ----- --- ------- -----
    ::1              ::1               UH    2   0       lo0
    
    # tncfg -t admin_low
    tncfg:admin_low> add host=10.134.98.1
    tncfg:admin_low> info
        name=admin_low
        host_type=unlabeled
        doi=1
        def_label=ADMIN_LOW
        min_label=ADMIN_LOW
        max_label=ADMIN_HIGH
        host=10.134.98.1/32
    tncfg:admin_low> exit
    

    Listing 7

  7. Add the IP addresses of the cluster transport interfaces (adapters) and of the private host names to the cipso host template.
    1. On ptest1, type the command shown in Listing 8.

      # ipadm show-addr
      
      ADDROBJ               TYPE     STATE        ADDR
      lo0/v4                static   ok           127.0.0.1/8
      sc_ipmp0/static1      static   ok           10.134.98.214/24
      sc_ipmp0/zoneadmd.v4  static   ok           10.134.98.219/8
      sc_ipmp0/zoneadmd.v4a static   ok           10.134.98.218/24
      vnic11/?              static   ok           172.16.0.65/26
      vnic55/?              static   ok           172.16.0.129/26
      clprivnet0/?          static   ok           172.16.2.1/24
      lo0/v6                static   ok           ::1/128
      

      Listing 8

    2. On ptest2, type the command shown in Listing 9.
      
      # ipadm show-addr
      ADDROBJ               TYPE     STATE        ADDR
      lo0/v4                static   ok           127.0.0.1/8
      sc_ipmp0/static1      static   ok           10.134.98.215/24
      sc_ipmp0/zoneadmd.v4  static   ok           10.134.98.221/24
      sc_ipmp0/zoneadmd.v4a static   ok           10.134.98.222/8
      vnic11/?              static   ok           172.16.0.66/26
      vnic55/?              static   ok           172.16.0.130/26
      clprivnet0/?          static   ok           172.16.2.2/24
      lo0/v6                static   ok           ::1/128
      

      Listing 9

      The output in Listing 8 and Listing 9 shows that the following addresses are configured on the transport endpoints:

      172.16.0.65, 172.16.0.129, 172.16.0.66, 172.16.0.130

      The following addresses correspond to the private host names hosted on the clprivnet0 interfaces:

      172.16.2.1 and 172.16.2.2

    3. On each node, type the following commands:
      
      # tncfg
      tncfg:cipso> add host=172.16.2.1
      tncfg:cipso> add host=172.16.2.2
      tncfg:cipso> add host=172.16.0.65
      tncfg:cipso> add host=172.16.0.66
      tncfg:cipso> add host=172.16.0.129
      tncfg:cipso> add host=172.16.0.130
      

      The entries above are stored in the /etc/security/tsol/tnrhdb file.
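
      To confirm the additions, you can display the template. A quick check (output abridged and illustrative) might look like the following:

      # tncfg -t cipso info
          name=cipso
          host_type=cipso
          ...
          host=172.16.0.65/32
          host=172.16.0.66/32
          host=172.16.0.129/32
          host=172.16.0.130/32
          host=172.16.2.1/32
          host=172.16.2.2/32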

Creating and Configuring a Trusted Zone Cluster

To create a Trusted Zone Cluster or labeled-branded zone cluster, use the Zone Cluster wizard of the clsetup utility. The utility is menu-driven and self-explanatory.

If you do not want to use the Zone Cluster wizard of the clsetup utility, follow the steps below:

  1. On each node, create a VNIC on each of the physical interfaces on which the private interconnects of the global cluster are configured. These VNICs serve as the transport interfaces of the Exclusive-IP (XIP) zone cluster.
    
    # dladm create-vnic -l net1 vnic1 
    # dladm create-vnic -l net5 vnic5
    

    net1 and net5 are the physical interfaces on which the transport links of the global cluster are created.
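
    You can verify that the new VNICs exist on each node; vnic1 and vnic5 should be listed over net1 and net5, respectively. For example:

    # dladm show-vnic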

  2. On one of the nodes, configure the zone cluster, as shown in Listing 10.
    
    # clzc configure TX-zc-xip
    TX-zc-xip: No such zone cluster configured
    Use 'create' to begin configuring a new zone cluster.
    clzc:TX-zc-xip> create
    clzc:TX-zc-xip> set zonepath=/zones/TX-zc-xip
    clzc:TX-zc-xip> set brand=labeled
    clzc:TX-zc-xip> set enable_priv_net=true
    clzc:TX-zc-xip> set ip-type=exclusive
    clzc:TX-zc-xip> add node
    clzc:TX-zc-xip:node> set physical-host=ptest1
    clzc:TX-zc-xip:node> set hostname=vztest1d
    clzc:TX-zc-xip:node> add net
    clzc:TX-zc-xip:node:net> set physical=net3
    clzc:TX-zc-xip:node:net> end
    clzc:TX-zc-xip:node> add privnet
    clzc:TX-zc-xip:node:privnet> set physical=vnic1
    clzc:TX-zc-xip:node:privnet> end
    clzc:TX-zc-xip:node> add privnet
    clzc:TX-zc-xip:node:privnet> set physical=vnic5
    clzc:TX-zc-xip:node:privnet> end
    clzc:TX-zc-xip:node> end
    clzc:TX-zc-xip> add node
    clzc:TX-zc-xip:node> set physical-host=ptest2
    clzc:TX-zc-xip:node> set hostname=vztest2d
    clzc:TX-zc-xip:node> add net
    clzc:TX-zc-xip:node:net> set physical=net3
    clzc:TX-zc-xip:node:net> end
    clzc:TX-zc-xip:node> add privnet
    clzc:TX-zc-xip:node:privnet> set physical=vnic1
    clzc:TX-zc-xip:node:privnet> end
    clzc:TX-zc-xip:node> add privnet
    clzc:TX-zc-xip:node:privnet> set physical=vnic5
    clzc:TX-zc-xip:node:privnet> end
    clzc:TX-zc-xip:node> end
    clzc:TX-zc-xip> verify
    clzc:TX-zc-xip> commit
    Jul 31 15:43:19 ptest1 Cluster.RGM.rgmdstarter: did_update called
    Jul 31 15:43:19 ptest1 Cluster.RGM.rgmdstarter: new cluster TX-zc-xip added
    

    Listing 10
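
    Before installing the zone cluster, you may want to review and verify the committed configuration from the global zone; a quick check might look like the following (at this stage the zone cluster is configured but not yet installed):

    # clzc show TX-zc-xip
    # clzc status TX-zc-xip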

  3. On each node, type the following commands to set the label for the zone cluster:

    # tncfg -z TX-zc-xip
    tncfg:TX-zc-xip> set label=PUBLIC
    tncfg:TX-zc-xip> exit
    
  4. From one of the nodes, install the zone cluster TX-zc-xip. For example, you can do this from the node ptest1.

    # clzc install TX-zc-xip

  5. On each node, run the txzonemgr utility. Ensure that the DISPLAY environment variable is set.
     For example:

    
    
    # DISPLAY=scc60:2
    # export DISPLAY
    # txzonemgr
    

    Select the global zone. Then, select the option to configure a per-zone name service.

  6. To perform the sysid configuration on an Exclusive-IP labeled-branded zone cluster, perform the following steps for one zone cluster node at a time:
    1. Boot the zone cluster node.

      # zoneadm -z TX-zc-xip boot

    2. Unconfigure the Oracle Solaris instance and reboot the zone.
      
      # zlogin TX-zc-xip
      # sysconfig unconfigure
      # reboot
      

      The zlogin session terminates during the reboot.

    3. Issue the zlogin command and progress through the interactive screens.

      # zlogin -C TX-zc-xip

    4. Open the console connections to the zone cluster nodes. Open a new terminal window for each node and step through the interactive sysconfig screens to set up the host name, IP address, LDAP server (if applicable), DNS, and locale. Ensure that you do not enable NIS. When finished, exit the zone console.
    5. From the global zone, halt the zone cluster node.

      # zoneadm -z TX-zc-xip halt

    6. Repeat the preceding steps for the other zone cluster node.
  7. From one of the nodes, boot the zone cluster.

    # clzc boot TX-zc-xip

  8. Log in to the zone cluster nodes and set the root password.
    
    # zlogin TX-zc-xip
    # passwd
    # exit
    
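
    At this point, the zone cluster should be up on both nodes. A quick status check from the global zone of one node might look like the following; the exact column layout of the output varies by release, but both nodes should report Status Online and Zone Status Running:

    # clzc status TX-zc-xip

    === Zone Clusters ===

    --- Zone Cluster Status ---

    Name        Node Name   Zone Host Name   Status   Zone Status
    ----        ---------   --------------   ------   -----------
    TX-zc-xip   ptest1      vztest1d         Online   Running
                ptest2      vztest2d         Online   Running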

Setting Up IPMP

  1. Create IPMP groups on both zone cluster nodes.
    1. On zone cluster node vztest1d, create an IPMP group for the public interface.

      # ipadm show-addr
      
      # ipadm delete-addr net3/v4
      # ipadm delete-addr net3/v6
      # ipadm create-ipmp XIPZCipmp0
      # ipadm add-ipmp -i net3 XIPZCipmp0
      # ipadm create-addr -T static -a local=10.134.99.192/24 XIPZCipmp0
      # ipadm show-if
      # ipmpstat -g
      
    2. On zone cluster node vztest2d, create an IPMP group for the public interface.
      
      # ipadm show-addr
      # ipadm delete-addr net3/v4
      # ipadm delete-addr net3/v6
      # ipadm create-ipmp XIPZCipmp0
      # ipadm add-ipmp -i net3 XIPZCipmp0
      # ipadm create-addr -T static -a local=10.134.99.195/24 XIPZCipmp0
      # ipadm show-if
      # ipmpstat -g
      
    3. On vztest1d, type the following command:
      
      # ipmpstat -g
      GROUP       GROUPNAME   STATE     FDT       INTERFACES
      XIPZCipmp0  XIPZCipmp0  ok        --        net3
      
    4. On vztest2d, type the following command:
      
      # ipmpstat -g
      GROUP       GROUPNAME   STATE     FDT       INTERFACES
      XIPZCipmp0  XIPZCipmp0  ok        --        net3
      
  2. Add the IP addresses of the transport interfaces and the public network IP addresses from each zone cluster node to the cipso template.
    1. On vztest1d, run the following command:
      
      # ipadm show-addr
      ADDROBJ           TYPE     STATE        ADDR
      lo0/v4            static   ok           127.0.0.1/8
      XIPZCipmp0/v4     static   ok           10.134.99.192/24
      vnic1/?           static   ok           172.16.4.1/26
      vnic5/?           static   ok           172.16.4.65/26
      clprivnet1/?      static   ok           172.16.3.193/26
      lo0/v6            static   ok           ::1/128
      
    2. On vztest2d, run the following command:
      
      # ipadm show-addr
      ADDROBJ           TYPE     STATE        ADDR
      lo0/v4            static   ok           127.0.0.1/8
      XIPZCipmp0/v4     static   ok           10.134.99.195/24
      vnic1/?           static   ok           172.16.4.2/26
      vnic5/?           static   ok           172.16.4.66/26
      clprivnet1/?      static   ok           172.16.3.194/26
      lo0/v6            static   ok           ::1/128
      
    3. Log in to the global zone of each node. Add the following addresses to the cipso template by using the tncfg command.
      
      10.134.99.192
      10.134.99.195
      172.16.4.1
      172.16.4.65
      172.16.3.193
      172.16.4.2
      172.16.4.66
      172.16.3.194
      

      In addition, you can add other external hosts to the cipso template. These external hosts must be trusted, and they must be hosts that the Trusted Zone Cluster nodes contact or communicate with. For two-way communication, also add the public interfaces of the zone cluster nodes to the cipso templates on those external hosts.

      For example:

      
      
      # tncfg -t cipso
      tncfg:cipso> add host=10.134.99.192
      

      The zone cluster is now ready to be configured for a failover application.

Configuring a Failover Application

The procedure for configuring a failover application is similar to the procedure for a regular zone cluster. Note that cluster (PxFS) file systems cannot be mounted inside a labeled-branded zone cluster in read-write mode.

The following procedure describes how to create a failover resource group in the Trusted Zone Cluster with an IP address resource and a storage resource. It uses an example that makes the following assumptions:

  • Solaris Volume Manager is used to create a file system for the storage resource.
  • On each global cluster node (ptest1 and ptest2), a non-shared disk slice must be selected to create the local metadb.
  • In this example, each node has the rpool on the local disk c3t0d0 and another local disk, c3t1d0, on which a slice s4 of size 1 GB is reserved. The metadb is created on that slice.
  1. Create the metadb on the slice.

    # metadb -a -c 3 -f c3t1d0s4

  2. Create a device group (testdg) and a file system (/testfs) in the global zone. On one of the nodes, ptest1, select the DID disks that are going to be added to the device group.
     This example uses the following disks:

    • /dev/did/rdsk/d6
    • /dev/did/rdsk/d7
    
    
    # metaset -s testdg -a -h ptest1 ptest2
    # metaset -s testdg -a -m ptest1 ptest2
    # metaset -s testdg -a /dev/did/rdsk/d6 /dev/did/rdsk/d7
    # metainit -s testdg d0 1 1 /dev/did/rdsk/d6s0
    # metainit -s testdg d1 1 1 /dev/did/rdsk/d7s0
    # metainit -s testdg d10  -m d0
    # metattach -s testdg d10 d1
    # newfs /dev/md/testdg/rdsk/d10
    
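
    Optionally, before proceeding, you can confirm that the mirror is healthy and that the disk set was registered as a cluster device group; a quick check, assuming the names used above, might be:

    # metastat -s testdg -c
    # cldg status testdg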
  3. Select an IP address to use as a logical host name.
     This example uses test-5, a host name that is available for use in the zone cluster.

  4. Add the selected IP address and the newly created file system to the Trusted Zone Cluster, as shown in Listing 11.
    
    # clzc configure TX-zc-xip
    clzc:TX-zc-xip> add net
    clzc:TX-zc-xip:net> set address=test-5
    clzc:TX-zc-xip:net> verify
    clzc:TX-zc-xip:net> end
    clzc:TX-zc-xip> commit
    clzc:TX-zc-xip> add fs
    clzc:TX-zc-xip:fs> set dir=/testfs
    clzc:TX-zc-xip:fs> set raw=/dev/md/testdg/rdsk/d10
    clzc:TX-zc-xip:fs> set special=/dev/md/testdg/dsk/d10
    clzc:TX-zc-xip:fs> set options=rw,logging
    clzc:TX-zc-xip:fs> set type=ufs
    clzc:TX-zc-xip:fs> info
    fs:
        dir: /testfs
        special: /dev/md/testdg/dsk/d10
        raw: /dev/md/testdg/rdsk/d10
        type: ufs
        options: [rw,logging]
        cluster-control: true
    clzc:TX-zc-xip:fs> verify
    clzc:TX-zc-xip:fs> end
    clzc:TX-zc-xip> commit
    clzc:TX-zc-xip> exit
    

    Listing 11

  5. Log in to the zone cluster nodes and create the file system mount points.
    
    # zlogin TX-zc-xip 
    # mkdir /testfs
    # reboot
    
  6. Create a resource group (testrg) with the logical host name resource (test-5) and the storage resource (has-res), as shown in Listing 12.
     From one of the nodes, log in to the zone cluster.

    
    
    # zlogin TX-zc-xip 
    # cd /usr/cluster/bin
    # ./clrt register SUNW.HAStoragePlus
    # ./clrg create testrg 
    # ./clrslh create -g testrg -h test-5 test-5
    # ./clrs create -g testrg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/testfs has-res
    # ./clrg manage testrg
    # ./clrg online  testrg
    # ./clrg status
    
    === Cluster Resource Groups ===
    Group Name       Node Name       Suspended      Status
    ----------       ---------       ---------      ------
    testrg           vztest1d        No             Online
                     vztest2d        No             Offline
    
    # ./clrs status
    
    === Cluster Resources ===
    Resource Name       Node Name     State       Status Message
    -------------       ---------     -----       --------------
    has-res            vztest1d       Online      Online
                       vztest2d       Offline     Offline
    test-5             vztest1d       Online      Online
                       vztest2d       Offline     Offline - LogicalHostname online.
    

    Listing 12

  7. Switch the resource group to the other node and verify its status, as shown in Listing 13.
    
    # ./clrg switch -n vztest2d testrg
    # ./clrs status
    
    === Cluster Resources ===
    Resource Name       Node Name      State       Status Message
    -------------       ---------      -----       --------------
    has-res             vztest1d       Offline     Offline
                        vztest2d       Online      Online
    test-5              vztest1d       Offline     Offline
                        vztest2d       Online      Online - LogicalHostname online.
    

    Listing 13

    To this resource group, which contains a network resource and a storage resource, you can now add an application resource intended for use in a trusted environment; a hypothetical sketch follows.
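
    For example, the following is a minimal sketch that uses the Generic Data Service (SUNW.gds) resource type, assuming it is installed. The application name, its start/stop/probe scripts under /opt/myapp, and the port are hypothetical placeholders; replace them with the details of your own application, and where possible keep the application's data on the failover file system /testfs so that it moves with the resource group:

    # zlogin TX-zc-xip
    # cd /usr/cluster/bin
    # ./clrt register SUNW.gds
    # ./clrs create -g testrg -t SUNW.gds \
          -p Start_command="/opt/myapp/bin/start" \
          -p Stop_command="/opt/myapp/bin/stop" \
          -p Probe_command="/opt/myapp/bin/probe" \
          -p Port_list="8080/tcp" \
          -p Network_resources_used=test-5 \
          -p Resource_dependencies=has-res \
          myapp-rs
    # ./clrs status myapp-rs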

See Also

For more information, see the Oracle Solaris Cluster Software Installation Guide and the Oracle Solaris 11.1 Trusted Extensions documentation.

About the Author

Subarna Ganguly has worked at Sun and Oracle for over 12 years, first in the Education and Training group—primarily training customers and internal engineers on Oracle Solaris networking and Oracle Solaris Cluster products—and then as a quality engineer for the Oracle Solaris Cluster product.

Revision 1.4, 03/11/2013