Using Trusted Extensions with Oracle Solaris Cluster 4.1
by Subarna Ganguly
Published February 2013 (updated March 2013)
This article discusses how to enable Trusted Extensions on an Oracle Solaris Cluster 4.1 cluster and configure a labeled-branded Exclusive-IP type of zone cluster.
Support for Trusted Extensions on Oracle Solaris Cluster 4.1 extends the Trusted Extensions concept of security containers in Oracle Solaris 11 (also known as non-global zones) to zone clusters. These special zone clusters, or Trusted Zone Clusters, are cluster-wide security containers.
Oracle Solaris Trusted Extensions confine applications and data to a specific security label within a non-global zone. To provide high availability, Oracle Solaris Cluster extends that feature to a clustered set of systems in the form of labeled zone clusters. The zones (or nodes) in the zone clusters are a brand of their own and are known as labeled-branded zones.
Oracle Solaris Cluster 4.1 supports both Shared-IP and Exclusive-IP types of labeled-branded zone clusters.
To enable Trusted Extensions on an Oracle Solaris Cluster 4.1 cluster and configure a labeled-branded Exclusive-IP type of zone cluster using the procedure provided in this article, you must have the following:
Figure 1 illustrates the configuration discussed in this article.
Note: Although not required, it is recommended that you have console access to the nodes during installation, configuration, and administration.
Figure 1. The configuration discussed in this article:

Global cluster: test
- Nodes: ptest1 and ptest2
- Transport: vnic11 (on net1) and vnic55 (on net5) on each node
- Public network: 10.134.98.0 on net0

Zone cluster: TX-zc-xip
- Nodes: vztest1d and vztest2d
- Private network: vnic1 (on net1) and vnic5 (on net5) on each node
- Public network: 10.134.99.0 on net3
To create a cluster-wide security container, or in other words, to create a labeled-branded zone cluster, ensure that you meet the following prerequisites:
The command in Listing 1 displays the nodes, quorum status, transport information, and other data that reflect the health of the cluster. Per the configuration shown in Figure 1, the node names are ptest1 and ptest2.
# cluster show
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
ptest1 Online
ptest2 Online
=== Cluster Transport Paths ===
Endpoint1 Endpoint2 Status
--------- --------- ------
ptest1:vnic55 ptest2:vnic55 Path online
ptest1:vnic11 ptest2:vnic11 Path online
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
------ ------- --------
2 3 3
--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
--------- ------- -------- ------
ptest1 1 1 Online
ptest2 1 1 Online
--- Quorum Votes by Device (current status) ---
Device Name Present Possible Status
----------- ------- -------- ------
d1 1 1 Online
Listing 1
# more /etc/release
Oracle Solaris 11.1 SPARC
Copyright (c) 1983, 2012, Oracle and/or its affiliates. All rights reserved.
Assembled 19 September 2012
# more /etc/cluster/release
Oracle Solaris Cluster 4.1 0.18.2 for Solaris 11 sparc
Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.
You can see how the publishers are set using the command shown in the following example.
# pkg publisher
PUBLISHER TYPE STATUS P LOCATION
solaris origin online F http://solaris-server.xyz.com/solaris11/dev/
ha-cluster origin online F http://cluster-server.xyz.com:1234/
In the name-service switch, ldap must be listed after files. If there are no Trusted Extensions LDAP servers in the network, set the switches shown in Listing 2.

# svcs -a | grep nis
disabled 11:36:39 svc:/network/nis/domain:default
disabled 11:37:11 svc:/network/nis/client:default
# /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/netmask
config/netmask astring "cluster files"
# /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/host
config/host astring "cluster files"
# /usr/sbin/svccfg -s svc:/system/name-service/switch listprop config/automount
config/automount astring "files"
# /usr/sbin/svcadm refresh svc:/system/name-service/switch
# nscfg import -f name-service/switch
Listing 2
Ensure that the /etc/hosts file has the names and addresses of all hosts that the cluster nodes are going to access, including the private host names of the cluster nodes, which must be added to the /etc/hosts file.
On the cluster node ptest1, the node ID is 1. Therefore, the private host name is clusternode1-priv and the IP address is 172.16.2.1.
# more /etc/cluster/nodeid
1
# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
sc_ipmp0/static1 static ok 10.134.98.214/24
sc_ipmp0/zoneadmd.v4 static ok 10.134.98.219/8
sc_ipmp0/zoneadmd.v4a static ok 10.134.98.218/24
vnic11/? static ok 172.16.0.65/26
vnic55/? static ok 172.16.0.129/26
clprivnet0/? static ok 172.16.2.1/24
lo0/v6 static ok ::1/128
Listing 3
On the cluster node ptest2, the node ID is 2. Therefore, the private host name is clusternode2-priv and the IP address is 172.16.2.2.
# more /etc/cluster/nodeid
2
# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
sc_ipmp0/static1 static ok 10.134.98.215/24
sc_ipmp0/zoneadmd.v4 static ok 10.134.98.221/24
sc_ipmp0/zoneadmd.v4a static ok 10.134.98.222/8
vnic11/? static ok 172.16.0.66/26
vnic55/? static ok 172.16.0.130/26
clprivnet0/? static ok 172.16.2.2/24
lo0/v6 static ok ::1/128
Listing 4
Add the private host names and IP addresses to the /etc/hosts file of each node.

# vi /etc/hosts
172.16.2.1 clusternode1-priv
172.16.2.2 clusternode2-priv
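As a quick sanity check, the private host name and address follow directly from the node ID: ID N maps to clusternodeN-priv and 172.16.2.N on the private network shown in this configuration. A minimal sketch of that mapping (the nodeid value is hard-coded here for illustration; on a real node it would come from /etc/cluster/nodeid):

```shell
# Illustrative only: derive the expected private host name and address
# from a node ID, following the pattern shown above.
nodeid=1    # on a real cluster node: nodeid=$(cat /etc/cluster/nodeid)
echo "172.16.2.${nodeid} clusternode${nodeid}-priv"
```

For node ID 1 this prints the ptest1 entry shown above; substituting 2 yields the ptest2 entry.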
Install the Trusted Extensions package on each node, and then verify the installation.

# pkg install system/trusted/trusted-extensions
# pkg info trusted-extensions
Name: system/trusted/trusted-extensions
Summary: Trusted Extensions
Category: Desktop (GNOME)/Trusted Extensions
State: Installed
Publisher: solaris
Version: 0.5.11
Build Release: 5.11
Branch: 0.175.0.0.0.1.0
Packaging Date: Wed Oct 12 14:36:05 2011
Size: 5.45 kB
FMRI: pkg://solaris/system/trusted/trusted-extensions@0.5.11,5.11-0.175.0.0.0.1.0:20111012T143605Z
Listing 5
Back up the /etc/pam.d/other file before making any changes.

# cp /etc/pam.d/other /etc/pam.d/other.orig

Add the following options to the /etc/pam.d/other file:

- pam_roles: Allows remote login by roles
- pam_tsol_account: Allows unlabeled hosts to contact Trusted Extensions systems
# pfedit /etc/pam.d/other
...
account requisite pam_roles.so.1 allow_remote
...
account required pam_tsol_account.so.1 allow_unlabeled
Enable the labeld service on each node, and then reboot the node.

# svcadm enable -s labeld
# svcs -a | grep labeld
online 11:53:49 svc:/system/labeld:default
# init 6
Set the hostmodel property to weak on each node, as shown in Listing 6. When Trusted Extensions is enabled, the hostmodel property for both IPv4 and IPv6 is set to strong.
# ipadm show-prop | more
...
ipv4   hostmodel   rw   strong   strong   weak   strong,src-prio,weak
ipv6   hostmodel   rw   strong   strong   weak   strong,src-prio,weak
...
# ipadm set-prop -p hostmodel=weak ipv4
# ipadm set-prop -p hostmodel=weak ipv6
Listing 6
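To confirm that both protocols ended up with hostmodel set to weak, the current-value column of the ipadm show-prop output can be checked mechanically. A self-contained sketch that parses sample output embedded as a string (on a cluster node you would pipe the live `ipadm show-prop -p hostmodel` output instead):

```shell
# Sketch: verify that the CURRENT column (field 4) reads "weak" for both
# protocols. The sample mimics 'ipadm show-prop -p hostmodel' output
# after the change; on a real node, pipe the live command instead.
sample='ipv4 hostmodel rw weak weak weak strong,src-prio,weak
ipv6 hostmodel rw weak weak weak strong,src-prio,weak'
echo "$sample" | awk '$4 != "weak" { bad=1 } END { print (bad ? "hostmodel still strong" : "hostmodel ok") }'
```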
Add external hosts, such as the default router, to the admin_low host template, as shown in Listing 7. The cluster nodes need these external hosts when configuring zone clusters: package publishers, default routers for network connectivity, and other servers such as NFS or application servers. Access to these hosts is required even though they do not run Trusted Extensions.
On each node, type the following commands. The following example shows the command for the default router.
# netstat -rn
Routing Table: IPv4
Destination Gateway Flags Ref Use Interface
------------- ------------- ----- ----- ---------- ---------
default 10.134.98.1 UG 5 2810 sc_ipmp0
10.134.98.0 10.134.98.214 U 7 97 sc_ipmp0
127.0.0.1 127.0.0.1 UH 2 2058 lo0
172.16.0.64 172.16.0.65 U 3 26390 vnic11
172.16.0.128 172.16.0.129 U 3 25821 vnic55
172.16.2.0 172.16.2.1 U 3 173 clprivnet0
Routing Table: IPv6
Destination/Mask Gateway Flags Ref Use If
---------------- ----------------- ----- --- ------- -----
::1 ::1 UH 2 0 lo0
# tncfg -t admin_low
tncfg:admin_low> add host=10.134.98.1
tncfg:admin_low> info
name=admin_low
host_type=unlabeled
doi=1
def_label=ADMIN_LOW
min_label=ADMIN_LOW
max_label=ADMIN_HIGH
host=10.134.98.1/32
tncfg:admin_low> exit
Listing 7
Add the IP addresses that are hosted on the private interfaces of all the cluster nodes to the cipso host template.

On ptest1, type the command shown in Listing 8:

# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
sc_ipmp0/static1 static ok 10.134.98.214/24
sc_ipmp0/zoneadmd.v4 static ok 10.134.98.219/8
sc_ipmp0/zoneadmd.v4a static ok 10.134.98.218/24
vnic11/? static ok 172.16.0.65/26
vnic55/? static ok 172.16.0.129/26
clprivnet0/? static ok 172.16.2.1/24
lo0/v6 static ok ::1/128
Listing 8
On ptest2, type the command shown in Listing 9:
# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
sc_ipmp0/static1 static ok 10.134.98.215/24
sc_ipmp0/zoneadmd.v4 static ok 10.134.98.221/24
sc_ipmp0/zoneadmd.v4a static ok 10.134.98.222/8
vnic11/? static ok 172.16.0.66/26
vnic55/? static ok 172.16.0.130/26
clprivnet0/? static ok 172.16.2.2/24
lo0/v6 static ok ::1/128
Listing 9
The output in Listing 8 and Listing 9 shows that the following addresses are configured on the transport endpoints: 172.16.0.65, 172.16.0.129, 172.16.0.66, and 172.16.0.130. The private node names are hosted on the clprivnet0 interfaces at 172.16.2.1 and 172.16.2.2.
# tncfg -t cipso
tncfg:cipso> add host=172.16.2.1
tncfg:cipso> add host=172.16.2.2
tncfg:cipso> add host=172.16.0.65
tncfg:cipso> add host=172.16.0.66
tncfg:cipso> add host=172.16.0.129
tncfg:cipso> add host=172.16.0.130
The entries above are stored in the /etc/security/tsol/tnrhdb file.
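Collecting those addresses by hand is error-prone; the private 172.16.x.x addresses can be pulled out of the ipadm show-addr output mechanically. A sketch using sample rows from Listing 8, embedded as a string so it is self-contained (on a node you would pipe the live command output instead):

```shell
# Sketch: strip the prefix length from the ADDR column and emit the
# 'add host=' lines for the cipso template. Sample rows are the private
# interfaces from ptest1 (Listing 8).
sample='vnic11/?     static ok 172.16.0.65/26
vnic55/?     static ok 172.16.0.129/26
clprivnet0/? static ok 172.16.2.1/24'
echo "$sample" | awk '{ split($4, a, "/"); print "add host=" a[1] }'
```

The emitted lines match the tncfg input shown above for ptest1; running the same pass over the ptest2 output would produce the remaining entries.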
To create a Trusted Zone Cluster or labeled-branded zone cluster, use the Zone Cluster wizard of the clsetup
utility. The utility is menu-driven and self-explanatory.
If you do not want to use the Zone Cluster wizard of the clsetup
utility, follow the steps below:
On each node, create the VNICs that the zone cluster will use for its private network.

# dladm create-vnic -l net1 vnic1
# dladm create-vnic -l net5 vnic5

net1 and net5 are the physical interfaces on which the transport links of the global cluster are created.
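Because the same two VNICs must exist on every node, the commands can be generated from a list of link:vnic pairs. A minimal sketch that only prints the commands (on a node where dladm is available, you could drop the echo to execute them):

```shell
# Sketch: print the dladm command for each link:vnic pair used by this
# configuration. Drop the 'echo' to execute on a real cluster node.
for pair in net1:vnic1 net5:vnic5; do
  link=${pair%%:*}    # portion before the colon
  vnic=${pair##*:}    # portion after the colon
  echo "dladm create-vnic -l $link $vnic"
done
```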
# clzc configure TX-zc-xip
TX-zc-xip: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:TX-zc-xip> create
clzc:TX-zc-xip> set zonepath=/zones/TX-zc-xip
clzc:TX-zc-xip> set brand=labeled
clzc:TX-zc-xip> set enable_priv_net=true
clzc:TX-zc-xip> set ip-type=exclusive
clzc:TX-zc-xip> add node
clzc:TX-zc-xip:node> set physical-host=ptest1
clzc:TX-zc-xip:node> set hostname=vztest1d
clzc:TX-zc-xip:node> add net
clzc:TX-zc-xip:node:net> set physical=net3
clzc:TX-zc-xip:node:net> end
clzc:TX-zc-xip:node> add privnet
clzc:TX-zc-xip:node:privnet> set physical=vnic1
clzc:TX-zc-xip:node:privnet> end
clzc:TX-zc-xip:node> add privnet
clzc:TX-zc-xip:node:privnet> set physical=vnic5
clzc:TX-zc-xip:node:privnet> end
clzc:TX-zc-xip:node> end
clzc:TX-zc-xip> add node
clzc:TX-zc-xip:node> set physical-host=ptest2
clzc:TX-zc-xip:node> set hostname=vztest2d
clzc:TX-zc-xip:node> add net
clzc:TX-zc-xip:node:net> set physical=net3
clzc:TX-zc-xip:node:net> end
clzc:TX-zc-xip:node> add privnet
clzc:TX-zc-xip:node:privnet> set physical=vnic1
clzc:TX-zc-xip:node:privnet> end
clzc:TX-zc-xip:node> add privnet
clzc:TX-zc-xip:node:privnet> set physical=vnic5
clzc:TX-zc-xip:node:privnet> end
clzc:TX-zc-xip:node> end
clzc:TX-zc-xip> verify
clzc:TX-zc-xip> commit
Jul 31 15:43:19 ptest1 Cluster.RGM.rgmdstarter: did_update called
Jul 31 15:43:19 ptest1 Cluster.RGM.rgmdstarter: new cluster TX-zc-xip added
Listing 10
Set the label for the zone cluster.

# tncfg -z TX-zc-xip
tncfg:TX-zc-xip> set label=PUBLIC
tncfg:TX-zc-xip> exit
Install the zone cluster TX-zc-xip. You can do this from the node ptest1.

# clzc install TX-zc-xip
Configure a name service for the zone cluster by using the txzonemgr utility. Ensure that the DISPLAY environment variable is set.

For example:
# DISPLAY=scc60:2
# export DISPLAY
# txzonemgr
Select the global zone. Then, select to configure a per-zone name service.
To complete the sysid configuration on an Exclusive-IP labeled-branded zone cluster, perform the following steps for one zone cluster node at a time:
# zoneadm -z TX-zc-xip boot
# zlogin TX-zc-xip
# sysconfig unconfigure
# reboot
The zlogin
session terminates during the reboot.
After the reboot, log in to the zone console using the zlogin command and progress through the interactive screens.

# zlogin -C TX-zc-xip

Use the sysconfig screens to set up the host name, IP address, LDAP server (if applicable), DNS, and locale. Ensure that you do not enable NIS. When finished, exit the zone console and halt the zone.

# zoneadm -z TX-zc-xip halt
# clzc boot TX-zc-xip
Set the root password.
# zlogin TX-zc-xip
# passwd
# exit
On the first zone cluster node, vztest1d, create an IPMP group for the public interface.

# ipadm show-addr
# ipadm delete-addr net3/v4
# ipadm delete-addr net3/v6
# ipadm create-ipmp XIPZCipmp0
# ipadm add-ipmp -i net3 XIPZCipmp0
# ipadm create-addr -T static -a local=10.134.99.192/24 XIPZCipmp0
# ipadm show-if
# ipmpstat -g
On the second zone cluster node, vztest2d, create an IPMP group for the public interface.
# ipadm show-addr
# ipadm delete-addr net3/v4
# ipadm delete-addr net3/v6
# ipadm create-ipmp XIPZCipmp0
# ipadm add-ipmp -i net3 XIPZCipmp0
# ipadm create-addr -T static -a local=10.134.99.195/24 XIPZCipmp0
# ipadm show-if
# ipmpstat -g
To verify the IPMP groups, on vztest1d, type the following command:
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
XIPZCipmp0 XIPZCipmp0 ok -- net3
On vztest2d, type the following command:
# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
XIPZCipmp0 XIPZCipmp0 ok -- net3
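The STATE column of ipmpstat -g is the quick health signal here. A self-contained sketch that checks it against a sample data row matching the output above (on a node you would parse the live ipmpstat -g output, skipping its header line):

```shell
# Sketch: check the STATE column (field 3) of an 'ipmpstat -g' data row.
# The sample row matches the XIPZCipmp0 output shown above.
row='XIPZCipmp0 XIPZCipmp0 ok -- net3'
echo "$row" | awk '$3 == "ok" { print "IPMP group " $1 " is healthy on " $5; next }
                   { print "IPMP group " $1 " is degraded" }'
```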
Add the IP addresses of the zone cluster nodes to the cipso template. First, list the addresses that are configured on each node.

On vztest1d, run the following:
# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
XIPZCipmp0/v4 static ok 10.134.99.192/24
vnic1/? static ok 172.16.4.1/26
vnic5/? static ok 172.16.4.65/26
clprivnet1/? static ok 172.16.3.193/26
lo0/v6 static ok ::1/128
On vztest2d, run the following:
# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
XIPZCipmp0/v4 static ok 10.134.99.195/24
vnic1/? static ok 172.16.4.2/26
vnic5/? static ok 172.16.4.66/26
clprivnet1/? static ok 172.16.3.194/26
lo0/v6 static ok ::1/128
Add the following addresses to the cipso template using the tncfg command:
10.134.99.192
10.134.99.195
172.16.4.1
172.16.4.65
172.16.3.193
172.16.4.2
172.16.4.66
172.16.3.194
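Typing eight add host commands interactively is tedious, so the lines can be generated in one pass. This sketch only prints the generated tncfg commands; feeding them to tncfg (for example, via a saved command file, if you use that mode) is left as an assumption and is not shown:

```shell
# Sketch: generate the 'add host=' lines for the cipso template from the
# zone-cluster node addresses listed above. Printing is all this does;
# applying them through tncfg is a separate, manual step.
addrs='10.134.99.192 10.134.99.195 172.16.4.1 172.16.4.65
172.16.3.193 172.16.4.2 172.16.4.66 172.16.3.194'
for a in $addrs; do
  echo "add host=$a"
done
```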
In addition, you can add other external hosts to the cipso template. These external hosts must be trusted and must communicate with the Trusted Zone Cluster nodes. For two-way communication, add the public interfaces of the zone cluster nodes to the cipso templates of the external hosts that you added.
For example:
# tncfg -t cipso
tncfg:cipso> add host=10.134.99.192
The zone cluster is now ready to be configured for a failover application.
The procedure for configuring a failover application is similar to that for configuring a regular zone cluster. Note that pxfs file systems cannot be mounted inside a labeled-branded zone cluster in read-write mode.

The following process shows how to create a failover resource group in the Trusted Zone Cluster with an IP address resource and a storage resource. It uses an example and makes the following assumptions:
On each node (ptest1 and ptest2), a non-shared disk slice must be selected to create the local metadb. In this example, each node has a local boot disk, c3t0d0, and another local disk, c3t1d0, on which a slice s4 of size 1 GB is reserved. The metadb is created on that slice.

# metadb -a -c 3 -f c3t1d0s4
Create the metaset (testdg) and file system (/testfs) in the global zone. On one of the nodes, ptest1, select the DID disks that are going to be added to the device group. This example uses the following disks: /dev/did/rdsk/d6 and /dev/did/rdsk/d7.
# metaset -s testdg -a -h ptest1 ptest2
# metaset -s testdg -a -m ptest1 ptest2
# metaset -s testdg -a /dev/did/rdsk/d6 /dev/did/rdsk/d7
# metainit -s testdg d0 1 1 /dev/did/rdsk/d6s0
# metainit -s testdg d1 1 1 /dev/did/rdsk/d7s0
# metainit -s testdg d10 -m d0
# metattach -s testdg d10 d1
# newfs /dev/md/testdg/rdsk/d10
Add the logical host name and the failover file system to the zone cluster configuration. This example uses the host name test-5, which is available for use in the zone cluster.
# clzc configure TX-zc-xip
clzc:TX-zc-xip> add net
clzc:TX-zc-xip:net> set address=test-5
clzc:TX-zc-xip:net> verify
clzc:TX-zc-xip:net> end
clzc:TX-zc-xip> commit
clzc:TX-zc-xip> add fs
clzc:TX-zc-xip:fs> set dir=/testfs
clzc:TX-zc-xip:fs> set raw=/dev/md/testdg/rdsk/d10
clzc:TX-zc-xip:fs> set special=/dev/md/testdg/dsk/d10
clzc:TX-zc-xip:fs> set options=rw,logging
clzc:TX-zc-xip:fs> set type=ufs
clzc:TX-zc-xip:fs> info
fs:
dir: /testfs
special: /dev/md/testdg/dsk/d10
raw: /dev/md/testdg/rdsk/d10
type: ufs
options: [rw,logging]
cluster-control: true
clzc:TX-zc-xip:fs> verify
clzc:TX-zc-xip:fs> end
clzc:TX-zc-xip> commit
clzc:TX-zc-xip> exit
Listing 11
On each zone cluster node, create the mount point for the file system, and then reboot.

# zlogin TX-zc-xip
# mkdir /testfs
# reboot
Create a failover resource group (testrg) with the logical host name resource (test-5) and the storage resource (has-res). From one of the nodes, log in to the zone cluster.
# zlogin TX-zc-xip
# cd /usr/cluster/bin
# ./clrt register SUNW.HAStoragePlus
# ./clrg create testrg
# ./clrslh create -g testrg -h test-5 test-5
# ./clrs create -g testrg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/testfs has-res
# ./clrg manage testrg
# ./clrg online testrg
# ./clrg status
=== Cluster Resource Groups ===
Group Name Node Name Suspended Status
---------- --------- --------- ------
testrg vztest1d No Online
vztest2d No Offline
# ./clrs status
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
has-res vztest1d Online Online
vztest2d Offline Offline
test-5 vztest1d Online Online
vztest2d Offline Offline - LogicalHostname online.
Listing 12
To verify failover, switch the resource group to the other node.

# ./clrg switch -n vztest2d testrg
# ./clrs status
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
has-res vztest1d Offline Offline
vztest2d Online Online
test-5 vztest1d Offline Offline
vztest2d Online Online - LogicalHostname online.
Listing 13
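The switchover can also be verified mechanically by checking that every resource row for the target node reports Online. A self-contained sketch against sample rows matching Listing 13 (on the cluster you would parse the live ./clrs status output instead):

```shell
# Sketch: confirm that both resources are Online on vztest2d after the
# switch. Sample row fields: resource, node, state, status message.
rows='has-res vztest2d Online Online
test-5 vztest2d Online Online'
echo "$rows" | awk '$3 != "Online" { bad=1 } END { print (bad ? "switchover incomplete" : "all resources online on vztest2d") }'
```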
To the above resource group, which contains a network resource and a storage resource, you can add an application resource intended for use in a trusted environment.
For more information, see the Oracle Solaris Cluster and Oracle Solaris Trusted Extensions product documentation.
Subarna Ganguly has worked at Sun and Oracle for over 12 years, first in the Education and Training group—primarily training customers and internal engineers on Oracle Solaris networking and Oracle Solaris Cluster products—and then as a quality engineer for the Oracle Solaris Cluster product.
Revision 1.4, 03/11/2013