by Lucia Lai
Published September 2012
How to leverage the Oracle Solaris 11 Automated Installer to automate the installation and configuration of an Oracle Solaris Cluster 4.0 cluster.
This article describes how to take advantage of the Automated Installer (AI) in Oracle Solaris 11 to install and configure Oracle Solaris Cluster 4.0.
Without the AI, you would have to manually install the cluster components on the cluster nodes, and then run the scinstall
tool to add the nodes to the cluster. If, instead, you use the AI, both the Oracle Solaris 11 and the Oracle Solaris Cluster 4.0 packages are installed onto the cluster nodes directly from Image Packaging System (IPS) repositories, and the nodes are booted into a new cluster with minimum user intervention.
Note: Currently, only cluster nodes that use IPv4 addressing can be configured using the automated installation method. No nodes can use IPv6 addressing. If any node uses an IPv6 address, use the standard (manual) IPS installation method instead of the automated process described in this article.
The Oracle Solaris Cluster 4.0 automated installation process provides "hands-free" network installation and configuration of a cluster. Figure 1 shows the components and high-level actions involved in the automated installation process.
Figure 1. Components and Actions Involved During the Automated Installation
AI Install Server
The AI install server must be a separate system that is distinct from the cluster nodes, and the AI install server must be on the same subnet as the cluster nodes.
The standard cluster configuration tool, scinstall
, is in the Oracle Solaris Cluster IPS package, ha-cluster/system/install
. You install only this package—no other cluster group packages—on the designated AI install server.
Then you run scinstall
and provide information for the installation parameters and cluster configuration. The installation parameters describe the software publishers, the origins of the repositories, and the cluster components to be installed. At the end, scinstall
creates an install service that includes an AI manifest file and a system configuration profile for each cluster node. It also prints a message that describes the DHCP configuration, the AI install server, and a boot file for each node. You use this information to set up the DHCP server.
The scinstall
utility calls installadm
, which is an Oracle Solaris command for managing AI install services. You can use installadm
to check the install service created by scinstall
for the cluster or do further customization, such as updating the manifest file of the install service.
Ensure that the AI install server meets the following hardware requirements:
Ensure that the AI install server meets the following software requirements:
Use the netstat command to show the network status. If the server does not have a default route set by discovery, you can set a static default route by populating the /etc/defaultrouter file with the IP address of a router on your server's network.

Use the svcs command to check the status of the svc:/network/dns/multicast Oracle Solaris Service Management Facility (SMF) service. If necessary, use the svcadm command to enable the service.
# svcs /network/dns/multicast
STATE STIME FMRI
disabled 10:19:28 svc:/network/dns/multicast:default
# svcadm enable /network/dns/multicast
# svcs /network/dns/multicast
STATE STIME FMRI
online 13:28:30 svc:/network/dns/multicast:default
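As a minimal sketch of the default-route check described above (the router address 192.168.0.1 used here is illustrative), you might run commands similar to the following:

# netstat -rn
# echo "192.168.0.1" > /etc/defaultrouter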
DHCP Server
The DHCP server can be a separate server or the same as the AI install server. You use the output of the scinstall
command (for example, the boot file information for each node) to configure the DHCP server. Then, when the cluster nodes boot from the network, the settings specified in the DHCP server configuration enable the cluster nodes to find the right install server and boot file to conduct the installation.
In this article, the Internet Systems Consortium (ISC) DHCP server implementation, which is available in Oracle Solaris 11, is used for the examples.
Repositories
The repositories can be either HTTP-based internet repositories or local package repositories.
Note: If you use a local package repository, it must be accessible to both the cluster nodes and the AI install server, for example, through /net/<host> as an auto-mount point.

The Oracle Solaris 11 and Oracle Solaris Cluster software is published into IPS package repositories. Each repository has a publisher. For example, solaris
is the publisher of the Oracle Solaris 11 release repository located at http://pkg.oracle.com/solaris/release.
When you execute scinstall
, it prompts you to specify the publishers and the repository locations. It then lists the group packages that are available in the Oracle Solaris Cluster repository. This information is included in the AI manifest files that are created by scinstall
.
When the nodes boot from the network, the AI installation program reads the AI manifest files, and the packages are directly installed from the repository servers onto the cluster nodes.
Cluster Nodes
Before you start the cluster installation, you need to ensure that the cluster nodes' hardware components are supported and that the hardware configuration has been set up properly, including the shared storage and the network interfaces that are used as private interconnects among the cluster nodes.
The cluster nodes must meet the following requirements in order to be AI installation clients:
After the AI install service is created and you have configured the DHCP server, the only action you need to perform on the cluster nodes is to boot them from the network to start the installation. When the installation is complete, the nodes are configured into the cluster during two automated reboots.
Note: Currently, only cluster nodes that use IPv4 addressing can be configured using the automated installation method. No nodes can use IPv6 addressing. If any node uses an IPv6 address, use the standard (manual) IPS installation method instead of the automated installation process described in this article.
When you boot a cluster node that is being installed from the network, the following high-level actions occur:
The node downloads the boot_archive file and loads Oracle Solaris.

To set up the AI install server, you configure the IPS repositories and publishers, and then you install the Oracle Solaris Cluster automated installation support package, ha-cluster/system/install
.
Configuring the IPS Repositories and Publishers
To install the ha-cluster/system/install
package on the AI install server, the AI install server must have access to both an Oracle Solaris 11 IPS package repository and an Oracle Solaris Cluster 4.0 repository. You can use an Oracle Solaris Cluster repository that is hosted on http://pkg.oracle.com or you can use a local package repository.
To use an Oracle Solaris Cluster repository hosted on http://pkg.oracle.com and obtain the required SSL public and private keys, perform the following steps. Steps for using a local repository are provided after this procedure.
A certification page is displayed with download buttons for the key and the certificate.
The following example uses the release repository. The -k
option specifies the SSL key file; the -c
option specifies the SSL certificate file; the -g
option specifies the origin of the repository in URL format; and ha-cluster
specifies the publisher name for this repository.
# pkg set-publisher \
-k /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.key.pem \
-c /var/pkg/ssl/Oracle_Solaris_Cluster_4.0.certificate.pem \
-g https://pkg.oracle.com/ha-cluster/release/ ha-cluster
The Oracle Solaris 11 support repository can be set up using steps similar to the steps above. However, the Oracle Solaris 11 release repository does not need SSL keys, and it can be directly accessed via HTTP:
# pkg set-publisher -g http://pkg.oracle.com/solaris/release solaris
To use a local Oracle Solaris Cluster package repository, download the repository image by performing the following steps:
# gunzip /tmp/osc4.0-repo-full.iso.gz
# lofiadm -a /tmp/osc4.0-repo-full.iso /dev/lofi/1
# mount -F hsfs /dev/lofi/1 /mnt
# rsync -aP /mnt/repo /export
# share /export/repo
Then set the ha-cluster publisher for this local repository.
This example uses myrepositoryserver
as the system that shares the local copy of the repository:
# pkg set-publisher -g file:///net/myrepositoryserver/export/repo ha-cluster
A local Oracle Solaris 11 repository can be set up using similar steps:
# lofiadm -a /tmp/V28916-01.iso /dev/lofi/2
# mount -F hsfs /dev/lofi/2 /solaris_mnt
# rsync -aP /solaris_mnt/repo /solaris_export
# share /solaris_export/repo
Then set the solaris publisher for this local repository.
# pkg set-publisher -g file:///net/myrepositoryserver/solaris_export/repo solaris
Installing the Oracle Solaris Cluster Automated Installation Support Package
On the AI install server, you need to make sure the publishers for both the Oracle Solaris 11 and Oracle Solaris Cluster 4.0 repositories are set up properly. If the publisher of either repository is not set up properly, installation of the ha-cluster/system/install
package will fail.
The ha-cluster/system/install
package has a dependency on the Oracle Solaris automated install tool package, install/installadm
. If the install/installadm
package is not yet installed on the AI install server, installing the ha-cluster/system/install
package will also install the install/installadm
package.
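Before you install the cluster support package, you can optionally check whether the install/installadm package is already present (the output will vary by system):

# pkg list install/installadm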
Perform the following procedure:
Use the pkg list command to check whether the ha-cluster/system/install package is already installed:
# pkg list ha-cluster/system/install
pkg list: no packages matching 'ha-cluster/system/install' installed
When the publishers are set up for the Oracle Solaris and Oracle Solaris Cluster repositories, use the -a
option to the pkg list
command to check if the repository contains the ha-cluster/system/install
package.
Then use the pkg install command to install the package, as shown in Listing 1.
# pkg publisher
PUBLISHER TYPE STATUS URI
solaris origin online https://pkg.oracle.com/solaris/release
ha-cluster origin online https://pkg.oracle.com/ha-cluster/release
# pkg list -a ha-cluster/system/install
NAME (PUBLISHER) VERSION IFO
ha-cluster/system/install (ha-cluster) 4.0.0-0.22.1 ---
# pkg install ha-cluster/system/install
Packages to install: 2
Create boot environment: No
Create backup boot environment: No
DOWNLOAD PKGS FILES XFER (MB)
Completed 2/2 58/58 0.4/0.4
PHASE ACTIONS
Install Phase 136/136
PHASE ITEMS
Package State Update Phase 2/2
Image State Update Phase 2/2
Listing 1. Installing the Package
Next, you need to perform some customizations.
Downloading the Oracle Solaris 11 Automated Installer Boot Image ISO File
You need to download the Automated Installer boot image ISO file for the Oracle Solaris 11 11/11 release, which is the release required by Oracle Solaris Cluster 4.0. The binaries contained in the boot image file are later loaded onto the cluster nodes during the boot-from-network process, and they allow the installation to proceed.
# ls /export/home/*.iso
/export/home/sol-11-1111-ai-x86.iso
Customizing the Cluster Installation
The scinstall
utility is the tool for setting up the automated cluster installation and configuration. Before you use the scinstall
utility, you need to plan the cluster software installation and configuration, including determining what cluster software components to install and setting up the IP addresses of the cluster nodes in any name services that are used.
The MAC address of each cluster node is used as a client identifier on the servers. Collect the MAC addresses and have them ready when you run scinstall
. You will also need to specify each node's MAC address when setting up the DHCP configuration.
To customize the cluster installation, perform the following steps:
Start the scinstall utility on the AI install server, and select option 1 (Install and configure a cluster from this Automated Installer install server), as shown in Listing 2.
# /usr/cluster/bin/scinstall
*** Main Menu ***
Please select from one of the following (*) options:
* 1) Install and configure a cluster from this Automated Installer install server
* 2) Print release information for this Automated Installer install server
* ?) Help with menu options
* q) Quit
Option: 1
Listing 2. Starting the scinstall Utility
>>> Custom Automated Installer Boot Image ISO File <<<
Automated Installer uses a minimal boot image to boot the client. This
boot image ISO file is required to set up the installation. You must
download this file and save it in a directory that can be accessed
from this install server. Refer to your Oracle Solaris Cluster
installation document for instructions to download this file.
Be sure to download the Automated Installer boot image and not the
live CD image or the text install image. Download the SPARC Automated
Installer boot image for SPARC nodes, or download the x86 Automated
Installer boot image for x86 nodes. This file must be the same version
as the Oracle Solaris OS release that you plan to install on the
cluster nodes.
What is the full path name of the AI boot image ISO file? /export/home/sol-11-1111-ai-x86.iso
Listing 3. Specifying the Location of the ISO File
>>> Custom Automated Installer User root <<<
After the automated installation is complete, you will need "root"
user authorization to access to the cluster nodes. A password is
required for the "root" account.
Password for root:
Re-type password to confirm:
>>> Custom Automated Installer Repositories <<<
The Oracle Solaris and Oracle Solaris Cluster packages are published
into IPS package repositories, and the packages in the latest release
are installed directly from the repositories to the cluster nodes.
Refer to Oracle Solaris and Oracle Solaris Cluster documentation for
the locations of the repositories.
You can also create a local package repository rather than using a
Internet http based package repository, and make the repository
accessible to the cluster nodes and this install server with autofs
mount point in directory /net/<host>. Refer to Oracle Solaris
installation documentation for creating local package repository.
What is the publisher for Oracle Solaris [solaris]?
What is the repository of publisher "solaris"? http://pkg.oracle.com/solaris/release
Accessing the repository ...done
What is the publisher for Oracle Solaris Cluster [ha-cluster]?
What is the repository of publisher "ha-cluster"? /net/myrepositoryserver/export/repo
Listing 4. Specifying the Publisher and Repository Information
The cluster software components that are available in the specified repositories are listed for selection.
This example selects all of the components, which corresponds to the ha-cluster-full group package, as shown in Listing 5:
Select the Oracle Cluster components that you want to install:
Package Description
1) ha-cluster-framework-minimal Oracle Solaris Cluster Framework minimal group package
2) ha-cluster-framework-full Oracle Solaris Cluster Framework full group package
3) ha-cluster-data-services-full Oracle Solaris Cluster Data Services full group package
4) ha-cluster-geo-full Oracle Solaris Cluster Geographic Edition full group package
5) All All above components
Option(s): 5
Listing 5. Selecting the Components to Install
Next, you will need to choose which mode, Typical or Custom, you will use to specify the cluster configuration. In Typical mode, some of the cluster configuration values are set to default values, while in Custom mode, you can customize the values.
You will need to specify the following cluster configuration parameters regardless of the mode:
You can use the dladm
command to query the MAC address of an installed adapter:
# dladm show-phys -m net0
LINK SLOT ADDRESS INUSE CLIENT
net0 primary 0:14:4f:7e:57:9a yes net0
In Oracle Solaris 11, network adapters are given vanity names, in the form netX. When you use the AI method to install and configure the cluster, both vanity and nonvanity adapter names (such as e1000g1) can be specified for the transport adapters. If nonvanity transport adapter names are specified, the cluster configuration service automatically converts them to vanity names when it adds the node to the cluster.

In Typical mode, except for the configuration items above, which require user input, all other cluster configuration parameters are set to the default values shown in Table 1. To specify nondefault settings, you will need to choose the Custom mode.
Table 1. Configuration Values
Cluster Configuration | Default Values (Typical Mode) | Examples of Other Values That Can Be Specified (Custom Mode) |
---|---|---|
Adding node authentication | sys | des |
Number of private networks | two | one |
Transport switch names | switch1, switch2 | Any string |
Private network address | 172.16.0.0 | |
Private network mask | 255.255.240.0 | |
Maximum number of nodes | 64 | 2 ~ 64 |
Maximum number of private networks | 10 | 2 ~ 128 |
Maximum number of virtual clusters | 12 | Any number |
Global fencing | on | off |
>>> Typical or Custom Mode <<<
This tool supports two modes of operation, Typical mode and Custom
mode. For most clusters, you can use Typical mode. However, you might
need to select the Custom mode option if not all of the Typical mode
defaults can be applied to your cluster.
For more information about the differences between Typical and Custom
modes, select the Help option from the menu.
Please select from one of the following options:
1) Typical
2) Custom
?) Help
q) Return to the Main Menu
Option [1]:
Listing 6. Specifying the Mode
Confirming the Cluster Configuration
When you have answered all the questions, scinstall
converts all of your specifications into command-line options for each node and requests confirmation, as shown in Listing 7. See the scinstall
(1M) man page for more information about the command-line options.
>>> Confirmation <<<
Your responses indicate the following options to scinstall:
-----------------------------------------
For node "phys-schost-1",
scinstall -c /export/home/sol-11-1111-ai-x86.iso -h phys-schost-1 \
-C clusterA \
-F \
-G lofi \
-W solaris=http://pkg.oracle.com/solaris/release::entire,server_install:::
ha-cluster=/net/myrepositoryserver/export/repo::ha-cluster-framework-minimal,ha-cluster-framework-full,
ha-cluster-data-services-full,ha-cluster-geo-full \
-n ip=192.168.100.98/24,mac=0:14:4f:1:dd:c0 \
-T node=phys-schost-1,node=phys-schost-2,authtype=sys \
-w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=64,maxprivatenets=10,numvirtualclusters=12 \
-A trtype=dlpi,name=net1 -A trtype=dlpi,name=bge1 \
-B type=switch,name=switch1 -B type=switch,name=switch2 \
-m endpoint=:net1,endpoint=switch1 \
-m endpoint=:bge1,endpoint=switch2 \
-P task=quorum,state=INIT \
-U /var/cluster/run/scinstall/scinstall.pwd.1866
Are these the options you want to use (yes/no) [yes]?
-----------------------------------------
For node "phys-schost-2",
scinstall -c /export/home/sol-11-1111-ai-x86.iso -h phys-schost-2 \
-C clusterA \
-N phys-schost-1 \
-G lofi \
-W solaris=http://pkg.oracle.com/solaris/release::entire,server_install:::
ha-cluster=/net/myrepositoryserver/export/repo::ha-cluster-framework-minimal,ha-cluster-framework-full,
ha-cluster-data-services-full,ha-cluster-geo-full \
-n ip=192.168.100.99/24,mac=0:14:4f:1:da:fc \
-A trtype=dlpi,name=e1000g1 -A trtype=dlpi,name=net3 \
-m endpoint=:e1000g1,endpoint=switch1 \
-m endpoint=:net3,endpoint=switch2 \
-U /var/cluster/run/scinstall/scinstall.pwd.1866
Are these the options you want to use (yes/no) [yes]?
Listing 7. Confirming the Options
At the end of the scinstall
execution, a list of the DHCP definitions that need to be created on the DHCP server is displayed. The DHCP definitions are slightly different for SPARC and x86.
The output in Listing 8 is for x86 nodes:
Do you want to continue with Automated Installer set up (yes/no) [yes]?
Creating Automated Installer install service clusterA-sol-11-1111-ai-x86 ... done
Creating client to install service for node "phys-schost-1" ... done
Adding Automated Installer manifest to install service for node "phys-schost-1" ... done
Creating Automated Installer system configuration profile for node "phys-schost-1" ... done
Before you boot node "phys-schost-1" to install from the network, you must
register the node with a DHCP server using 0100144F01DDC0 as the
client ID, and create DHCP macro for this node with definition
":BootSrvA=192.168.100.1:BootFile=0100144F01DDC0:".
Creating client to install service for node "phys-schost-2" ... done
Adding Automated Installer manifest to install service for node "phys-schost-2" ... done
Creating Automated Installer system configuration profile for node "phys-schost-2" ... done
Before you boot node "phys-schost-2" to install from the network, you must
register the node with a DHCP server using 0100144F01DAFC as the
client ID, and create DHCP macro for this node with definition
":=192.168.100.1:BootFile=0100144F01DAFC:".
Listing 8. Output for x86 Nodes
The output in Listing 9 is for a SPARC node:
Do you want to continue with Automated Installer set up (yes/no) [yes]?
Creating Automated Installer install service clusterB-sol-11-1111-ai-sparc ... done
Creating client to install service for node "phys-schost-3" ... done
Adding Automated Installer manifest to install service for node "phys-schost-3" ... done
Creating Automated Installer system configuration profile for node "phys-schost-3" ... done
Before you boot node "phys-schost-3" to install from the network, you must
register the node with a DHCP server using 0100144F025030 as the
client ID, and create DHCP macro for this node with definition
":BootSrvA=192.168.100.1:BootFile=http://192.168.100.1:5555/cgi-bin/wanboot-cgi:".
Listing 9. Output for SPARC Node
One AI install service is created for each cluster. The name of the install service is in the format clustername-ISOfilename, without the .iso extension.
You can list the install service, and the AI manifest and system configuration profile that are associated with the service for each client, using the installadm
command, as shown in Listing 10:
# installadm list -n clusterA-sol-11-1111-ai-x86 -c -p -m
Service Name Client Address Arch Image Path
------------ -------------- ---- ----------
clusterA-sol-11-1111-ai-x86 00:14:4F:01:DA:FC i386 /rpool/ai/target/clusterA-sol-11-1111-ai-x86
00:14:4F:01:DD:C0 i386 /rpool/ai/target/clusterA-sol-11-1111-ai-x86
Manifest Status Criteria
-------- ------ --------
phys-schost-1_manifest mac = 00:14:4F:01:DD:C0
phys-schost-2_manifest mac = 00:14:4F:01:DA:FC
orig_default Default None
Profile Criteria
------- --------
phys-schost-1_profile mac = 00:14:4F:01:DD:C0
phys-schost-2_profile mac = 00:14:4F:01:DA:FC
Listing 10. Listing Information About the Service, Manifest File, and Profile
Performing Additional Installation Customization
The AI manifest file that is created for each client by running scinstall
defines the installation parameters, including the IPS package repositories and the packages to install. On the AI install server, each manifest file is at /var/cluster/logs/install/autoscinstall.d/clustername/nodename/nodename_aimanifest.xml
. These files can be updated if you want to do further customization to install extra software. The updates must be done before you start the network installation on the cluster nodes.
As shown in the software
element in the AI manifest file example in Listing 11, the automated installation installs the Oracle Solaris packages entire
and server_install
. Package entire
is an incorporation package that sets constraints on the version of the specified set of Oracle Solaris packages. Package server_install
is for the AI installation.
<software name="ips" type="IPS">
<source>
<publisher name="solaris">
<origin name="http://pkg.oracle.com/solaris/release/"/>
</publisher>
<publisher name="ha-cluster">
<origin name="file:///net/myrepositoryserver/repos/i386/Sol_11"/>
</publisher>
</source>
<software_data>
<name>entire</name>
<name>server_install</name>
<name>ha-cluster-data-services-full</name>
<name>ha-cluster-framework-full</name>
<name>ha-cluster-framework-minimal</name>
<name>ha-cluster-geo-full</name>
<name>ha-cluster-full</name>
</software_data>
</software>
Listing 11. Example Manifest File
You can update this AI manifest file manually to include other packages that are not included in the Oracle Solaris entire
incorporation package. Such packages can be any extra components in the Oracle Solaris release or third-party software that is delivered in IPS format in another repository, in which case the publisher of that repository also needs to be added as a new publisher element.
The cluster packages can also be updated, but in any case, observe the following restrictions:
Do not remove the solaris and ha-cluster publishers. Do not remove the entire and server_install packages, which define the necessary packages for automated installation. Do not remove any selected ha-cluster group package.

Listing 12 is an example of using the XML elements source and software_data to add a new repository and packages.
<software name="ips" type="IPS">
<source>
<publisher name="solaris">
<origin name="http://pkg.oracle.com/solaris/release/"/>
</publisher>
<publisher name="ha-cluster">
<origin name="file:///net/myrepositoryserver/repos/i386/Sol_11"/>
</publisher>
<publisher name="my-ips">
<origin name="file:///net/my-ips-repository/"/>
</publisher>
</source>
<software_data>
<name>entire</name>
<name>server_install</name>
<name>ha-cluster-data-services-full</name>
<name>ha-cluster-framework-full</name>
<name>my-ips-group-package</name>
</software_data>
</software>
Listing 12. Adding a New Repository and Packages
After you update the AI manifest file, use the installadm
command to replace the old AI manifest file that is already associated with the install service. If needed, you can list the service name and the manifest name using the installadm list
command, as shown earlier in Listing 10.
# installadm update-manifest \
-n clusterA-sol-11-1111-ai-x86 \
-f /var/cluster/logs/install/autoscinstall.d/clusterA/phys-schost-1/phys-schost-1_aimanifest.xml \
-m phys-schost-1_manifest
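To confirm that the install service now uses the updated manifest, you can list the manifests for the service again, using the same installadm list options shown in Listing 10:

# installadm list -n clusterA-sol-11-1111-ai-x86 -m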
Automated installation requires you to configure the install server and boot file for the clients in a DHCP environment. A DHCP server enables you to configure the clients to automatically boot and get information from the network. The DHCP server stores information about the network configuration for the clients, the install server's IP address, and the boot files that conduct the client network installation.
Before you start booting the clients over the network, you need to create the installation definitions on the DHCP server.
Installing the ISC DHCP Server Package
The Internet Systems Consortium (ISC) DHCP server is available in Oracle Solaris 11, but it might not be installed by default; if it is not, you need to install it manually.
If you need to install it, on the DHCP server, set the solaris
publisher with an Oracle Solaris 11 repository, and install the package as follows:
# pkg publisher
PUBLISHER TYPE STATUS URI
solaris origin online https://pkg.oracle.com/solaris/release
# pkg install service/network/dhcp/isc-dhcp
Configuring the ISC DHCP Server
After you install the package, create the /etc/dhcp/dhcpd4.conf
(for DHCPv4) file. This file is an ASCII text file that contains a list of statements that specify the configuration information for the ISC DHCP server daemon, dhcpd
.
# No IP forwarding
option ip-forwarding false;
# The default time in seconds to lease IP to clients
default-lease-time 600;
# Maximum allowed time in seconds to lease IP to clients
max-lease-time 86400;
#
# Domain name and name servers to distribute to clients that
# is common to all supported networks.
#
option domain-name "oracle.com";
option domain-name-servers 192.168.0.1;
# Make this DHCP server the official DHCP server for all the configured local networks.
authoritative;
Listing 13. Global Parameters
Next, add a subnet declaration for every subnet that the DHCP server controls and connects to, which tells dhcpd how to recognize an address that is on that subnet.
#
# Declare subnet with the netmask to use, the IP range for clients, broadcast
# address to use, and the router to use
#
subnet 192.168.0.0 netmask 255.255.255.0 {
range 192.168.0.2 192.168.0.100;
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.0.255;
option routers 192.168.0.1;
}
Listing 14. Subnet Declaration
The cluster nodes must be configured as clients that have static IP addresses. Such clients must have a host
declaration in the dhcpd
configuration file that defines each node's fixed address and a hardware
parameter that uses each node's Ethernet MAC address as an identifier.
You can group the cluster nodes' statements to specify statements that are shared by more than one node, as shown in Listing 15.
# x86 node declaration for its MAC identifier, static IP, and boot file. The install
# server statement applies to all the nodes in this group.
group {
next-server 192.168.100.1;
host phys-schost-1 {
hardware ethernet 00:14:4F:01:DD:C0;
fixed-address 192.168.100.98;
filename "0100144F01DDC0";
}
host phys-schost-2 {
hardware ethernet 00:14:4F:01:DA:FC;
fixed-address 192.168.100.99;
filename "0100144F01DAFC";
}
}
# A SPARC node declaration.
host phys-schost-3 {
hardware ethernet 00:14:4F:02:50:30;
fixed-address 192.168.100.97;
filename "http://192.168.100.1:5555/cgi-bin/wanboot-cgi";
}
Listing 15. Grouping the Cluster Nodes' Statements
For x86 nodes, use the next-server
statement, as shown in Listing 15, to specify the IP address of the AI install server. The IP address of the AI install server is indicated in the scinstall
output by the value of BootSrvA
, as shown below. Use the filename
statement to specify the boot file, which is indicated by the value of BootFile
in the scinstall
output.
Before you boot node "phys-schost-1" to install from the network, you must
register the node with a DHCP server using 0100144F01DDC0 as the
client ID, and create DHCP macro for this node with definition
":BootSrvA=192.168.100.1:BootFile=0100144F01DDC0:".
For x86 nodes, the boot file name is in the form of 01 followed by the node's MAC address with the hyphen or colon separators removed. On the install server, this name is a link to the install service created for the cluster nodes.
# cd /etc/netboot
# ls -l 0100144F01DDC0
lrwxrwxrwx 1 root root 47 Nov 19 08:48 0100144F01DDC0 -> ./clusterA-sol-11-1111-ai-x86/boot/grub/pxegrub
For SPARC nodes, use the filename
statement, as shown in Listing 15, to specify the BootFile value from the scinstall output, as shown below. The BootFile
value includes the install server's IP address, so the next-server
statement is not needed.
Before you boot node "phys-schost-3" to install from the network, you must
register the node with a DHCP server using 0100144F025030 as the
client ID, and create DHCP macro for this node with definition
":BootSrvA=192.168.100.1:BootFile=http://192.168.100.1:5555/cgi-bin/wanboot-cgi:"
Starting the ISC DHCP Server
After creating the dhcpd
configuration file, enable the DHCP services that will start the dhcpd
server, as shown below. The network/dhcp/server:ipv4
service serves DHCP and BOOTP requests from IPv4 clients, and the network/dhcp/relay:ipv4
service relays DHCP and BOOTP requests from IPv4 clients to a network with a DHCP server.
# svcadm enable svc:/network/dhcp/server:ipv4
# svcadm enable svc:/network/dhcp/relay:ipv4
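As a quick check (the STIME value shown here is illustrative), you can verify that the DHCP server service is online:

# svcs svc:/network/dhcp/server:ipv4
STATE STIME FMRI
online 14:05:12 svc:/network/dhcp/server:ipv4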
Updating the DHCP Configuration
Any time you make a change to the dhcpd
configuration file, refresh the services so that they use the newly updated data:
# svcadm refresh svc:/network/dhcp/server:ipv4
# svcadm refresh svc:/network/dhcp/relay:ipv4
After you have set up the AI install server and configured the DHCP server, you can start the cluster nodes' installation over the network. When the installation is complete, the nodes automatically boot and join the cluster.
Installing the Cluster Nodes
For SPARC cluster nodes, start the installation by booting each node from the network at the OpenBoot PROM ok prompt:

ok boot net:dhcp - install
For x86 cluster nodes, start the installation by using one of the following methods to boot from the network:
When the cluster nodes boot up, the GRUB menu is displayed with two menu entries similar to the following:
Oracle Solaris 11 11/11 Text Installer and command line
Oracle Solaris 11 11/11 Automated Install
Select the Automated Install entry and press Return.
By default, the Text Installer
entry is selected. If the Automated Install
entry is not selected within 20 seconds, the installation proceeds using the default interactive text installer method, which will not install and configure the Oracle Solaris Cluster software.
Verifying the Cluster Configuration
When all the nodes finish rebooting, the cluster installation and configuration is complete.
Use the svcs commands shown in Listing 16 to check whether any cluster SMF services are in maintenance mode.
The svc:/system/cluster/sc-ai-config:default service goes to the online state only once; it disables itself after it has run.
Then run the cluster status command to check the overall cluster state, as shown in Listing 16.
# svcs -x
# svcs svc:/system/cluster/sc-ai-config:default
STATE STIME FMRI
disabled Nov_19 svc:/system/cluster/sc-ai-config:default
# /usr/cluster/bin/cluster status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
-------------- -------
phys-schost-1 Online
phys-schost-2 Online
=== Cluster Transport Paths ===
Endpoint1 Endpoint2 Status
------------ ------------ -------
phys-schost-1:net3 phys-schost-2:net3 Path online
phys-schost-1:net1 phys-schost-2:net1 Path online
=== Cluster Quorum ===
--- Quorum Votes Summary from (latest node reconfiguration) ---
Needed Present Possible
--------- --------- ----------
2 3 3
--- Quorum Votes by Node (current status) ---
Node Name Present Possible Status
-------------- -------- --------- -------
phys-schost-1 1 1 Online
phys-schost-2 1 1 Online
--- Quorum Votes by Device (current status) ---
Device Name Present Possible Status
----------- --------- ---------- -------
d1 1 1 Online
=== Cluster Device Groups ===
--- Device Group Status ---
Device Group Name Primary Secondary Status
----------------- --------- ---------- -------
--- Spare, Inactive, and In Transition Nodes ---
Device Group Name Spare Nodes Inactive Nodes In Transition Nodes
----------------- ----------- -------------- --------------------
--- Multi-owner Device Group Status ---
Device Group Name Node Name Status
------------------ ----------- --------
=== Cluster Resource Groups ===
Group Name Node Name Suspended State
------------- --------- ------------ ------
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- ---------- ------ ------------------
=== Cluster DID Devices ===
Device Instance Node Status
--------------- ------- -------
/dev/did/rdsk/d1 phys-schost-1 Ok
phys-schost-2 Ok
/dev/did/rdsk/d2 phys-schost-1 Ok
phys-schost-2 Ok
/dev/did/rdsk/d3 phys-schost-1 Ok
/dev/did/rdsk/d4 phys-schost-1 Ok
/dev/did/rdsk/d5 phys-schost-2 Ok
/dev/did/rdsk/d6 phys-schost-2 Ok
=== Zone Clusters ===
--- Zone Cluster Status ---
Name Node Name Zone HostName Status Zone Status
------ ---------- --------------- ------- --------------
Listing 16. Checking the Cluster Status
This automated installation method performs only the initial cluster setup. Other cluster objects, including device groups, storage, cluster file systems, zone clusters, and resources and resource groups for various data services, are not automatically created; you can configure them at any time after the initial cluster setup.
Checking the AI Installation Log Files
When the automated installation starts, progress messages are printed to both the console and a log file on each node. The log files contain more details.
During the installation, you can log in to a node as root using solaris
as the password to check the log file, /system/volatile/install_log
, for progress. Note that this root account is included in the AI image; it is not the same as the root account created after the installation is done. Also, be aware that this log file might not be updated by the installer until some time after the IPS package installation starts.
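For example, you might follow the installation progress from the node's console with a command similar to the following:

# tail -f /system/volatile/install_log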
After the node boots up for the first time, the install log file is moved to /var/log/install/install_log
.
The AI manifest file that conducted the installation is at /var/log/install/ai.xml
. This file is downloaded from the AI install server to the cluster nodes when the nodes boot from the network.
Checking the Cluster Configuration Log Files
Adding the nodes to the cluster is started by the SMF service svc:/system/cluster/sc-ai-config:default
. This service configures the static IP addresses for the nodes and calls scinstall
to add the nodes to the cluster. The install log is at /var/cluster/logs/install/scinstall.log.*
.
The system log file /var/adm/messages
also has messages related to the installation and cluster configuration.
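For example, to locate the cluster configuration log and scan the system log for cluster-related messages (the grep pattern here is illustrative), you might run:

# ls /var/cluster/logs/install/scinstall.log.*
# grep -i cluster /var/adm/messages | tail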
scinstall(1M) man page: http://docs.oracle.com/cd/E23623_01/html/E23628/scinstall-1m.html#scrolltoc

Lucia Lai is a software developer in the Oracle Solaris Cluster Group. Her responsibilities include, but are not limited to, Oracle Solaris Cluster system management, including installation and configuration, upgrade, and command-line commands.
Revision 1.0, 09/06/2012