How to Map Old CLI Commands to New CLI Commands in Oracle Solaris Cluster

by Veerareddy Papareddy
Published April 2013

This article is a reference that lists the old CLI commands and the corresponding new commands that perform the equivalent tasks.

A new object-oriented command set was introduced with Oracle Solaris Cluster 3.2 to comply with the Command-Line Interface Paradigm (CLIP) guidelines specified by Sun Microsystems. The commands used to administer earlier versions, such as Oracle Solaris Cluster 3.0 and 3.1, were unintuitive, generated confusing error messages, were few in number, and required several options for a single task.

The new command-line interface (CLI) for Oracle Solaris Cluster 3.2 included a separate command for each cluster object type and had consistent subcommand names and option letters. These new CLI commands were intuitive, easy to remember, and reusable, and they generated consistent error messages.

Easy replication of configurations is one of the biggest benefits of the new CLI commands. Most of the commands support the export subcommand, which outputs a cluster configuration to XML. In addition, most of the create subcommands accept the -input option and use XML files to create objects in the operand list.
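
For example, a configuration captured from one cluster can be replayed to create matching objects elsewhere. The following is a minimal sketch, assuming the resource group is already defined in the exported file; the file name /tmp/cluster-config.xml and the resource group name app-rg are hypothetical:

    # Export the current cluster configuration to an XML file
    cluster export -o /tmp/cluster-config.xml

    # Create a resource group on another cluster from that XML file
    clresourcegroup create -i /tmp/cluster-config.xml app-rg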

The old CLI commands were removed from the product in Oracle Solaris Cluster 4.4, so only the new CLI commands are available starting with that release. The new CLI commands can be used on Oracle Solaris 10 (with the Oracle Solaris Cluster 3.2 and 3.3 updates) and on Oracle Solaris 11. You can use both the old and the new CLI commands to manage clusters in Oracle Solaris Cluster versions 3.2, 3.3, 4.0, 4.1, 4.2, and 4.3.

The following sections list the old CLI commands and the corresponding new commands that perform the equivalent tasks.

Viewing Cluster Information

Table 1 lists the old and new CLI commands that are used to view cluster information.

Table 1

View quorum information.
    Old CLI: scstat -q
    New CLI: clquorum status
             clquorum show

View cluster components.
    Old CLI: scstat -pv
    New CLI: cluster status

View resource group or resource status.
    Old CLI: scstat -g
    New CLI: clresourcegroup status
             clresource status

View IP multipathing (IPMP) status.
    Old CLI: scstat -i
    New CLI: clnode status -m

View the status of all nodes.
    Old CLI: scstat -n
    New CLI: clnode status
             clnode show

View disk device groups.
    Old CLI: scstat -D
    New CLI: cldevicegroup status
             cldevicegroup show

View transport information.
    Old CLI: scstat -W
    New CLI: clinterconnect status
             clinterconnect show

View detailed resource group or resource status.
    Old CLI: scrgadm -pv
    New CLI: clresourcegroup show -v
             clresource show -v

View cluster configuration information.
    Old CLI: scconf -p
    New CLI: cluster show -v

View installation information (for example, installed packages and version information).
    Old CLI: scinstall -pv
    New CLI: scinstall -pv
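
For example, a quick health check of quorum, resource groups, and nodes looks like this with each command set (a minimal sketch; run as root on a cluster node):

    # Old CLI
    scstat -q
    scstat -g
    scstat -n

    # New CLI
    clquorum status
    clresourcegroup status
    clnode status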

Configuring a Cluster

Table 2 lists the old and new CLI commands that are used to configure a cluster.

Table 2

Perform an integrity check.
    Old CLI: sccheck
    New CLI: cluster check

Configure a cluster (for example, add nodes and add data services).
    Old CLI: scinstall
    New CLI: scinstall

Use the cluster configuration utility (for example, for quorum, data services, and resource groups).
    Old CLI: scsetup
    New CLI: clsetup

Add a node to a configuration.
    Old CLI: scconf -a -T node=<host>
    New CLI: claccess allow -h <host>[,...]

Remove a node from a configuration.
    Old CLI: scconf -r -T node=<host>
    New CLI: claccess deny -h <host>[,...]

Prevent new nodes from entering the cluster.
    Old CLI: scconf -a -T node=.
    New CLI: claccess deny-all

Put a node in maintenance state.
    Old CLI: scconf -c -q node=<node>,maintstate
        Note: Use the scstat -q command to verify that the node is in maintenance mode. The vote count should be zero for that node.
    New CLI: clquorum disable -t node <node>
        Note: Use the clquorum status command to verify that the node is in maintenance mode. The vote count should be zero for that node.

Remove a node from maintenance state.
    Old CLI: scconf -c -q node=<node>,reset
        Note: Use the scstat -q command to verify that the node is not in maintenance mode. The vote count should be one for that node.
    New CLI: clquorum reset
        Note: Use the clquorum status command to verify that the node is not in maintenance mode. The vote count should be one for that node.
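
For example, to place a node in maintenance state and verify it (a minimal sketch; the node name phys-node-2 is hypothetical):

    # New CLI
    clquorum disable -t node phys-node-2
    clquorum status          # the vote count for phys-node-2 should now be zero

    # Equivalent old CLI
    scconf -c -q node=phys-node-2,maintstate
    scstat -q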

Configuring Quorum Devices

Nodes, disk devices, and quorum servers are referred to as quorum devices. The total quorum is the number of quorum votes from all the nodes and quorum devices added together. Table 3 lists the old and new CLI commands that are used to configure quorum devices.

Table 3

Add a device to the quorum.
    Old CLI: scconf -a -q globaldev=d1
        Note: If you get the error message unable to scrub device, use scgdevs to add the device to the global device namespace.
    New CLI: clquorum add d1
        Note: If you get the error message unable to scrub device, use cldevice populate to add the device to the global device namespace.

Remove a device from the quorum.
    Old CLI: scconf -r -q globaldev=d1
    New CLI: clquorum remove d1

Remove the last quorum device, evacuate all nodes, and put the cluster into maintenance mode.
    Old CLI: scconf -c -q installmode
    New CLI: cluster set -p installmode=enabled

Remove a quorum device (for example, a device named d1).
    Old CLI: scconf -r -q globaldev=d1
    New CLI: clquorum remove d1

Check the status of quorum devices after removing the last quorum device.
    Old CLI: scstat -q
    New CLI: clquorum status

Put a quorum device in maintenance mode.
    Old CLI: scconf -c -q globaldev=d1,maintstate
    New CLI: clquorum disable d1

Remove a quorum device from maintenance mode.
    Old CLI: scconf -c -q globaldev=<device>,reset
        Note: This will bring all offline quorum devices online.
    New CLI: clquorum reset
        Note: This will bring all offline quorum devices online.

Reset the quorum vote.
    Old CLI: scconf -c -q reset
        Note: This will bring all offline quorum devices online.
    New CLI: clquorum reset
        Note: This will bring all offline quorum devices online.
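
For example, to add the shared disk d1 as a quorum device and verify it (a minimal sketch; d1 must already exist in the global device namespace):

    # New CLI
    cldevice populate        # only needed if the device has not yet been added to the namespace
    clquorum add d1
    clquorum status

    # Equivalent old CLI
    scgdevs
    scconf -a -q globaldev=d1
    scstat -q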

Administering a Device

Table 4 lists the old and new CLI commands that are used to administer devices in a cluster.

Table 4

List all the configured devices, including paths, across all nodes.
    Old CLI: scdidadm -L
    New CLI: cldevice list -v

List all the configured devices, including paths, for one node only.
    Old CLI: scdidadm -l
    New CLI: cldevice list -v -n <nodename>

Reconfigure the device database, creating new instance numbers if required.
    Old CLI: scdidadm -r
    New CLI: cldevice refresh

List all the configured devices, including paths and fencing.
    Old CLI: N/A
    New CLI: cldevice show -v

Rename a device ID (DID) instance.
    Old CLI: N/A
    New CLI: cldevice rename -d <destination device> <device>

Clear DIDs that are no longer used or are nonexistent.
    Old CLI: scdidadm -C
    New CLI: cldevice clear

Perform the repair procedure for a particular path (for example, after a disk is replaced).
    Old CLI: scdidadm -R <ctd device>
             scdidadm -R <device ID>
        Note: An example of a controller-target-disk (CTD) device is c1t1d0, and the device ID can be 1, 2, or 3.
    New CLI: cldevice repair <ctd device or device ID>
        Note: An example of a CTD device is c1t1d0, and the device ID can be 1, 2, or 3.

Configure the global device namespace.
    Old CLI: scgdevs
    New CLI: cldevice populate

View the status of all disk paths.
    Old CLI: scdpm -p all:all
        Note: all:all denotes <host>:<disk>.
    New CLI: cldevice status

Monitor a device path.
    Old CLI: scdpm -m <node:disk path>
    New CLI: cldevice monitor -n <node> <disk>

Stop monitoring a device path.
    Old CLI: scdpm -u <node:disk path>
    New CLI: cldevice unmonitor -n <node> <disk>
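
For example, after a failed disk is physically replaced, the device database can be updated and the path rechecked (a minimal sketch; the CTD name c1t1d0 is hypothetical):

    # New CLI
    cldevice refresh         # rescan and update the device configuration
    cldevice repair c1t1d0   # run the repair procedure for the replaced disk
    cldevice status          # confirm the disk path is healthy again

    # Equivalent old CLI
    scdidadm -r
    scdidadm -R c1t1d0
    scdpm -p all:all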

Configuring Device Groups

Table 5 lists the old and new CLI commands that are used to configure device groups in a cluster.

Table 5

Add or register a device group.
    Old CLI: scconf -a -D type=vxvm,name=appdg,nodelist=<host>:<host>,preferenced=true
    New CLI: cldevicegroup create -t <devicegroup-type> -n <node> -d <device> <devicegroup>

Remove a device group.
    Old CLI: scconf -r -D name=<device group>
    New CLI: cldevicegroup remove-node -t <devicegroup-type> -n <node> <devicegroup>
             cldevicegroup remove-device -d <device> <devicegroup>

Add a single node.
    Old CLI: scconf -a -D type=vxvm,name=appdg,nodelist=<host>
    New CLI: cldevicegroup add-node -t <devicegroup-type> -n <node> <devicegroup>

Remove a single node.
    Old CLI: scconf -r -D name=<device group>,nodelist=<host>
    New CLI: cldevicegroup remove-node -t <devicegroup-type> -n <node> <devicegroup>

Switch a device group.
    Old CLI: scswitch -z -D <device group> -h <host>
    New CLI: cldevicegroup switch -n <host> <devicegroup>

Put a device group in maintenance mode.
    Old CLI: scswitch -m -D <device group>
    New CLI: cldevicegroup disable -t <devicegroup-type> <devicegroup>

Remove a device group from maintenance mode.
    Old CLI: scswitch -z -D <device group> -h <host>
    New CLI: cldevicegroup enable -t <devicegroup-type> <devicegroup>

Take a device group online.
    Old CLI: scswitch -z -D <device group> -h <host>
    New CLI: cldevicegroup online -t <devicegroup-type> -n <node> <devicegroup>

Take a device group offline.
    Old CLI: scswitch -F -D <device group>
    New CLI: cldevicegroup offline -t <devicegroup-type> <devicegroup>

Resync a device group.
    Old CLI: scconf -c -D name=appdg,sync
    New CLI: cldevicegroup sync -t <devicegroup-type> <devicegroup>
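
For example, to switch a device group to another node and confirm where it is now hosted (a minimal sketch; the device group appdg and the node name phys-node-2 are hypothetical):

    # New CLI
    cldevicegroup switch -n phys-node-2 appdg
    cldevicegroup status appdg

    # Equivalent old CLI
    scswitch -z -D appdg -h phys-node-2
    scstat -D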

Enabling or Disabling a Transport Cable

Table 6 lists the old and new CLI commands that are used to enable or disable a transport cable in a cluster.

Table 6

Enable a transport cable.
    Old CLI: scconf -c -m endpoint=<host>:e1000g1,state=enabled
    New CLI: clinterconnect enable <host>:<interface>,<switch>

Disable a transport cable.
    Old CLI: scconf -c -m endpoint=<host>:e1000g1,state=disabled
    New CLI: clinterconnect disable <host>:<interface>,<switch>
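
For example, to take one cluster transport cable out of service and bring it back (a minimal sketch; the endpoint phys-node-1:e1000g1,switch1 is hypothetical):

    # New CLI
    clinterconnect disable phys-node-1:e1000g1,switch1
    clinterconnect status
    clinterconnect enable phys-node-1:e1000g1,switch1

    # Equivalent old CLI
    scconf -c -m endpoint=phys-node-1:e1000g1,state=disabled
    scstat -W
    scconf -c -m endpoint=phys-node-1:e1000g1,state=enabled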

Configuring Resource Groups

Table 7 lists the old and new CLI commands that are used to configure resource groups in a cluster.

Table 7

Add a resource group.
    Old CLI: scrgadm -a -g <res_group> -h <host>,<host>
    New CLI: clresourcegroup create -n <host>,<host> <res_group>

Remove a resource group.
    Old CLI: scrgadm -r -g <res_group>
    New CLI: clresourcegroup delete <res_group>

Change a resource group's properties.
    Old CLI: scrgadm -c -g <res_group> -y <property=value>
    New CLI: clresourcegroup set -p <name=value> <res_group>

List resource groups.
    Old CLI: scstat -g
    New CLI: clresourcegroup status

View a detailed resource group list.
    Old CLI: scrgadm -pv -g <res_group>
    New CLI: clresourcegroup show -v <res_group>

Take a resource group online.
    Old CLI: scswitch -Z -g <res_group>
    New CLI: clresourcegroup online <res_group>

Take a resource group offline.
    Old CLI: scswitch -F -g <res_group>
    New CLI: clresourcegroup offline <res_group>

Manage a resource group.
    Old CLI: scswitch -o -g <res_group>
    New CLI: clresourcegroup manage <res_group>

Stop managing a resource group.
    Old CLI: scswitch -u -g <res_group>
    New CLI: clresourcegroup unmanage <res_group>
        Note: All resources in the group must be disabled.

Suspend a resource group.
    Old CLI: N/A
    New CLI: clresourcegroup suspend <res_group>

Resume a resource group.
    Old CLI: N/A
    New CLI: clresourcegroup resume <res_group>

Switch a resource group.
    Old CLI: scswitch -z -g <res_group> -h <host>
    New CLI: clresourcegroup switch -n <node> <res_group>

Quiesce a resource group.
    Old CLI: scswitch -Q -g <res_group>
    New CLI: clresourcegroup quiesce <res_group>
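
For example, to create a two-node failover resource group, bring it under cluster management, and move it between nodes (a minimal sketch; the group app-rg and the node names are hypothetical):

    # New CLI
    clresourcegroup create -n phys-node-1,phys-node-2 app-rg
    clresourcegroup manage app-rg
    clresourcegroup online app-rg
    clresourcegroup switch -n phys-node-2 app-rg
    clresourcegroup status app-rg

    # Equivalent old CLI
    scrgadm -a -g app-rg -h phys-node-1,phys-node-2
    scswitch -o -g app-rg
    scswitch -Z -g app-rg
    scswitch -z -g app-rg -h phys-node-2
    scstat -g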

Configuring Resources

Table 8 lists the old and new CLI commands that are used to configure resources in a cluster.

Table 8

Add a failover network resource.
    Old CLI: scrgadm -a -L -g <res_group> -l <logicalhost>
    New CLI: clreslogicalhostname create -g <res_group> <logicalhost>

Add a shared network resource.
    Old CLI: scrgadm -a -S -g <res_group> -l <logicalhost>
    New CLI: clressharedaddress create -g <res_group> <logicalhost>

Add a failover Apache application and attach the network resource.
    Old CLI: scrgadm -a -j apache_res -g <res_group> -t SUNW.apache \
             -y Network_resources_used=<logicalhost> -y Scalable=False \
             -y Port_list=80/tcp -x Bin_dir=/usr/apache/bin
    New CLI: clresource create -g apache-failover-rg -t SUNW.apache \
             -p Port_list=80/tcp -p Bin_dir=/usr/apache2/bin -p Scalable=false \
             -p Resource_dependencies=<logical address> apache-rs

Add a shared (scalable) Apache application and attach the network resource.
    Old CLI: scrgadm -a -j apache_res -g <res_group> -t SUNW.apache \
             -y Network_resources_used=<logicalhost> -y Scalable=True \
             -y Port_list=80/tcp -x Bin_dir=/usr/apache/bin
    New CLI: clresource create -g apache-scalable-rg -t SUNW.apache \
             -p Port_list=80/tcp -p Bin_dir=/usr/apache2/bin -p Scalable=true \
             -p Resource_dependencies=<shared address> apache-scalable-rs

Create an HAStoragePlus failover resource.
    Old CLI: scrgadm -a -g <res_group> -j <res> -t SUNW.HAStoragePlus \
             -x FileSystemMountPoints=/oracle/data01 -x Affinityon=true
    New CLI: clresource create -g <res_group> -t SUNW.HAStoragePlus \
             -p FileSystemMountPoints=/oracle/data01 -p Affinityon=true <res>

Remove a resource.
    Old CLI: scrgadm -r -j <resource>
    New CLI: clresource delete <resource>
        Note: You must disable the resource first.

Change resource properties.
    Old CLI: scrgadm -c -j <resource> -y <property=value>
    New CLI: clresource set -p <property=value> <resource>

List resources.
    Old CLI: scstat -g
    New CLI: clresourcegroup status
             clresource status
             clresourcegroup show
             clresource show

View a detailed list of resources.
    Old CLI: scrgadm -pv -j <resource>
             scrgadm -pvv -j <resource>
    New CLI: clresource list -v

Disable the resource monitor.
    Old CLI: scrgadm -n -M -j <resource>
    New CLI: clresource unmonitor <resource>

Enable the resource monitor.
    Old CLI: scrgadm -e -M -j <resource>
    New CLI: clresource monitor <resource>

Disable a resource.
    Old CLI: scswitch -n -j <resource>
    New CLI: clresource disable <resource>

Enable a resource.
    Old CLI: scswitch -e -j <resource>
    New CLI: clresource enable <resource>

Clear a failed resource.
    Old CLI: scswitch -c -h <host>,<host> -j <resource> -f STOP_FAILED
    New CLI: clresource clear -f STOP_FAILED <resource>

Find the network used by a resource.
    Old CLI: scrgadm -pvv -j <resource> | grep -i network
    New CLI: scrgadm -pvv -j <resource> | grep -i network

Remove a resource.
    Old CLI: scrgadm -r -j <resource>
    New CLI: clresource delete <resource>
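
For example, the pieces above can be combined to build a complete failover Apache service with the new CLI. The following is a minimal sketch; the node names, the logical hostname apache-lh, and the resource and group names are hypothetical, and apache-lh must resolve to an IP address the cluster can host:

    clrt register SUNW.apache
    clresourcegroup create -n phys-node-1,phys-node-2 apache-failover-rg
    clreslogicalhostname create -g apache-failover-rg apache-lh
    clresource create -g apache-failover-rg -t SUNW.apache \
        -p Port_list=80/tcp -p Bin_dir=/usr/apache2/bin \
        -p Scalable=false -p Resource_dependencies=apache-lh apache-rs
    clresourcegroup manage apache-failover-rg
    clresourcegroup online apache-failover-rg
    clresource status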

Configuring Resource Types

Table 9 lists the old and new CLI commands that are used to configure resource types in a cluster.

Table 9

Add a resource type (for example, SUNW.HAStoragePlus).
    Old CLI: scrgadm -a -t <resource type>
    New CLI: clrt register <resource type>

Delete a resource type.
    Old CLI: scrgadm -r -t <resource type>
    New CLI: clrt unregister <resource type>
        Note: Set the RT_SYSTEM property on the resource type to false.

List a resource type.
    Old CLI: scrgadm -pv | grep <resource type name>
    New CLI: clrt list
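
For example, to register a resource type, confirm the registration, and remove it again (a minimal sketch using SUNW.HAStoragePlus):

    # New CLI
    clrt register SUNW.HAStoragePlus
    clrt list
    clrt unregister SUNW.HAStoragePlus

    # Equivalent old CLI
    scrgadm -a -t SUNW.HAStoragePlus
    scrgadm -pv | grep SUNW.HAStoragePlus
    scrgadm -r -t SUNW.HAStoragePlus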

About the Author

Veerareddy Papareddy (Veera) worked at BEML India as a CAD center engineer and then at Nortel Networks as a system administrator. Veera has now been with Sun Microsystems and Oracle for over 12 years as a quality engineer for the Oracle Solaris Cluster product.

Revision 1.0, 04/02/2013