by Mahesh Subramanya
Published February 2014
This article describes how to migrate a single instance of Oracle Database running on Oracle Solaris 10 with Oracle Solaris Cluster 3.3 into a clustered Oracle Solaris 11 environment without requiring any modification to the database, by using an Oracle Solaris 10 Zones cluster deployment.
The objective of this article is to show how to migrate applications from an Oracle Solaris 10 clustered environment into an Oracle Solaris 11 environment with minimal effort, using the step-by-step procedure described below.
Note: The procedure in this article assumes that a two-node target cluster is already set up with the Oracle Solaris Cluster 4.1 software. See "How to Install and Configure a Two-Node Cluster" for instructions on how to configure a two-node cluster on Oracle Solaris 11.
Oracle Solaris Zones are application containers that provide isolated and secure environments for applications. Applications that run inside an Oracle Solaris Zone are partitioned from other applications or processes that run in a different Oracle Solaris Zone. Oracle Solaris Zones differ from other virtual machine solutions because they share the Oracle Solaris kernel.
An Oracle Solaris Zones cluster is a virtual cluster that leverages the capabilities of Oracle Solaris Zones—such as security isolation, fault isolation, and flexible resource management—and extends them across a cluster of nodes, delivering high availability (HA) to the applications running inside the zones and enabling the consolidation of multiple cluster applications onto a single, global cluster.
Oracle Solaris 10 Zones use the solaris10 brand zone, which provides a complete runtime environment for Oracle Solaris 10 applications on SPARC and x86 machines running the Oracle Solaris 10 9/10 operating system or later releases. This enables you to run an Oracle Solaris 10 environment on Oracle Solaris 11. The solaris10 brand zone cluster is a key feature that was introduced in Oracle Solaris Cluster 4.1 software. It can be used to migrate Oracle Solaris 10 clustered applications that are deployed with Oracle Solaris Cluster 3.3 3/13 software to a solaris10 brand zone cluster on a cluster that runs Oracle Solaris Cluster 4.1 software. This is accomplished without any modification to the applications. Thus, this feature provides the ability to consolidate Oracle Solaris 10 applications on the latest Oracle hardware and leverage the powerful features introduced in Oracle Solaris 11 software.
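To illustrate what the solaris10 brand looks like outside of a cluster, the following is a minimal, hypothetical sketch of configuring and installing a standalone solaris10 brand zone on an Oracle Solaris 11 global zone from an Oracle Solaris 10 system archive. The zone name, network values, and archive path are placeholders, not part of this article's configuration; the clustered configuration used later in this article is performed with the clzonecluster command instead.
global# cat /s10-example.config
create -b
set brand=solaris10
set zonepath=/zones/s10-example
set ip-type=shared
add net
set physical=net0
set address=192.0.2.10/24
end
global# zonecfg -z s10-example -f /s10-example.config
global# zoneadm -z s10-example install -u -a /net/server/archives/s10-system.flar
global# zoneadm -z s10-example boot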
The example in this article uses an Oracle Database 10g Enterprise Edition Release 10.2.0.5 deployment on a two-node cluster. The cluster runs Oracle Solaris Cluster 3.3 3/13 software, and the database is managed by the Oracle Solaris Cluster HA for Oracle (HA for Oracle) data service. The servers are connected to an Oracle ZFS Storage Appliance serving the Oracle Database data and binaries.
For further details on configuring the HA for Oracle data service, refer to the Oracle Solaris Cluster Data Service for Oracle Guide (for Oracle Solaris Cluster 3.3 or Oracle Solaris Cluster 4). For information about configuring Oracle ZFS Storage Appliance, refer to "Installing and Maintaining Oracle's Sun ZFS Storage Appliances as NAS Devices in an Oracle Solaris Cluster Environment."
Figure 1 shows a view of the hardware and connectivity for the production source cluster, which is a two-node cluster called cluster-1 with two cluster nodes (db-host-1 and db-host-2) connected to a Sun ZFS Storage 7420 appliance.
Figure 1. Source environment: hardware and connectivity.
Figure 2 shows a logical view of the source cluster, including the resource groups (rg) and resources (rs) for the HA for Oracle data service, as well as the dependencies between them.
Figure 2. Source environment: logical view of resources and resource groups.
Figure 3 shows a simplified view of the hardware and connectivity for the target cluster, new-phys-cluster, which is a two-node cluster with two of Oracle's Sun SPARC Enterprise T5120 servers (new-phys-host-1 and new-phys-host-2).
Figure 3. Target environment: hardware and connectivity.
Figure 4 shows a logical view of the target cluster running Oracle Solaris 11.1 and Oracle Solaris Cluster 4.1 software. This environment uses the solaris10 brand zone cluster to host the HA for Oracle Database 10g Release 2 configuration that is migrated from the source configuration shown in Figure 1.
Figure 4. Target environment: logical view.
In the logical view of the source cluster shown in Figure 2, the oracle-rg resource group contains the oracle-server-rs resource for the Oracle server instance; the logical host name resource (db-lh-rs) that manages logical host db-lh, which is used by clients to connect to the Oracle server; and the Oracle listener resource (oracle-listener-rs).
The scal-mnt-rg resource group contains the oracle-rs and oradata-rs scalable mount-point resources, which manage mount points /u01/app/oracle and /u02/oradata/, respectively.
These components are shown in the following status output.
db-host-1# clresourcegroup status
=== Cluster Resource Groups ===
Group Name Node Name Suspended Status
---------- --------- --------- ------
oracle-rg db-host-1 No Online
db-host-2 No Offline
scal-mnt-rg db-host-1 No Online
db-host-2 No Online
db-host-1# clresource status
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
oracle-listener-rs db-host-1 Online Online
db-host-2 Offline Offline
db-lh-rs db-host-1 Online Online - LogicalHostname online.
db-host-2 Offline Offline
oracle-server-rs db-host-1 Online Online
db-host-2 Offline Offline
oracle-rs db-host-1 Online Online
db-host-2 Online Online
oradata-rs db-host-1 Online Online
db-host-2 Online Online
The oracle-listener-rs and oracle-server-rs resources have dependencies of type resource_dependencies_offline_restart on other resources. For more details on the types of dependencies that are available in Oracle Solaris Cluster, see the r_properties(5) man page.
db-host-1# clresource show -p Resource_dependencies_offline_restart \
oracle-listener-rs
=== Resources ===
Resource: oracle-listener-rs
Resource_dependencies_offline_restart: oracle-rs
db-host-1# clresource show -p Resource_dependencies_offline_restart \
oracle-server-rs
=== Resources ===
Resource: oracle-server-rs
Resource_dependencies_offline_restart: oracle-rs oradata-rs
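These dependencies are normally set when the resources are created, as shown later in this article, but they can also be adjusted on an existing resource. The following is a minimal sketch of setting such a dependency with clresource set; the property and resource names simply mirror the configuration shown above.
db-host-1# clresource set \
-p Resource_dependencies_offline_restart=oracle-rs,oradata-rs \
oracle-server-rs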
The oracle-listener-rs and oracle-server-rs resources have the following extension properties. For more details, see "HA for Oracle Extension Properties" in the Oracle Solaris Cluster Data Service for Oracle Guide.
The extension properties of the oracle-listener-rs resource are as follows:
Listener_name=LISTENER_DB1
ORACLE_HOME=/u01/app/oracle/product/10.2.0/Db_1
The extension properties of the oracle-server-rs resource are as follows:
ALERT_LOG_FILE=/u02/oradata/admin/testdb1/bdump/alert_testdb1.log
ORACLE_HOME=/u01/app/oracle/product/10.2.0/Db_1
CONNECT_STRING=hauser/hauser
Oracle Database software is installed on the NFS shares on the Sun ZFS Storage 7420 appliance. The following command shows information about the configuration of the Sun ZFS Storage 7420 appliance.
db-host-1# clnas show -v -d all
=== NAS Devices ===
Nas Device: qualfugu
Type: sun_uss
userid: osc_agent
Project: qualfugu-1/local/oracle_db
File System: /export/oracle_db/oradata
File System: /export/oracle_db/oracle
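As a quick cross-check of what the appliance actually exports, the NFS shares can also be listed directly from a cluster node with the standard showmount utility. This is only a sanity check and is not part of the original procedure; the authoritative cluster view remains the clnas output above.
db-host-1# showmount -e qualfugu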
File system /export/oracle_db/oracle, which is mounted on the cluster nodes at mount point /u01/app/oracle, is the installation path for the Oracle Database software. File system /export/oracle_db/oradata, which is mounted on the cluster nodes at mount point /u02/oradata/, is the path for the database files. The following are the /etc/vfstab entries for these two file systems on both nodes of the cluster.
db-host-1# cat /etc/vfstab | grep oracle_db
qualfugu:/export/oracle_db/oracle - /u01/app/oracle nfs - no
rw,suid,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3
qualfugu:/export/oracle_db/oradata - /u02/oradata/ nfs - no
rw,suid,bg,hard,forcedirectio,nointr,rsize=32768,wsize=32768,hard,noac,proto=tcp,vers=3
The public network is using domain name service (DNS), and DNS is running and configured as follows.
db-host-1# svcs dns/client
STATE STIME FMRI
online Nov_04 svc:/network/dns/client:default
db-host-1# cat /etc/nsswitch.conf
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#
# /etc/nsswitch.dns:
#
# An example file that could be copied over to /etc/nsswitch.conf; it uses
# DNS for hosts lookups, otherwise it does not use any other naming service.
#
# "hosts:" and "services:" in this file are used only if the
# /etc/netconfig file has a "-" for nametoaddr_libs of "inet" transports.
# DNS service expects that an instance of svc:/network/dns/client be
# enabled and online.
passwd: files
group: files
# You must also set up the /etc/resolv.conf file for DNS name
# server lookup. See resolv.conf(4).
#hosts: files dns
hosts: cluster files dns
# Note that IPv4 addresses are searched for in all of the ipnodes databases
# before searching the hosts databases.
#ipnodes: files dns
ipnodes: files dns [TRYAGAIN=0]
networks: files
protocols: files
rpc: files
ethers: files
#netmasks: files
netmasks: cluster files
bootparams: files
publickey: files
# At present there isn't a 'files' backend for netgroup; the system will
# figure it out pretty quickly, and won't use netgroups at all.
netgroup: files
automount: files
aliases: files
services: files
printers: user files
auth_attr: files
prof_attr: files
project: files
tnrhtp: files
tnrhdb: files
db-host-1# cat /etc/resolv.conf
domain mydomain.com
nameserver 13.35.29.41
nameserver 19.13.8.13
nameserver 13.35.24.52
search mydomain.com
The following is the identity of the oracle user.
db-host-1# id -a oracle
uid=602(oracle) gid=4051(oinstall) groups=4052(dba)
The oracle user is configured with the following profile:
db-host-1# su - oracle
Oracle Corporation SunOS 5.10 Generic Patch January 2005
db-host-1$ cat .profile
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/Db_1
export ORACLE_SID=testdb1
export PATH=$PATH:$ORACLE_HOME/bin
export TERM=vt100
Before you begin the migration, perform the following tasks on the source cluster.
Back up the Oracle configuration files under /var/opt/oracle/ to NFS location /net/qualfugu/archive/optbkp/:
db-host-1# cp -rf /var/opt/oracle/ /net/qualfugu/archive/optbkp/
Take the resource groups offline, disable their resources, and move the resource groups to the unmanaged state:
db-host-1# clresourcegroup offline oracle-rg scal-mnt-rg
db-host-1# clresource disable -g oracle-rg,scal-mnt-rg +
db-host-1# clresourcegroup unmanage oracle-rg scal-mnt-rg
Remove the NAS project directory and the NAS device from the cluster configuration, delete the resource groups, and shut down the cluster:
db-host-1# clnas remove-dir -d qualfugu-1/local/oracle_db qualfugu
db-host-1# clnas remove qualfugu
db-host-1# clresourcegroup delete oracle-rg scal-mnt-rg
db-host-1# cluster shutdown -y -g0
Note: If this cluster is reinstated, the NAS shares from the Sun ZFS Storage 7420 appliance can be added back by using the clnas add command, and the resource groups can be brought online by using the clresourcegroup online -emM command (if they were not deleted).
The target two-node cluster new-phys-cluster includes the following (as shown in Figure 3 and Figure 4): two Sun SPARC Enterprise T5120 systems (new-phys-host-1 and new-phys-host-2) running Oracle Solaris 11.1 and Oracle Solaris Cluster 4.1.
We will create a new solaris10 brand zone cluster, cluster-1, on the target systems. The zone cluster will host the Oracle Database service that was configured in the source cluster running in an Oracle Solaris 10 environment.
Note: The procedure below assumes that the two-node target cluster is already set up with the Oracle Solaris Cluster 4.1 software. See "How to Install and Configure a Two-Node Cluster" for instructions on how to configure a two-node cluster on Oracle Solaris 11.
new-phys-host-1:~# clnode status
=== Cluster Nodes ===
--- Node Status ---
Node Name Status
--------- ------
new-phys-host-2 Online
new-phys-host-1 Online
Install the solaris10 brand zone's support package on both nodes.
new-phys-host-1:~# pkg publisher
PUBLISHER TYPE STATUS URI
solaris origin online <solaris-repository>
ha-cluster origin online <ha-cluster-repository>
new-phys-host-1:~# pkg install pkg:/system/zones/brand/brand-solaris10
Packages to install: 1
Create boot environment: No
Create backup boot environment: Yes
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 44/44 0.4/0.4
PHASE ACTIONS
Install Phase 74/74
PHASE ITEMS
Package State Update Phase 1/1
Image State Update Phase 2/2
new-phys-host-2:~# pkg publisher
PUBLISHER TYPE STATUS URI
solaris origin online <solaris-repository>
ha-cluster origin online <ha-cluster-repository>
new-phys-host-2:~# pkg install pkg:/system/zones/brand/brand-solaris10
Packages to install: 1
Create boot environment: No
Create backup boot environment: Yes
DOWNLOAD PKGS FILES XFER (MB)
Completed 1/1 44/44 0.4/0.4
PHASE ACTIONS
Install Phase 74/74
PHASE ITEMS
Package State Update Phase 1/1
Image State Update Phase 2/2
Verify that the public-network IPMP group sc_ipmp0 is configured on both nodes. This interface will be used to configure the node-scope network resource for the zone cluster.
new-phys-host-1:~# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
sc_ipmp0 sc_ipmp0 ok -- net0
new-phys-host-2:~# ipmpstat -g
GROUP GROUPNAME STATE FDT INTERFACES
sc_ipmp0 sc_ipmp0 ok -- net0
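On a cluster node, the sc_ipmp0 group is normally created for the public adapter during Oracle Solaris Cluster installation. If a group were missing (for example, after adding a new public interface), it could be created manually along the following lines; the group and interface names here simply mirror the output above, and net0 is assumed to already be configured as an IP interface.
new-phys-host-1:~# ipadm create-ipmp sc_ipmp0
new-phys-host-1:~# ipadm add-ipmp -i net0 sc_ipmp0
new-phys-host-1:~# ipmpstat -g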
Create the following cluster-1.config zone configuration file so that the solaris10 brand zone cluster (cluster-1) can be created in subsequent steps. This configuration uses the same host names for the cluster nodes and for the logical hosts as were used in the source setup shown in Figure 1.
new-phys-host-1:~# cat /cluster-1.config
create -b
set zonepath=/zones/cluster-1
set brand=solaris10
set autoboot=true
set limitpriv=default,proc_priocntl,proc_clock_highres
set enable_priv_net=true
set ip-type=shared
add net
set address=db-lh
set physical=auto
end
add capped-memory
set physical=10G
set swap=20G
set locked=10G
end
add dedicated-cpu
set ncpus=1-2
set importance=2
end
add attr
set name=cluster
set type=boolean
set value=true
end
add node
set physical-host=new-phys-host-1
set hostname=db-host-1
add net
set address=db-host-1/24
set physical=sc_ipmp0
end
end
add node
set physical-host=new-phys-host-2
set hostname=db-host-2
add net
set address=db-host-2/24
set physical=sc_ipmp0
end
end
add sysid
set root_password=ZiitH.NOLOrRg
set name_service="DNS{domain_name=mydomain.com name_server=13.35.24.52,13.35.29.41,
19.13.8.13 search=mydomain.com}"
set nfs4_domain=dynamic
set security_policy=NONE
set system_locale=C
set terminal=vt100
set timezone=US/Pacific
end
Note: Oracle Solaris Cluster 4.1 software supports only the shared-ip type of solaris10 brand zone cluster. Oracle Solaris Cluster 4.1 SRU 3 supports the exclusive-ip type.
The name chosen for the new zone cluster does not need to match the original; since the old cluster setup is being preserved, the same name is used for the new zone cluster. Configure the zone cluster by using the file cluster-1.config defined in the previous step.
new-phys-host-1:~# clzonecluster configure -f /cluster-1.config cluster-1
If you encounter any errors during the zone cluster configuration or verification, refer to "Creating Zone Clusters."
new-phys-host-1:~# clzonecluster verify cluster-1
new-phys-host-1:~# clzonecluster status cluster-1
=== Zone Clusters ===
--- Zone Cluster Status ---
Name Brand Node Name Zone Host Name Status Zone Status
---- ----- --------- -------------- ------ -----------
cluster-1 solaris10 new-phys-host-1 db-host-1 Offline Configured
new-phys-host-2 db-host-2 Offline Configured
The following archive types are supported as the source archive for installing a solaris10 brand zone cluster:
- An archive of a native brand zone on an Oracle Solaris 10 system
- An archive of a cluster brand zone on an Oracle Solaris 10 cluster that has the proper patch level
- A solaris10 zone archive (derived from an installed solaris10 brand zone)
Although it is possible to use one of the original Oracle Solaris 10 physical cluster nodes from cluster-1 to create an archive and use it for the zone cluster installation, that option would involve making sure the original cluster meets the patch-level requirements for a successful zone cluster installation.
Instead, this document uses a known solaris10 zone archive (created by installing a zone from an Oracle VM Template) to ensure that the results described in this article can be reproduced by readers on their own systems. This approach includes extra steps for creating a dummy zone, obtaining the archive from the zone installed from the Oracle VM Template, and deleting the dummy zone that was used to obtain the archive.
See the Oracle VM Template for Oracle Solaris 10 Zones README (which is embedded in the template) for more details.
Also, ensure you have the following components, which are used in this procedure:
- /net/qualfugu/archive/: A directory on the Sun ZFS Storage 7420 appliance that contains the Oracle VM Template and is also used to store the dummy zone archive.
- /net/qualfugu/osc-dir: A DVD or DVD image path for the Oracle Solaris Cluster 3.3 3/13 software, which is available for download from Oracle, and the relevant patches, which are available on My Oracle Support.
- /net/qualfugu/nas/: A directory that contains the NAS client package (SUNWsczfsnfs), which is required to configure the Sun ZFS Storage 7420 appliance for the Oracle Solaris Cluster configuration. (See Step 5 of "How to Install a Sun ZFS Storage Appliance in a Cluster" for download instructions.)
Create a ZFS file system for the zone path and install a dummy zone from the Oracle VM Template, where -a 10.134.90.201 is the IP address and /zones/cluster-1 is the root path for the dummy zone.
new-phys-host-1:~# zfs create -o mountpoint=/zones rpool/zones
new-phys-host-1:~# cd /net/qualfugu/archive
new-phys-host-1:/net/qualfugu/archive#./solaris-10u11-sparc.bin \
-a 10.134.90.201 -i net0 -p /zones/cluster-1 -z s10zone
This is an Oracle VM Template for Oracle Solaris Zones.
Copyright 2011, Oracle and/or its affiliates. All rights reserved.
This software and related documentation are provided under a license agreement
containing restrictions on use and disclosure and are protected by intellectual
property laws. Except as expressly permitted in your license agreement or allowed
by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any
form, or by any means. Reverse engineering, disassembly, or decompilation of this
software, unless required by law for interoperability, is prohibited.
Checking disk-space for extraction
Ok
Extracting in /net/qualfugu/archive/bootimage.wuaWaV ...
100% [===============================>]
Checking data integrity
Ok
Checking platform compatibility
The host and the image do not have the same Solaris release:
host Solaris release: 5.11
image Solaris release: 5.10
Will create a Solaris 10 branded Zone.
IMAGE: /net/qualfugu/archive/solaris-10u11-sparc.bin
ZONE: s10zone
ZONEPATH: /zones/cluster-1
INTERFACE: net0
VNIC: vnicZBI43632
MAC ADDR: 2:8:20:92:88:96
IP ADDR: 10.134.90.201
NETMASK: 255.0.0.0
DEFROUTER: # # This file is deprecated. Default routes will be created for any router #
addresses specified here, but they will not change when the underlying # network
configuration profile (NCP) changes. For NCP-specific static # routes, the '-p'
option of the route(1M) command should be used. # # See netcfg(1M) for information about
network configuration profiles.
TIMEZONE: US/Pacific
Checking disk-space for installation
Ok
Installing in /cpool/s10zone ...
100% [==========================>]
Create an archive of the dummy zone. (This example uses the /net/qualfugu/archive/ directory as the location where we will copy the archive.)
new-phys-host-1:~# cd /zones
new-phys-host-1:~# find cluster-1 -print | cpio -oP@/ | gzip > \
/net/qualfugu/archive/disk-image.cpio.gz
Uninstall and delete the dummy zone, which is no longer needed:
new-phys-host-1:~# zoneadm -z s10zone uninstall
Are you sure you want to uninstall zone s10zone (y/[n])? y
new-phys-host-1:~# zonecfg -z s10zone delete
Are you sure you want to delete zone s10zone (y/[n])? y
Create the file system for the zone path on the second node as well, and then install the zone cluster by using the archive:
new-phys-host-2:~# zfs create -o mountpoint=/zones rpool/zones
new-phys-host-1:~# clzonecluster install \
-a /net/qualfugu/archive/disk-image.cpio.gz cluster-1
Waiting for zone install commands to complete on all the nodes of the zone cluster "cluster-1"...
From the global zone on each node, open the console of the cluster-1 zone so you can complete the system configuration when the zone cluster boots:
new-phys-host-1:~# zlogin -C cluster-1
new-phys-host-2:~# zlogin -C cluster-1
new-phys-host-1:~# clzonecluster boot -o cluster-1
Waiting for zone reboot commands to complete on all the nodes of the zone cluster "cluster-1"...
If the system configuration was not completed, complete any pending system configuration.
new-phys-host-1:~# clzonecluster status cluster-1
=== Zone Clusters ===
--- Zone Cluster Status ---
Name Brand Node Name Zone Host Name Status Zone Status
---- ----- --------- -------------- ------ -----------
cluster-1 solaris10 new-phys-host-1 db-host-1 Offline Running
new-phys-host-2 db-host-2 Offline Running
Install the Oracle Solaris Cluster 3.3 3/13 software into the zone cluster. Note: You can skip this step if the archive contains cluster software, for example, if the archive is from an Oracle Solaris 10 physical cluster node or from a cluster brand zone on an Oracle Solaris 10 system.
new-phys-host-1:~# clzonecluster install-cluster \
-d /net/qualfugu/osc-dir/ \
-p patchdir=/net/qualfugu/osc-dir,patchlistfile=plist-sparc \
-s all cluster-1
Preparing installation. Do not interrupt ...
Installing the packages for zone cluster "cluster-1" ...
Where:
- -d specifies the location of the cluster software DVD image.
- -p patchdir specifies the location of the patches to install along with the cluster software. The location must be accessible to all nodes of the cluster.
- patchlistfile specifies the file that contains the list of patches to install along with the cluster software inside the zone cluster. The location must be accessible to all nodes of the cluster. In the following example, the patch list plist-sparc is as follows:
new-phys-host-1:~# cat /net/qualfugu/osc-dir/plist-sparc
145333-15
- -s specifies which agent packages to install along with the core cluster software. In this example, all is specified to install all the agent packages.
Reboot the zone cluster and verify its status:
new-phys-host-1:~# clzonecluster reboot cluster-1
new-phys-host-1:~# clzonecluster status cluster-1
=== Zone Clusters ===
--- Zone Cluster Status ---
Name Brand Node Name Zone Host Name Status Zone Status
---- ----- --------- -------------- ------ -----------
cluster-1 solaris10 new-phys-host-1 db-host-1 Online Running
new-phys-host-2 db-host-2 Online Running
On db-host-1, install the required NAS software package in the zones of the zone cluster.
new-phys-host-1:~# zlogin cluster-1
[Connected to zone 'cluster-1' pts/2]
Last login: Mon Nov 5 21:20:31 on pts/2
Oracle Corporation SunOS 5.10 Generic Patch January 2005
db-host-1# cd /net/qualfugu/nas/
db-host-1# pkgadd -d . SUNWsczfsnfs
Repeat the package installation on db-host-2. Then, on db-host-1, configure the Sun ZFS Storage 7420 appliance as a NAS device for the zone cluster and add its project directory:
db-host-1# clnas add -t sun_uss -p userid=osc_agent qualfugu
Enter password: <password you set when you configured the appliance>
db-host-1# clnas find-dir qualfugu
=== NAS Devices ===
Nas Device: qualfugu
Type: sun_uss
Unconfigured Project: qualfugu-1/local/oracle_db
db-host-1# /usr/cluster/bin/clnas add-dir \
-d qualfugu-1/local/oracle_db qualfugu
db-host-1# /usr/cluster/bin/clnas show -v -d qualfugu
=== NAS Devices ===
Nas Device: qualfugu
Type: sun_uss
userid: osc_agent
Project: qualfugu-1/local/oracle_db
File System: /export/oracle_db/oradata
File System: /export/oracle_db/oracle
On both nodes of the zone cluster, create the mount points and add the /etc/vfstab entries:
db-host-1# mkdir -p /u01/app/oracle /u02/oradata/
db-host-1# echo "qualfugu:/export/oracle_db/oracle - /u01/app/oracle nfs - no
rw,suid,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3" >> /etc/vfstab
db-host-1# echo "qualfugu:/export/oracle_db/oradata /u02/oradata/ - nfs - no
rw,suid,bg,hard,forcedirectio,nointr,rsize=32768,wsize=32768,hard,noac,proto=tcp,vers=3" >> /etc/vfstab
db-host-2# mkdir -p /u01/app/oracle /u02/oradata/
db-host-2# echo "qualfugu:/export/oracle_db/oracle - /u01/app/oracle nfs - no
rw,suid,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3" >> /etc/vfstab
db-host-2# echo "qualfugu:/export/oracle_db/oradata /u02/oradata/ - nfs - no
rw,suid,bg,hard,forcedirectio,nointr,rsize=32768,wsize=32768,hard,noac,proto=tcp,vers=3" >> /etc/vfstab
Create the oracle user on both nodes of the zone cluster. Make sure that the identity is the same as that of the source setup.
db-host-1# id -a oracle
uid=602(oracle) gid=4051(oinstall) groups=4052(dba)
db-host-2# id -a oracle
uid=602(oracle) gid=4051(oinstall) groups=4052(dba)
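If the oracle user does not already exist inside the zone cluster, it can be created with the same UID and GIDs as on the source cluster, for example as sketched below on each zone-cluster node. The home directory and shell shown here are assumptions; adjust them to match your source environment.
db-host-1# groupadd -g 4051 oinstall
db-host-1# groupadd -g 4052 dba
db-host-1# useradd -u 602 -g oinstall -G dba -m -d /export/home/oracle -s /usr/bin/bash oracle
db-host-1# passwd oracle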
Configure the profile of the oracle user on both nodes of the zone cluster so that it matches the profile used in the source setup.
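A minimal sketch of recreating that profile is shown below; the values simply repeat the source .profile shown earlier in this article.
db-host-1# su - oracle
db-host-1$ cat >> ~/.profile <<'EOF'
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/Db_1
export ORACLE_SID=testdb1
export PATH=$PATH:$ORACLE_HOME/bin
export TERM=vt100
EOF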
Mount the file system on both nodes of the zone cluster using the following command:
# mount /u01/app/oracle
Verify the owner, group, and mode of $ORACLE_HOME/bin/oracle
using the following command:
# ls -l $ORACLE_HOME/bin/oracle
Confirm that the owner, group, and mode are as follows:
Owner: oracle
Group: dba
Mode: -rwsr-s--x
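If the ownership or mode differs (for example, if the setuid and setgid bits were lost during a copy), they can be corrected as sketched below; mode 6751 corresponds to -rwsr-s--x.
# chown oracle:dba $ORACLE_HOME/bin/oracle
# chmod 6751 $ORACLE_HOME/bin/oracle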
Create the scal-mnt-rg resource group and the scalable mount-point resources, and then bring the resource group online:
db-host-1# clresourcegroup create -S scal-mnt-rg
db-host-1# clresource create -g scal-mnt-rg \
-t SUNW.ScalMountPoint \
-p MountPointDir=/u02/oradata/ \
-p FileSystemType=nas \
-p TargetFileSystem=qualfugu:/export/oracle_db/oradata \
oradata-rs
db-host-1# clresource create -g scal-mnt-rg \
-t SUNW.ScalMountPoint \
-p MountPointDir=/u01/app/oracle \
-p FileSystemType=nas \
-p TargetFileSystem=qualfugu:/export/oracle_db/oracle \
oracle-rs
db-host-1# clresourcegroup online -eM scal-mnt-rg
db-host-1# clresource status -g scal-mnt-rg
=== Cluster Resources ===
oracle-rs db-host-1 Online Online
db-host-2 Online Online
oradata-rs db-host-1 Online Online
db-host-2 Online Online
db-host-1# mount -p |grep qualfugu
qualfugu:/export/oracle_db/oracle - /u01/app/oracle nfs - no
rw,suid,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,xattr,
zone=cluster-1,sharezone=10
qualfugu:/export/oracle_db/oradata - /u02/oradata/ nfs - no
rw,suid,bg,hard,forcedirectio,nointr,rsize=32768,wsize=32768,hard,noac,proto=tcp,
vers=3,xattr,zone=cluster-1,sharezone=10
db-host-2# mount -p |grep qualfugu
qualfugu:/export/oracle_db/oracle - /u01/app/oracle nfs - no
rw,suid,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,xattr,
zone=cluster-1,sharezone=10
qualfugu:/export/oracle_db/oradata - /u02/oradata/ nfs - no
rw,suid,bg,hard,forcedirectio,nointr,rsize=32768,wsize=32768,hard,noac,proto=tcp,
vers=3,xattr,zone=cluster-1,sharezone=10
Note: Often the database has a specific SRM project (entries in /etc/project) that it uses to obtain the correct level of system resources. For such configurations, these project specifications must be reproduced in the solaris10 brand zone cluster as well.
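For example, a resource-control project for the oracle user could be reproduced inside the zone cluster with projadd, as sketched below; the project name and the shared-memory limit are hypothetical values, not taken from the source setup in this article.
db-host-1# projadd -U oracle -K "project.max-shm-memory=(privileged,8G,deny)" user.oracle
db-host-1# projects -l user.oracle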
Create the directory /var/opt/oracle on all nodes of the zone cluster and restore the Oracle configuration files from the backup taken earlier:
db-host-1# mkdir -p /var/opt/oracle/
db-host-2# mkdir -p /var/opt/oracle/
db-host-1# cp -rf /net/qualfugu/archive/optbkp/ /var/opt/oracle/
db-host-2# cp -rf /net/qualfugu/archive/optbkp/ /var/opt/oracle/
Register the resource types for the HA for Oracle data service, create the oracle-rg resource group, and create its resources:
db-host-1# clresourcetype register SUNW.oracle_server
db-host-1# clresourcetype register SUNW.oracle_listener
db-host-1# clresourcegroup create oracle-rg
db-host-1# clreslogicalhostname create -g oracle-rg \
-h db-lh db-lh-rs
db-host-1# clresource create -g oracle-rg \
-t oracle_listener \
-p ORACLE_HOME=/u01/app/oracle/product/10.2.0/Db_1 \
-p Listener_name=LISTENER_DB1 \
-p Resource_dependencies_offline_restart=oracle-rs \
oracle-listener-rs
db-host-1# clresource create -g oracle-rg \
-t oracle_server -p ORACLE_SID=testdb1 \
-p ALERT_LOG_FILE=/u02/oradata/admin/testdb1/bdump/alert_testdb1.log \
-p ORACLE_HOME=/u01/app/oracle/product/10.2.0/Db_1 \
-p CONNECT_STRING=hauser/hauser \
-p Resource_dependencies_offline_restart=oradata-rs,oracle-rs \
oracle-server-rs
db-host-1# clresourcegroup online -eM oracle-rg
db-host-1# clresource status -g oracle-rg
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
oracle-server-rs db-host-2 Offline Offline
db-host-1 Online Online
oracle-listener-rs db-host-2 Offline Offline
db-host-1 Online Online
db-lh-rs db-host-2 Offline Offline - LogicalHostname offline.
db-host-1 Online Online - LogicalHostname online.
To verify that the database service fails over, halt the zone-cluster node db-host-1:
db-host-1# halt -q
halt: can't turn off auditd
[Connection to zone 'cluster-1' pts/2 closed]
Verify that the status of the zone-cluster node on new-phys-host-1 is now Offline/Installed:
new-phys-host-1:~# clzonecluster status cluster-1
=== Zone Clusters ===
--- Zone Cluster Status ---
Name Brand Node Name Zone Host Name Status Zone Status
---- ----- --------- -------------- ------ -----------
cluster-1 solaris10 new-phys-host-1 db-host-1 Offline Installed
new-phys-host-2 db-host-2 Online Running
Log in to the zone cluster on new-phys-host-2 and verify that the resources have failed over to db-host-2:
new-phys-host-2:~# zlogin cluster-1
[Connected to zone 'cluster-1' pts/2]
Last login: Mon Nov 5 21:30:31 on pts/2
Oracle Corporation SunOS 5.10 Generic Patch January 2005
db-host-2# clresource status -g oracle-rg
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
oracle-server-rs db-host-2 Online Online
db-host-1 Offline Offline
oracle-listener-rs db-host-2 Online Online
db-host-1 Offline Offline
db-lh db-host-2 Online Online - LogicalHostname online
db-host-1 Offline Offline - LogicalHostname offline
Boot the zone-cluster node on new-phys-host-1 again, switch the oracle-rg resource group back to db-host-1, and verify the status. Note that the following output is the same as that obtained from the source setup shown in Figure 1.
new-phys-host-1:~# clzonecluster boot -n new-phys-host-1 cluster-1
new-phys-host-1:~# zlogin cluster-1
[Connected to zone 'cluster-1' pts/2]
Last login: Mon Nov 5 21:30:31 on pts/2
Oracle Corporation SunOS 5.10 Generic Patch January 2005
db-host-1# clresourcegroup switch -n db-host-1 oracle-rg
db-host-1# clresourcegroup status
=== Cluster Resource Groups ===
Group Name Node Name Suspended Status
---------- --------- --------- ------
oracle-rg db-host-1 No Online
db-host-2 No Offline
scal-mnt-rg db-host-1 No Online
db-host-2 No Online
db-host-1# clresource status
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------- --------- ----- --------------
oracle-server-rs db-host-1 Online Online
db-host-2 Offline Offline
oracle-listener-rs db-host-1 Online Online
db-host-2 Offline Offline
db-lh-rs db-host-1 Online Online - LogicalHostname online.
db-host-2 Offline Offline - LogicalHostname offline.
oracle-rs db-host-1 Online Online
db-host-2 Online Online
oradata-rs db-host-1 Online Online
db-host-2 Online Online
Migration to the solaris10 brand zone cluster is now complete.
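As a final sanity check, a client connection through the logical host can be tested, for example with SQL*Plus as sketched below. This assumes the listener uses the default port 1521 and that the database registers a service named after the SID testdb1; adjust the connect string to your actual listener configuration.
db-host-1# su - oracle
db-host-1$ sqlplus hauser/hauser@//db-lh:1521/testdb1
SQL> SELECT instance_name, status FROM v$instance;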
This article described how to migrate an Oracle database running on Oracle Solaris 10 in an Oracle Solaris Cluster environment to Oracle Solaris 11 without upgrading the database software by using Oracle Solaris 10 Zones and Oracle Solaris zone cluster features.
This migration is an example that can be extended to other application environments and shows how Oracle Solaris can help preserve infrastructure investments and lower migration costs.
Mahesh has been a software engineer in the Oracle Solaris Cluster High Availability Infrastructure group for the past three years. Previously, Mahesh worked in the Oracle Solaris Revenue Product Engineering group, and prior to that he worked at Nexus Info as a software engineer.
Revision 1.1, 01/23/2015
Revision 1.0, 02/14/2014