How to add high availability to the network infrastructure of a multitenant cloud environment using the DLMP aggregation technology introduced in Oracle Solaris 11.1.
This article is Part 1 of a two-part series that describes how to use datalink multipathing (DLMP) aggregations.
In this article, we will examine how to add high availability (HA) to your network using DLMP aggregation. To do this, we will perform the following main tasks:
- Set up a DLMP aggregation in the global zone.
- Create VNICs on top of the aggregation.
- Create Oracle Solaris Zones that use those VNICs.
- Test the network HA provided by the aggregation by removing a data link while a network load is running.
Note: The information in this article applies to Oracle Solaris 11.2.
Once we virtualize a network cloud infrastructure using Oracle Solaris 11 network virtualization technologies—such as virtual network interface cards (VNICs), virtual switches, load balancers, firewalls, and routers—the network itself becomes an increasingly critical component of the cloud infrastructure.
In order to add resiliency to the network infrastructure layer, we need to implement an HA solution at this layer, such as we would do for any other mission-critical component of the data center.
In Part 1, we will cover how to implement network HA using DLMP aggregation, which was introduced in Oracle Solaris 11.1.
A DLMP aggregation allows us to deliver resiliency to the network infrastructure by providing transparent failover and increasing throughput. The objects that are involved in the process are VNICs, irrespective of whether they are configured inside Oracle Solaris Zones or in logical domains under Oracle VM Server for SPARC.
Using this technology, you can add HA to your current network infrastructure without the cross-organizational complexity that might often be associated with this kind of solution.
The benefits of this technology are clear, particularly when weighed against the limitations of existing link aggregation technologies.
Trunk aggregation and DLMP aggregation support nearly the same features to improve network performance. However, differences do exist. The following table presents a general comparison of these two types of link aggregations, which are both supported in Oracle Solaris.
Table 1. Comparison of Trunk Aggregation and DLMP Aggregation
Feature | Trunk Aggregation | DLMP Aggregation |
---|---|---|
Link-based failure detection | Supported | Supported |
Probe-based failure detection | Supported (Link Aggregation Control Protocol [LACP]) | Supported in Oracle Solaris 11.2 |
LACP | Supported | Not supported |
Use of standby interfaces | Not supported | Supported |
Ability to span multiple switches | Supported when used with a vendor-proprietary solution | Supported |
Switch configuration | Required | Not required |
Policies for load balancing | Supported | Not applicable |
Load spreading across all the aggregation's ports | Supported | Limited (The aggregation spreads its VNICs across all ports. However, individual VNICs cannot spread the load across multiple ports.) |
User-defined flows for resource management | Supported | Supported |
Link protection | Supported | Supported |
Back-to-back parallel configuration | Supported | Not supported (DLMP aggregations must always use an intermediary switch to send packets to other destination systems. However, when using DLMP aggregations, do not configure the switch for link aggregation.) |
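Note: In Oracle Solaris 11.2, probe-based failure detection for a DLMP aggregation is enabled by setting a link property on the aggregation. The following is a minimal sketch, assuming an aggregation named aggr0 (which we create later in this article); the + value asks the system to select probe sources and targets automatically, and the full probe-ip syntax is described in dladm(1M):
root@global_zone:~# dladm set-linkprop -p probe-ip=+ aggr0
root@global_zone:~# dladm show-linkprop -p probe-ip aggr0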
In this series of articles, we will examine four use cases in order to demonstrate the capability of DLMP aggregation in a multitenant cloud environment.
Part 2 also describes how to detect failures in DLMP aggregations.
In the architecture used for all the use cases, all the network building blocks are installed using Oracle Solaris 11 Zones, ZFS, and network virtualization technologies. Figure 1 shows the architecture:
Figure 1. Architecture used in this series of articles
First let's set up the DLMP aggregation in the global zone.
The following example shows how to create a DLMP aggregation. The aggregation has four underlying data links (net0, net1, net2, and net3), as shown in Figure 1.
Important: In the examples presented in this series of articles, the command prompt indicates which user needs to run each command in addition to indicating the environment where the command should be run. For example, the command prompt root@global_zone:~# indicates that user root needs to run the command from the global zone.
Note: If you plan to create an aggregation that includes the network interface through which you are currently connecting, you need to set up the aggregation from the system console using Oracle Integrated Lights Out Manager (Oracle ILOM). Working from the console helps you avoid the temporary disconnection that would otherwise occur during setup, because the interface you are using to connect is being placed into the aggregation.
You can verify that you are on the system console using the tty command, which prints the user's terminal name.
root@global_zone:~# tty
/dev/console
Begin by listing the current data links, as shown in Listing 1:
root@global_zone:~# dladm show-link
LINK CLASS MTU STATE BRIDGE OVER
net0 phys 1500 up -- ----
net1 phys 1500 unknown -- ----
net2 phys 1500 unknown -- ----
net3 phys 1500 unknown -- ----
Listing 1
In Listing 1, we can see four data links: net0, net1, net2, and net3. We will build our link aggregation using these data links.
To avoid a temporary service outage, ensure that the data links you are configuring into an aggregation are not being used by any applications while you are performing the configuration. For example, if an IP interface has been created over a data link, remove the IP interface first.
To determine whether a link is being used by any applications, examine the output of the ipadm show-if command, as shown in Listing 2.
root@global_zone:~# ipadm show-if
IFNAME CLASS STATE ACTIVE OVER
lo0 loopback ok yes --
net0 ip ok no -
Listing 2
The output in Listing 2 indicates that an IP interface exists over the data link net0.
Let's capture the IP interface properties; later we will use this information in order to build another IP interface appropriately.
root@global_zone:~# ipadm show-addr net0
ADDROBJ TYPE STATE ADDR
net0/v4 static ok 10.0.0.100/24
(Optional) Next, capture the default gateway information; we will use this information later.
root@global_zone:~# netstat -rn
To remove the IP interface, type the following command:
root@global_zone:~# ipadm delete-ip net0
Verify that the IP interface was removed:
root@global_zone:~# ipadm show-if net0
ipadm: cannot get information for interface(s): No such interface
Note: If the output of the ipadm show-if command indicates other IP interfaces that are being used by applications, also remove those IP interfaces.
Create a link aggregation by issuing the command shown in Listing 3:
root@global_zone:~# dladm create-aggr -m dlmp -l net0 -l net1 -l net2 -l net3 aggr0
Listing 3
The command shown in Listing 3 uses the following options:
- -m dlmp specifies that the type of aggregation is DLMP.
- -l specifies that the underlying data links net0 through net3 comprise the aggregation.
- aggr0 specifies the name of the aggregation.

Verify the creation of the DLMP aggregation using the command shown in Listing 4:
root@global_zone:~# dladm show-aggr
LINK MODE POLICY ADDRPOLICY LACPACTIVITY LACPTIMER
aggr0 dlmp -- -- -- --
Listing 4
In Listing 4, we can see the aggregation (aggr0) reflecting its type (dlmp).
In order to see the underlying data links that comprise this aggregation, run the command shown in Listing 5:
root@global_zone:~# dladm show-link aggr0
LINK CLASS MTU STATE OVER
aggr0 aggr 1500 up net0 net1 net2 net3
Listing 5
In Listing 5, we can see the net0, net1, net2, and net3 data links; in addition, the aggregation's state is up.
You can see how easy it was to create the DLMP aggregation!
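(Optional) If you want to inspect the state of the aggregation's individual ports, you can use the extended output of dladm show-aggr. This is a sketch only; the exact columns and port states displayed vary by Oracle Solaris release:
root@global_zone:~# dladm show-aggr -x aggr0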
(Optional) Display Data Link Properties
To display the properties of all the data links that comprise an aggregation, you can use the dladm show-linkprop command. Listing 6 shows partial output from the dladm show-linkprop command.
root@global_zone:~# dladm show-linkprop aggr0
LINK PROPERTY PERM VALUE EFFECTIVE DEFAULT POSSIBLE
aggr0 autopush rw -- -- -- --
aggr0 zone rw -- -- -- --
aggr0 state r- up up up up,down
aggr0 mtu rw 1500 1500 1500 1500-9194
aggr0 maxbw rw -- -- -- --
aggr0 cpus rw -- -- -- --
aggr0 rxfanout rw -- 0 0 --
aggr0 pool rw -- -- -- --
aggr0 priority rw medium medium medium low,medium,
high
aggr0 tagmode rw vlanonly vlanonly vlanonly normal,
vlanonly
...
Listing 6
Re-Create the Removed IP Interface and Default Gateway
Let's re-create the IP interface we previously had on net0, but now it will be on top of the aggr0 aggregation:
root@global_zone:~# ipadm create-ip aggr0
Assign an IP address to the IP interface; we will use the same IP address that net0 used (10.0.0.100).
root@global_zone:~# ipadm create-addr -a local=10.0.0.100/24 aggr0/v4
Verify the creation of the IP address using the ipadm show-addr command:
root@global_zone:~# ipadm show-addr aggr0
ADDROBJ TYPE STATE ADDR
aggr0/v4 static ok 10.0.0.100/24
(Optional) To re-create the default gateway, use the following command:
Note: The IP address you specify should be the same as the default gateway address we captured earlier using the netstat -rn command.
root@global_zone:~# route -p add default <IP address>
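For example, if the netstat -rn output captured earlier had shown a default gateway of 10.0.0.254 (a hypothetical address, used here only for illustration), the command would be:
root@global_zone:~# route -p add default 10.0.0.254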
Things to Remember
- Use the tty command to verify whether you are on the console device.
- Create a DLMP aggregation using the dladm create-aggr -m dlmp command.

The next step is to create two VNICs on top of the aggregation that we created in the previous step (aggr0).
There are two ways of creating a VNIC for use by a zone:
- Create the VNIC explicitly using the dladm command from the global zone, and configure the zone to use the VNIC by using the zone configuration net resource.
- Use the zone configuration anet resource to specify the configuration of the VNIC, which will cause the VNIC to be automatically created and destroyed when the zone is booted and halted, respectively.

First, let's create the VNICs using the dladm command. (In a later step, we will associate the VNICs with Oracle Solaris Zones and we will also use the anet resource to create a VNIC):
root@global_zone:~# dladm create-vnic -l aggr0 vnic1
root@global_zone:~# dladm create-vnic -l aggr0 vnic2
Verify the creation of the VNICs using the dladm show-vnic command, as shown in Listing 7:
root@global_zone:~# dladm show-vnic
LINK OVER SPEED MACADDRESS MACADDRTYPE VIDS
vnic1 aggr0 1000 2:8:20:5:c3:be random 0
vnic2 aggr0 1000 2:8:20:43:14:1a random 0
Listing 7
In Listing 7, we can see that the two VNICs, vnic1 and vnic2, have been created, and we can see their MAC addresses and the aggregation that they are associated with (aggr0).
Things to Remember
- Create VNICs using the dladm create-vnic command.
- Verify the creation of VNICs using the dladm show-vnic command.

We will set up three zones in the environment: zone1, zone2, and zone3. In order to accelerate the creation of the zones, we will set up zone2 as a clone of zone1. In addition, we will use the ZFS snapshot and send capability to create a zone image that we will use to create zone3 on a separate machine, as shown in Figure 2.
Figure 2. Sequence of zone creation
Create the First Zone (zone1)
Oracle Solaris Zones technology is a built-in virtualization technology available in Oracle Solaris. In the first use case, we will use zones to contain our testing environments.
We will use the zonecfg command to create our first zone, zone1. The minimum information required to create a zone is its name and its zonepath. In addition, we will provide the name of a VNIC we created in the previous section (vnic1).
Now, create zone1 by running the commands shown in Listing 8:
root@global_zone:~# zonecfg -z zone1
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
create: Using system default template 'SYSdefault'
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> set autoboot=true
zonecfg:zone1> add net
zonecfg:zone1:net> set physical=vnic1
zonecfg:zone1:net> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
Listing 8
(Optional) Alternatively, instead of doing what's shown in Listing 8, when Oracle Solaris Zones are used, VNICs can be created automatically on top of the aggregation using the anet zonecfg resource. In that case, lower-link can be set to the name of the data link aggregation. For example, Listing 9 shows how to create an automatic VNIC on top of aggr0:
root@global_zone:~# zonecfg -z zone1
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
create: Using system default template 'SYSdefault'
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> set autoboot=true
zonecfg:zone1> select anet linkname=net0
zonecfg:zone1:net> set lower-link=aggr0
zonecfg:zone1:net> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
Listing 9
Install the First Zone
The next step is to install the zone. You will need access to your Image Packaging System (IPS) repository to install a zone. This could be the same repository that you used to install Oracle Solaris.
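If you are not sure which IPS repository your system is configured to use, you can first list the configured publishers (shown here as an optional check):
root@global_zone:~# pkg publisher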
root@global_zone:~# zoneadm -z zone1 install
Then, we need to boot the zone:
root@global_zone:~# zoneadm -z zone1 boot
Log in to zone1. Then we will specify the zone's system configuration using the System Configuration Tool, which is shown in Figure 3.
root@global_zone:~# zlogin -C zone1
Press ESC-2 to start the System Configuration Tool.
Figure 3. System Configuration Tool
Specify the following information in the interactive screens of the System Configuration Tool:
- Computer name: zone1
- Network interface: vnic1
- IP address: 10.0.0.1
- Netmask: 255.255.255.0

In addition, specify a root password.
Press ESC-2 to apply all the changes.
When you are finished, you should see the zone boot messages. As root, log in to the zone at the zone console login prompt:
zone1 console login: root
Password:
After logging in to the zone, verify the networking configuration using the ipadm show-addr command, as shown in Listing 10:
root@zone1:~# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
vnic1/v4 static ok 10.0.0.1/24
lo0/v6 static ok ::1/128
vnic1/v6 addrconf ok fe80::8:20ff:fec0:cd0/10
Listing 10
As you can see in Listing 10, vnic1 has IP address 10.0.0.1.
The next step is to install the iperf tool inside zone1 using the command shown in Listing 11. iperf(1) is a tool for performing network throughput measurements; it can observe TCP or UDP throughput and provide real-time statistics. Later, we will use this tool to measure the network bandwidth between two Oracle Solaris Zones (as shown in Figure 4).
root@zone1:~# pkg install iperf
Packages to install: 1
Create boot environment: No
Create backup boot environment: No
DOWNLOAD PKGS FILES XFER (MB) SPEED
Completed 1/1 6/6 0.0/0.0 942k/s
PHASE ITEMS
Installing new actions 20/20
Updating package state database Done
Updating image state Done
Creating fast lookup database Done
Listing 11
Using your favorite text editor, add the zones' IP addresses and host names to /etc/hosts:
root@zone1:~# vi /etc/hosts
::1 localhost
127.0.0.1 localhost loghost
10.0.0.1 zone1
10.0.0.2 zone2
10.0.0.3 zone3
Create the Second Zone (zone2)
We will now create zone2 as a clone of zone1, as shown in Figure 2. There will be three steps in this process:
1. Create a zone system configuration template for zone2.
2. Create a zone profile file from zone1 so we can use it as a master profile template.
3. Create zone2 by cloning zone1.
To avoid having to manually configure the system properties of our cloned zone, let's first create a system configuration template for zone2. We can do this by running the sysconfig command from within zone1, which will launch the System Configuration Tool.
root@zone1:~# sysconfig create-profile
The System Configuration Tool handles the initial configuration of a freshly installed Oracle Solaris instance. It also handles the configuration of previously configured Oracle Solaris instances, including the reconfiguration of the global zone, the configuration of cloned zones, and the configuration of physical-to-virtual (P2V) migrated systems. For more information, see "How to Configure Oracle Solaris 11 Using the sysconfig Command."
Press ESC-2 and go through the System Configuration Tool screens, entering the following information for zone2:
- Computer name: zone2
- Network interface: vnic1 (this value will be changed later)
- IP address: 10.0.0.2
- Netmask: 255.255.255.0

In addition, specify a root password.
Press ESC-2 to apply the settings.
Copy the system configuration template (sc_profile.xml) into /root/zone2-template.xml (we'll copy this file to a more convenient location in a later step):
root@zone1:~# cp /system/volatile/profile/sc_profile.xml /root/zone2-template.xml
Change the file permissions in order to enable file editing:
root@zone1:~# chmod +w /root/zone2-template.xml
Using your favorite text editor, change every occurrence of vnic1 to vnic2.
Note: If you set up the VNIC using the anet property, you can avoid this step, since the anet configuration will be the same across multiple zones.
root@zone1:~# vi /root/zone2-template.xml
Verify the file modification:
root@zone1:~# grep vnic2 /root/zone2-template.xml
<propval type="astring" name="name" value="vnic2/v6"/>
<propval type="astring" name="name" value="vnic2/v4 "/>
Log out of zone1 and return to the global zone by exiting from the zone console with the ~. escape sequence:
root@zone1:~# ~.
[Connection to zone 'zone1' console closed]
Create a Zone Profile File
From the global zone on our system, we will need to shut down zone1, the zone we want to clone. (You should not clone a running zone.)
First, verify that you are in the global zone using the zonename command:
root@global_zone:~# zonename
global
Then verify the zone status, as shown in Listing 12:
root@global_zone:~# zoneadm list -iv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
1 zone1 running /zones/zone1 solaris excl
Listing 12
In Listing 12, we can see that the zone1 status is running.
Shut down the zone:
root@global_zone:~# zoneadm -z zone1 shutdown
Now verify its status again, as shown in Listing 13:
root@global_zone:~# zoneadm list -iv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
- zone1 installed /zones/zone1 solaris excl
Listing 13
In Listing 13, we can see now that the zone has been shut down.
Now let's capture the configuration of zone1 and use it as a master profile template for other zones we will create, in this case, zone2:
root@global_zone:~# zonecfg -z zone1 export -f /zones/zone2-profile
Using your favorite editor, make the changes shown in Listing 14. (You always need to update the zonepath, but we are also choosing to update the physical network name to vnic2.)
root@global_zone:~# vi /zones/zone2-profile
create -b
set brand=solaris
set zonepath=/zones/zone2
set autoboot=true
set ip-type=exclusive
add net
set physical=vnic2
end
Listing 14
We now want to place the system configuration template (zone2-template.xml) we created earlier in a more convenient location:
root@global_zone:~# cp /zones/zone1/root/root/zone2-template.xml /zones
Verify that the copy operation for the system configuration template succeeded:
root@global_zone:~# ls /zones/zone2-template.xml
/zones/zone2-template.xml
Create zone2 by Cloning zone1
Next, create zone2 using the zone2-profile file and the zonecfg command:
root@global_zone:~# zonecfg -z zone2 -f /zones/zone2-profile
Then perform the clone of zone1 using the zoneadm command, as shown in Listing 15. Remember to use the full path to the system configuration template (zone2-template.xml). We will also use the time command in order to measure how much time it takes for the zone to be cloned.
root@global_zone:~# time zoneadm -z zone2 clone -c /zones/zone2-template.xml zone1
The following ZFS file system(s) have been created:
rpool/zones/zone2
Progress being logged to /var/log/zones/zoneadm.20131027T090026Z.zone2.clone
Log saved in non-global zone as /zones/zone2/root/var/log/zones/zoneadm.20131027T090026Z.zone2.clone
real 0m36.843s
user 0m7.323s
sys 0m9.878s
Listing 15
In Listing 15, we can see from the time command output that it took approximately 37 seconds to clone the zone. This is the fastest way to create a new zone on the same system as an existing zone.
Next, boot the zone:
root@global_zone:~# zoneadm -z zone2 boot
Let's log in to the zone:
root@global_zone:~# zlogin zone2
Let's wait one minute for the zone's services to come up, and then verify that they are up and running by using the svcs -xv command, which checks the status of the zone's services. If all the services are running without any issues, the command returns to the prompt without printing any message.
root@zone2:~# svcs -xv
Exit from the zone using the exit command:
root@zone2:~# exit
logout
[Connection to zone 'zone2' pts/1 closed]
Create the Third Zone (zone3)
We will now create zone3 on a second system, as shown in Figure 2. There will be three steps in this process:
1. Create zone profile and system configuration files for zone3 by using the files from zone2.
2. Use ZFS snapshot and send functionality to archive zone1's ZFS file system to a single image file that can be used to create zone3 on the second system.
3. Install zone3.
Create Zone Profile and System Configuration Files for zone3
First, let's create the zone profile file for zone3 by copying the zone profile file for zone2:
root@global_zone:~# cp /zones/zone2-profile /zones/zone3-profile
Then, using your favorite editor, edit the file to change the zonepath and the physical network name:
root@global_zone:~# vi /zones/zone3-profile
create -b
set brand=solaris
set zonepath=/zones/zone3
set autoboot=true
set ip-type=exclusive
add net
set physical=vnic3
end
Copy the system configuration template for zone2 to create a similar template for zone3:
root@global_zone:~# cp /zones/zone2-template.xml /zones/zone3-template.xml
Edit the file to change the zone name to zone3, the VNIC name to vnic3, and the IP address to 10.0.0.3:
root@global_zone:~# vi /zones/zone3-template.xml
Verify the file modifications:
root@global_zone:~# egrep "10.0.0.3|zone3|vnic3" /zones/zone3-template.xml
You should get the following output from the egrep command:
<propval type="astring" name="nodename" value="zone3"/>
<propval type="astring" name="name" value="vnic3/v6"/>
<propval type="net_address_v4" name="static_address" value="10.0.0.3/24"/>
<propval type="astring" name="name" value="vnic3/v4"/>
Use ZFS Snapshot and Send Functionality to Create a Zone Image
We can use the ZFS "snapshot" and "send" functionality to archive zone1's ZFS file system to a single image file that can be used to create zones on a different system.
An Oracle Solaris ZFS snapshot is a read-only copy of an Oracle Solaris ZFS file system or volume. ZFS snapshots can be created almost instantly and initially consume no additional disk space within the ZFS pool. Snapshots are a valuable tool for system administrators who need to perform backups. For more information about ZFS snapshots, see "Working with Oracle Solaris ZFS Snapshots." (PDF)
First, get the name of zone1's file system:
root@global_zone:~# zfs list | grep zone1
rpool/zones/zone1 919M 251G 33K /zones/zone1
rpool/zones/zone1/rpool
Then take a recursive ZFS snapshot of the zone's ZFS storage pool (rpool), as shown in Listing 16:
root@global_zone:~# time zfs snapshot -r rpool/zones/zone1@archive
real 0m0.406s
user 0m0.007s
sys 0m0.026s
Listing 16
In Listing 16, we can see from the time command's output that it took only 0.4 seconds to create the ZFS snapshot!
Verify the ZFS snapshot creation:
root@global_zone:~# zfs list -t snapshot rpool/zones/zone1@archive
NAME USED AVAIL REFER MOUNTPOINT
rpool/zones/zone1@archive 0 - 33K -
Now, archive the snapshot using zfs send; in addition, we will use the bzip2 command to compress the image in order to reduce its size:
root@global_zone:~# zfs send -rc rpool/zones/zone1@archive | bzip2 > /var/tmp/zone1.zfs.bz2
Note: This process will take several minutes to complete.
Verify the image creation, as shown in Listing 17:
root@global_zone:~# ls -lh /var/tmp/zone1.zfs.bz2
-rw-r--r-- 1 root root 326M Oct 27 02:42 /var/tmp/zone1.zfs.bz2
Listing 17
In Listing 17, we can see that the image size is 326 megabytes.
Note: If desired, you can put the zone image on an NFS share in order to create network-based Oracle Solaris Zones image repository. For NFS share examples, see "How to Migrate a Non-Global Zone Using ZFS Archives."
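As a minimal sketch of such a repository (the dataset and path names below are hypothetical; see the referenced article for complete NFS share examples), you could create a ZFS file system, share it over NFS, and copy the image into it:
root@global_zone:~# zfs create rpool/export/zone-images
root@global_zone:~# zfs set share.nfs=on rpool/export/zone-images
root@global_zone:~# cp /var/tmp/zone1.zfs.bz2 /export/zone-images/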
Now, copy the zone image and configuration files to the second system (global2) using the scp command:
root@global_zone:~# scp /zones/zone3-template.xml /zones/zone3-profile /var/tmp/zone1.zfs.bz2 root@global2:/var/tmp
zone3-template.xml 100% |*****************************| 3021 00:00
zone3-profile 100% |*****************************| 126 00:00
zone1.zfs.bz2 100% |*****************************| 326 MB 00:07
Log in to the second system using the ssh command:
root@global_zone:~# ssh global2
On the second system, let's create the VNIC vnic3:
root@global2:~# dladm create-vnic vnic3 -l net0
Verify the VNIC creation:
root@global2:~# dladm show-vnic vnic3
LINK OVER SPEED MACADDRESS MACADDRTYPE VIDS
vnic3 net0 1000 2:8:20:db:a4:54 random 0
In a previous step, we copied three files from the first system to the second system (global2); let's list the contents of the /var/tmp directory:
root@global2:~# ls /var/tmp
You should see three files there:
- zone1.zfs.bz2, the zone image
- zone3-profile, the zone profile file
- zone3-template.xml, the zone system configuration file

Decompress the zone image using the bzip2 command:
root@global2:~# bzip2 -d /var/tmp/zone1.zfs.bz2
Note: The decompression process can take several minutes to complete.

Configure zone3 using the zonecfg command:
root@global2:~# zonecfg -z zone3 -f /var/tmp/zone3-profile
Install zone3
Next, we will install the zone using the zoneadm command, as shown in Listing 18. The zone configuration can be automated during the zone installation if a system configuration file is provided. We will also use the time command in order to check how fast the zone is created when the image and system configuration profile are provided as arguments to the zoneadm command.
In Listing 18, we will use the following options:
- -z zone3 specifies the zone name.
- -a /var/tmp/zone1.zfs specifies the image name.
- -u unconfigures the zone configuration (that is, removes all the zone configuration, such as the host name and name services information), since we are using a new system configuration file (zone3-template.xml).
- -c /var/tmp/zone3-template.xml specifies the name of the system configuration file.
root@global2:~# time zoneadm -z zone3 install -a /var/tmp/zone1.zfs -u -c /var/tmp/zone3-template.xml
The following ZFS file system(s) have been created:
rpool/zones
rpool/zones/zone3
Progress being logged to /var/log/zones/zoneadm.20131027T094458Z.zone3.install
Installing: This may take several minutes...
(...)
real 2m37.726s
user 1m42.381s
sys 0m27.423s
Listing 18
In Listing 18, we can see from the time command output that it took 2 minutes and 37 seconds to install the zone using the image and the system configuration file.
Boot the new zone:
root@global2:~# zoneadm -z zone3 boot
Verify the zone's status using the zoneadm command, as shown in Listing 19:
root@global2:~# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
2 zone3 running /zones/zone3 solaris excl
Listing 19
In Listing 19, we can see that the zone is running now.
Let's log in to the zone:
root@global2:~# zlogin zone3
Let's wait one minute for the zone's services to be up. Then verify that the zone's services are up and running:
root@zone3:~# svcs -xv
Verify the network configuration using the ipadm command, as shown in Listing 20:
root@zone3:~# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
vnic3/v4 static ok 10.0.0.3/24
lo0/v6 static ok ::1/128
vnic3/v6 addrconf ok fe80::8:20ff:fec0:cd0/10
Listing 20
As you can see in Listing 20, vnic3 has IP address 10.0.0.3.
Verify the Zones and the Network Configuration
Let's return to the first system and boot zone1:
root@global_zone:~# zoneadm -z zone1 boot
Now, check the status of the zones that we created on the first system (zone1 and zone2), as shown in Listing 21:
root@global_zone:~# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
1 zone1 running /zones/zone1 solaris excl
2 zone2 running /zones/zone2 solaris excl
Listing 21
In Listing 21, we can see that the status of zone1 and zone2 is running.
Log in to zone2:
root@global_zone:~# zlogin zone2
Verify the zone network configuration using the ipadm command, as shown in Listing 22:
root@zone2# ipadm show-addr
ADDROBJ TYPE STATE ADDR
lo0/v4 static ok 127.0.0.1/8
vnic2/v4 static ok 10.0.0.2/24
lo0/v6 static ok ::1/128
vnic2/v6 addrconf ok fe80::8:20ff:fe7c:9c6/10
Listing 22
As you can see in Listing 22, vnic2 has IP address 10.0.0.2.
Edit /etc/hosts in order to add the zone1 entry:
root@zone2:~# echo "10.0.0.1 zone1" >> /etc/hosts
From zone2, check the network connectivity to zone1 and zone3 using the ping command:
root@zone2:~# ping zone1
zone1 is alive
root@zone2:~# ping zone3
zone3 is alive
Note: In some environments, the Oracle Solaris 11 firewall might block network traffic. If your security policy allows it, you can disable the firewall service using the svcadm disable ipfilter command or add a firewall rule in order to enable network traffic between the two environments. For more Oracle Solaris firewall examples, see Securing the Network in Oracle Solaris 11.1.
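For instance, a rule such as the following would allow TCP traffic from the 10.0.0.0/24 subnet. This is a sketch only: it assumes the ipfilter service is reading a custom policy from /etc/ipf/ipf.conf, and your rule set and addresses will differ:
root@zone2:~# echo "pass in quick proto tcp from 10.0.0.0/24 to any keep state" >> /etc/ipf/ipf.conf
root@zone2:~# svcadm refresh ipfilter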
Exit from zone2 using the exit command:
root@zone2:~# exit
logout
[Connection to zone 'zone2' pts/1 closed]
As we can see, Oracle Solaris Zones can benefit from the underlying network HA that DLMP aggregation provides—without the need to set up anything in the non-global zone or on the network switches.
Things to Remember
- The zoneadm, zonecfg, and zlogin commands are used to install and administer Oracle Solaris Zones.
- Use zfs snapshot and zfs send to create a zone image that can be moved to a different system.
- Verify that you are in the global zone using the zonename command.
- Use the ipadm command to see the IP address configuration.
- The svcs command is used to check the status of a zone's services.
- VNICs can be created automatically using the anet zonecfg property.

The next step is to check the network high availability by disabling one of the data links that we used to build the aggr0 aggregation. To do this, we will test a physical NIC failure during a network load that is generated by using the iperf network-performance tool.

In order to perform an iperf measurement, you must establish both a server on zone1 and a client on zone3 to generate the network traffic, as shown in Figure 4.
Figure 4. Layout of the network-performance test
Run the Network-Performance Tool
Log in to zone1:
root@global_zone:~# zlogin zone1
On zone1, run the iperf command in server mode by using the following options:
- -s specifies server mode.
- -l 128k sets the length of the read/write buffer (128 K).

root@zone1:~# iperf -s -l 128k
After running the iperf command, you will see the following message on the terminal:
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 125 KByte (default)
------------------------------------------------------------
The next step is to run the iperf client on zone3.
Note: You don't need to install the iperf tool in zone3. Earlier, when we used the ZFS "snapshot" and "send" capability to create a zone image, every package that was installed in the original zone (zone1, where the image was taken) was put on zone3. This is another benefit of the zone cloning process.
From another terminal window, log in to zone3:
root@global2:~# zlogin zone3
Edit /etc/hosts in order to add the zone1 entry:
root@zone3:~# echo "10.0.0.1 zone1" >> /etc/hosts
From zone3, check the network connectivity to zone1 and zone2 by using the ping command:
root@zone3:~# ping zone1
zone1 is alive
root@zone3:~# ping zone2
zone2 is alive
Run the iperf command with the following options to run the test in client (that is, loader) mode on zone3:
- -c specifies the client mode.
- -l 128k sets the length of the read/write buffer (128 K).
- -P 4 specifies the number of parallel client threads to run (four).
- -i 1 specifies that there should be a one-second pause between periodic bandwidth reports.
- -t 360 specifies the time in seconds to transmit data (360 seconds).

root@zone3:~# iperf -c zone1 -l 128k -P 4 -i 1 -t 360
After running the iperf command, you will start to see runtime statistics:
------------------------------------------------------------
Client connecting to zone1, TCP port 5001
TCP window size: 48.0 KByte (default)
------------------------------------------------------------
[ 7] local 10.0.0.3 port 55262 connected with 10.0.0.1 port 5001
[ 5] local 10.0.0.3 port 56078 connected with 10.0.0.1 port 5001
[ 6] local 10.0.0.3 port 46789 connected with 10.0.0.1 port 5001
[ 4] local 10.0.0.3 port 36639 connected with 10.0.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 4] 0.0- 1.0 sec 27.9 MBytes 234 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 5] 0.0- 1.0 sec 27.8 MBytes 233 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 7] 0.0- 1.0 sec 28.5 MBytes 239 Mbits/sec
[ ID] Interval Transfer Bandwidth
[ 6] 0.0- 1.0 sec 28.0 MBytes 235 Mbits/sec
...
Display In-Progress Network Statistics
We will use the dlstat(1M) command to display network statistics for the vnic1 network interface. The dlstat command reports runtime network usage statistics for physical or virtual data links (such as VNICs).
Open another terminal on the first system and log in to zone1. Then run the command shown in Listing 23 to display network statistics for vnic1 at one-second time intervals:
root@zone1:~# dlstat -i 1 vnic1
LINK IPKTS RBYTES OPKTS OBYTES
vnic1 12.74M 18.06G 1.61M 105.94M
vnic1 87.33K 123.83M 11.00K 726.13K
vnic1 87.66K 123.89M 11.04K 728.77K
vnic1 87.81K 124.05M 11.09K 731.87K
vnic1 87.69K 124.26M 11.00K 726.20K
...
Listing 23
As you can see in Listing 23, the dlstat command displays the following information:
- Inbound packets (IPKTS)
- Received bytes (RBYTES)
- Outbound packets (OPKTS)
- Outgoing bytes (OBYTES)

We can see that vnic1 is receiving approximately 124 MB of network traffic per second.
Keep the terminal that is displaying the dlstat output open; we will return to it in a few steps.
Test the DLMP Aggregation's Redundancy
Let's remove the net0 data link from the aggr0 aggregation.
From the global zone, open another terminal and verify the data links that are associated with aggr0:
root@global_zone:~# dladm show-link aggr0
LINK CLASS MTU STATE OVER
aggr0 aggr 1500 up net0 net1 net2 net3
Remove the net0 interface from the aggr0 aggregation by using the following command:
root@global_zone:~# dladm remove-aggr -l net0 aggr0
Verify that net0 has been removed from the aggr0 aggregation, as shown in Listing 24:
root@global_zone:~# dladm show-link aggr0
LINK CLASS MTU STATE OVER
aggr0 aggr 1500 up net1 net2 net3
Listing 24
In Listing 24, we can see that net0 is no longer associated with the aggr0 aggregation.
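(Optional) From the global zone, you can also watch how the aggregation redistributes traffic across its remaining ports by using the dlstat show-aggr subcommand (a sketch; the columns displayed vary by release):
root@global_zone:~# dlstat show-aggr -i 1 aggr0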
Let's return to the terminal on zone1 where the dlstat command is running, as shown in Listing 25:
root@zone1:~#
LINK IPKTS RBYTES OPKTS OBYTES
vnic1 12.74M 18.06G 1.61M 105.94M
vnic1 87.33K 123.83M 11.00K 726.13K
vnic1 87.66K 123.89M 11.04K 728.77K
^C
Listing 25
Note: To stop the dlstat command, press Ctrl-C.
In Listing 25, we can see that the iperf test continues to run despite the fact that net0 is no longer in the aggregation.
Once the network load is finished, iperf will print the benchmark results summary shown in Listing 26:
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-360.0 sec 9.84 GBytes 235 Mbits/sec
[SUM] 0.0-360.0 sec 39.3 GBytes 938 Mbits/sec
Listing 26
As we can see in Listing 26, the data link removal didn't affect the network connectivity to zone1.
(Optional) Add net0 back to the aggr0 aggregation:
root@global_zone:~# dladm add-aggr -l net0 aggr0
Verify that data link net0 has been associated with aggr0 again:
root@global_zone:~# dladm show-link aggr0
LINK CLASS MTU STATE OVER
aggr0 aggr 1500 up net1 net2 net3 net0
Things to Remember
- The iperf tool is used to test network throughput between two environments.
- The dlstat command reports runtime network usage statistics for physical or virtual data links.

In this article, we saw how to add high availability (HA) to a network infrastructure using DLMP aggregation. Specifically, we explored how to set up a DLMP aggregation in the global zone, create VNICs on top of the aggregation, and assign those VNICs to Oracle Solaris Zones. Then we tested the network HA by removing a physical network card from the DLMP aggregation.
In Part 2 of this series, we will explore how to secure the network and perform typical network management operations for an environment that uses DLMP aggregations.
Orgad Kimchi is a principal software engineer on the ISV Engineering team at Oracle (formerly Sun Microsystems). For the past six years, he has specialized in virtualization, big data, and cloud computing technologies.
Nicolas Droux is the chief architect for Solaris Kernel Networking at Oracle. He has more than 20 years of experience working on operating system kernels, networking, virtualization, security, I/O, performance, HPC, and cloud architectures.
Revision 1.0, 07/09/2014