Getting Started with Puppet on Oracle Solaris 11
by Glynn Foster
Published June 2014
Table of Contents
- About Puppet
- Overview of Using Puppet to Administer Systems
- Using Puppet for Oracle Solaris 11 System Configuration
- Performing Additional Configuration Using Stencils
- Summary
- See Also
- About the Author
How to automate system configuration across your heterogeneous data center.
About Puppet
Puppet is a popular open source configuration management tool that's now included with Oracle Solaris 11.2. Using a declarative language, administrators can describe the system configuration that they would like to apply to a system or a set of systems, helping to automate repetitive tasks, quickly deploy applications, and manage change across the data center. These capabilities are increasingly important as administrators manage more and more systems—including virtualized environments. In addition, automation can reduce the human errors that can occur with manual configuration.
Puppet is usually configured to use a client/server architecture where nodes (agents) periodically connect to a centralized server (master), retrieve configuration information, and apply it. The Puppet master controls the configuration that is applied to each connecting node.
Overview of Using Puppet to Administer Systems
This article covers the basics of administering systems with Puppet using an example of a single master (master.oracle.com) and single agent (agent.oracle.com). To learn more about Puppet, including more-advanced configuration management options, check out the Puppet 3 Reference Manual.
Installing Puppet
Puppet is available through a single package in the Oracle Solaris Image Packaging System repository that provides the ability to define a system as either a master or an agent. By default, the package is not installed with the Oracle Solaris 11 media, as indicated by the output shown in Listing 1 from using the pkg info command to query the system:
root@master:~# pkg info -r puppet
Name: system/management/puppet
Summary: Puppet - configuration management toolkit
Description: Puppet is a flexible, customizable framework designed to help
system administrators automate the many repetitive tasks they
regularly perform. As a declarative, model-based approach to IT
automation, it lets you define the desired state - or the "what"
- of your infrastructure using the Puppet configuration
language. Once these configurations are deployed, Puppet
automatically installs the necessary packages and starts the
related services, and then regularly enforces the desired state.
Category: System/Administration and Configuration
State: Not installed
Publisher: solaris
Version: 3.4.1
Build Release: 5.11
Branch: 0.175.2.0.0.37.1
Packaging Date: April 14, 2014 08:03:48 PM
Size: 3.87 MB
FMRI: pkg://solaris/system/management/puppet@3.4.1,5.11-0.175.2.0.0.37.1:20140414T200348Z
Listing 1
To install the Puppet package, use the pkg install command, as shown in Listing 2:
root@master:~# pkg install puppet
Packages to install: 3
Mediators to change: 1
Services to change: 2
Create boot environment: No
Create backup boot environment: No
DOWNLOAD PKGS FILES XFER (MB) SPEED
Completed 3/3 18354/18354 56.1/56.1 216k/s
PHASE ITEMS
Installing new actions 21388/21388
Updating package state database Done
Updating package cache 0/0
Updating image state Done
Creating fast lookup database Done
Updating package cache 1/1
root@master:~# which puppet
/usr/sbin/puppet
Listing 2
Configuring Masters and Agents
Now that we've installed Puppet, it's time to configure our master and agents. In this article, we will use two systems: one that will act as the Puppet master and another that will be an agent node. In reality, you may have hundreds or thousands of nodes communicating with one or more master servers.
When you install Puppet on Oracle Solaris 11, there are two services that are available, as shown in Listing 3: one service for the master and another service for agents:
root@master:~# svcs -a | grep puppet
disabled 16:04:54 svc:/application/puppet:agent
disabled 16:04:55 svc:/application/puppet:master
Listing 3
Puppet has been integrated with the Oracle Solaris Service Management Facility configuration repository so that administrators can take advantage of a layered configuration (which helps preserve configuration during updates). Service Management Facility stencils provide seamless mapping between the configuration stored in the Service Management Facility configuration repository and the traditional Puppet configuration file, /etc/puppet/puppet.conf.
The first step is to configure and enable the master service using the Service Management Facility:
root@master:~# svccfg -s puppet:master setprop config/server=master.oracle.com
root@master:~# svcadm enable puppet:master
root@master:~# svcs puppet:master
STATE STIME FMRI
online 17:38:42 svc:/application/puppet:master
Listing 4
As you can see in Listing 4, the Puppet master service is now online. Let's switch over to the node that will be controlled by the master and configure it. We will set the value of config/server to point to our master, as shown in Listing 5:
root@agent:~# svccfg -s puppet:agent setprop config/server=master.oracle.com
root@agent:~# svccfg -s puppet:agent refresh
Listing 5
Once we have done this, we can test our connection by using the puppet agent command with the --test option, as shown in Listing 6. More importantly, this step also creates a new Secure Sockets Layer (SSL) key and sets up a request for authentication between the agent and the master.
root@agent:~# puppet agent --test
Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for agent.oracle.com
Info: Certificate Request fingerprint (SHA256): E0:1D:0F:18:72:B7:CE:A7:83:E4:48:D5:F8:93:36:15:55:0A:B9:C8:E5:B1:CE:D9:3E:0A:68:01:BE:F7:76:47
Exiting; no certificate found and waitforcert is disabled
Listing 6
If we move back to our master, we can use the puppet cert list command to view outstanding certificate requests from connecting clients:
root@master:~# puppet cert list
"agent.oracle.com" (SHA256) E0:1D:0F:18:72:B7:CE:A7:83:E4:48 :D5:F8:93:36:15:55:
0A:B9:C8 :E5:B1:CE:D9:3E:0A:68:01:BE:F7:76:47
Listing 7
In Listing 7, we can see that a request has come in from agent.oracle.com. Assuming this is good, we can now go ahead and sign this certificate using the puppet cert sign command, as shown in Listing 8:
root@master:~# puppet cert sign agent.oracle.com
Notice: Signed certificate request for agent.oracle.com
Notice: Removing file Puppet::SSL::CertificateRequest agent at '/etc/puppet/ssl/ca/requests/solaris.pem'
Listing 8
Let's return to the agent again and retest our connection to make sure the authentication has been correctly set up, as shown in Listing 9:
root@agent:~# puppet agent --test
Info: Caching certificate for agent.oracle.com
Info: Caching certificate_revocation_list for ca
Info: Caching certificate for agent.oracle.com
Info: Retrieving plugin
Info: Caching catalog for agent.oracle.com
Info: Applying configuration version '1400782295'
Notice: Finished catalog run in 0.18 seconds
Listing 9
And finally, let's enable the agent service, as shown in Listing 10:
root@agent:~# svcadm enable puppet:agent
root@agent:~# svcs puppet:agent
STATE STIME FMRI
online 18:20:32 svc:/application/puppet:agent
Listing 10
Puppet Resources, Resource Types, and Manifests
Puppet uses the concept of resources and resource types as a way of describing the configuration of a system. Examples of resources include a running service, a software package, a file or directory, and a user on a system. Each resource is modeled as a resource type, which is a higher-level abstraction defined by a title and a series of attributes and values.
Puppet has been designed to be cross-platform compatible such that different implementations can be created for similar resources using platform-specific providers. For example, installing a package using Puppet would use the Image Packaging System on Oracle Solaris 11 and RPM on Red Hat Enterprise Linux. This ability enables administrators to use a common set of configuration definitions to manage multiple platforms. This combination of resource types and providers is known as the Puppet Resource Abstraction Layer (RAL). Using a declarative language, administrators can describe system resources and their state; this information is stored in files called manifests.
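As a sketch of this abstraction, a single declaration such as the following (the ntp package name is illustrative, not taken from the article) is all that's needed; the RAL selects the appropriate provider, such as the Image Packaging System on Oracle Solaris 11 or RPM on Red Hat Enterprise Linux, on each platform:

package { 'ntp':
  ensure => 'installed',
}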
To see a list of resource types that are available on an Oracle Solaris 11 system, we can use the puppet resource command with the --types option:
root@master:~# puppet resource --types
address_object
address_properties
augeas
boot_environment
computer
cron
dns
etherstub
exec
file
filebucket
group
host
interface
interface_properties
ip_interface
ip_tunnel
ipmp_interface
k5login
ldap
link_aggregation
link_properties
macauthorization
mailalias
maillist
mcx
mount
nagios_command
nagios_contact
nagios_contactgroup
nagios_host
nagios_hostdependency
nagios_hostescalation
nagios_hostextinfo
nagios_hostgroup
nagios_service
nagios_servicedependency
nagios_serviceescalation
nagios_serviceextinfo
nagios_servicegroup
nagios_timeperiod
nis
notify
nsswitch
package
pkg_facet
pkg_mediator
pkg_publisher
pkg_variant
protocol_properties
resources
router
schedule
scheduled_task
selboolean
selmodule
service
solaris_vlan
ssh_authorized_key
sshkey
stage
svccfg
tidy
user
vlan
vni_interface
vnic
whit
yumrepo
zfs
zone
zpool
root@master:~# puppet resource --types | wc -l
72
Listing 11
As you can see in Listing 11, there are currently 72 resource types available on this system. Many of these are part of the core Puppet offering and won't necessarily make sense within an Oracle Solaris 11 context. However, having these resource types means that you can manage non-Oracle Solaris 11 systems as agents if you wish. We can view more information about those resource types using the puppet describe command with the --list option, as shown in Listing 12:
root@master:~# puppet describe --list
These are the types known to puppet:
address_object - Manage the configuration of Oracle Solaris ad ...
address_properties - Manage Oracle Solaris address properties
augeas - Apply a change or an array of changes to the ...
boot_environment - Manage Oracle Solaris Boot Environments (BEs)
computer - Computer object management using DirectorySer ...
cron - Installs and manages cron jobs. Every cron re ...
dns - Manage the configuration of the DNS client fo ...
etherstub - Manage the configuration of Solaris etherstub ...
exec - Executes external commands. It is critical th ...
file - Manages files, including their content, owner ...
filebucket - A repository for storing and retrieving file ...
group - Manage groups. On most platforms this can onl ...
host - Installs and manages host entries. For most s ...
interface - This represents a router or switch interface. ...
interface_properties - Manage Oracle Solaris interface properties
ip_interface - Manage the configuration of Oracle Solaris IP ...
ip_tunnel - Manage the configuration of Oracle Solaris IP ...
ipmp_interface - Manage the configuration of Oracle Solaris IP ...
k5login - Manage the `.k5login` file for a user. Specif ...
ldap - Manage the configuration of the LDAP client f ...
link_aggregation - Manage the configuration of Oracle Solaris li ...
link_properties - Manage Oracle Solaris link properties
macauthorization - Manage the Mac OS X authorization database. S ...
mailalias - Creates an email alias in the local alias dat ...
maillist - Manage email lists. This resource type can on ...
mcx - MCX object management using DirectoryService ...
mount - Manages mounted filesystems, including puttin ...
nagios_command - The Nagios type command. This resource type i ...
nagios_contact - The Nagios type contact. This resource type i ...
nagios_contactgroup - The Nagios type contactgroup. This resource t ...
nagios_host - The Nagios type host. This resource type is a ...
nagios_hostdependency - The Nagios type hostdependency. This resource ...
nagios_hostescalation - The Nagios type hostescalation. This resource ...
nagios_hostextinfo - The Nagios type hostextinfo. This resource ty ...
nagios_hostgroup - The Nagios type hostgroup. This resource type ...
nagios_service - The Nagios type service. This resource type i ...
nagios_servicedependency - The Nagios type servicedependency. This resou ...
nagios_serviceescalation - The Nagios type serviceescalation. This resou ...
nagios_serviceextinfo - The Nagios type serviceextinfo. This resource ...
nagios_servicegroup - The Nagios type servicegroup. This resource t ...
nagios_timeperiod - The Nagios type timeperiod. This resource typ ...
nis - Manage the configuration of the NIS client fo ...
notify - Sends an arbitrary message to the agent run-t ...
nsswitch - Name service switch configuration data
package - Manage packages. There is a basic dichotomy i ...
pkg_facet - Manage Oracle Solaris package facets
pkg_mediator - Manage Oracle Solaris package mediators
pkg_publisher - Manage Oracle Solaris package publishers
pkg_variant - Manage Oracle Solaris package variants
protocol_properties - Manage Oracle Solaris protocol properties
resources - This is a metatype that can manage other reso ...
router - Manages connected router.
schedule - Define schedules for Puppet. Resources can be ...
scheduled_task - Installs and manages Windows Scheduled Tasks. ...
selboolean - Manages SELinux booleans on systems with SELi ...
selmodule - Manages loading and unloading of SELinux poli ...
service - Manage running services. Service support unfo ...
solaris_vlan - Manage the configuration of Oracle Solaris VL ...
ssh_authorized_key - Manages SSH authorized keys. Currently only t ...
sshkey - Installs and manages ssh host keys. At this p ...
stage - A resource type for specifying run stages. Th ...
svccfg - Manage SMF service properties with svccfg(1M) ...
tidy - Remove unwanted files based on specific crite ...
user - Manage users. This type is mostly built to ma ...
vlan - Manages a VLAN on a router or switch.
vni_interface - Manage the configuration of Solaris VNI inter ...
vnic - Manage the configuration of Oracle Solaris Vi ...
whit - Whits are internal artifacts of Puppet's curr ...
yumrepo - The client-side description of a yum reposito ...
zfs - Manage zfs. Create destroy and set properties ...
zone - Manages Solaris zones.
Listing 12
Querying a System Using Resource Types
Before we dive into configuring systems, let's use Puppet to query a system based on the resource types we saw in Listing 11 and Listing 12. We will use the puppet resource command with the appropriate resource type. The puppet resource command converts the current system state into Puppet's declarative language, which can be used to enforce configuration on other systems. For example, let's query for the service state of our system by using the service resource type:
root@master:~# puppet resource service
service { 'svc:/application/cups/scheduler:default':
ensure => 'running',
enable => 'true',
}
...
service { 'svc:/system/zones-install:default':
ensure => 'running',
enable => 'true',
}
service { 'svc:/system/zones-monitoring:default':
ensure => 'running',
enable => 'true',
}
service { 'svc:/system/zones:default':
ensure => 'running',
enable => 'true',
}
Listing 13
In Listing 13, we can see a list of individual resources identified by their service names. Within each resource, there are two attributes—ensure and enable—and each has an associated value. This is the heart of Puppet's declarative language. A resource can be described as shown in Listing 14:
resource_type { 'title':
attribute1 => 'value1',
attribute2 => 'value2',
}
Listing 14
Each resource_type has a title, which is an identifying string used by Puppet that must be unique per resource type. Attributes describe the desired state of the resource. Most resources have a set of required attributes but also include a set of optional attributes. Additionally, each resource type has a special attribute, namevar, which is used by the target system and must be unique. If namevar is not specified, it often defaults to title. This is important when you are planning to manage multiple platforms. For example, suppose you want to ensure that the NTP service is installed on both Oracle Solaris 11 and Red Hat Enterprise Linux. The resource could be consistently titled ntp, but service/network/ntp could be used for namevar on Oracle Solaris 11 and ntpd could be used for namevar on Red Hat Enterprise Linux.
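To make this concrete, here is a hedged sketch in manifest form (the selector values follow the names mentioned above and are illustrative rather than verified package names):

$ntp_pkg = $osfamily ? {
  'Solaris' => 'service/network/ntp',
  'RedHat'  => 'ntpd',
}
package { 'ntp':
  name   => $ntp_pkg,
  ensure => 'present',
}

Every node sees the resource under the common title ntp, while the name attribute (the namevar for the package type) maps to the platform-specific package.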
Now let's use the zone resource type to query for the current state of the global zone and any non-global zones that have been configured or installed on the master:
root@master:~# puppet resource zone
zone { 'global':
ensure => 'running',
brand => 'solaris',
iptype => 'shared',
zonepath => '/',
}
Listing 15
In Listing 15, we can see that we do not have any non-global zones on this system; we just have the global zone.
Finally, let's take a look at what Image Packaging System publishers have been configured by looking at the pkg_publisher resource type, as shown in Listing 16:
root@master:~# puppet resource pkg_publisher
pkg_publisher { 'solaris':
ensure => 'present',
enable => 'true',
origin => ['http://pkg.oracle.com/solaris/beta'],
searchfirst => 'true',
sticky => 'true',
}
Listing 16
Performing Simple Configuration with Puppet
Now that we have set up a Puppet master and agent and learned a little about resource types and how resources are declared, we can start to enforce some configuration on the agent.
Puppet uses a main site manifest—located at /etc/puppet/manifests/site.pp—where administrators can centrally define resources that should be enforced on all agent systems. As you get more familiar with Puppet, the following approach is recommended:
- Use site.pp only for configuration that affects all agents.
- Split out specific agents into separate Puppet classes.
We won't cover the use of Puppet classes in this article, but we will use site.pp to define our resources.
For simplicity, let's take a look at the file resource type. Let's modify /etc/puppet/manifests/site.pp and include the following resource declaration:
file { '/custom-file.txt':
ensure => 'present',
content => "Hello World",
}
Listing 17
The declaration shown in Listing 17 uses the file resource type and two attributes—ensure and content—to ensure that a custom-file.txt file exists in the root directory on the agent node and that the file includes the content "Hello World."
Once we have saved the /etc/puppet/manifests/site.pp file, we can test that it is valid by using the puppet apply command. We'll use the -v option to increase the verbosity of the output and the --noop option to ensure that no changes are made (in essence, to do a dry run), as shown in Listing 18:
root@master:~# puppet apply -v --noop /etc/puppet/manifests/site.pp
Notice: Compiled catalog for master in environment production in 0.16 seconds
Info: Applying configuration version '1400794990'
Notice: /Stage[main]/Main/File[/custom-file.txt]/ensure: current_value absent, should be present (noop)
Notice: Class[Main]: Would have triggered 'refresh' from 1 events
Notice: Stage[Main]: Would have triggered 'refresh' from 1 events
Notice: Finished catalog run in 0.27 seconds
Listing 18
We could choose to apply this resource to the master itself by removing the -v and --noop options, running puppet apply again, and then checking for the existence and contents of the custom-file.txt file, as shown in Listing 19:
root@master:~# puppet apply /etc/puppet/manifests/site.pp
Notice: Compiled catalog for master in environment production in 0.16 seconds
Notice: /Stage[main]/Main/File[/custom-file.txt]/ensure: created
Notice: Finished catalog run in 0.28 seconds
root@master:~# ls -la /custom-file.txt
-rw------- 1 root root 15 May 22 21:45 /custom-file.txt
root@master:~# cat /custom-file.txt
Hello World
Listing 19
By default, agents contact the master server at 30-minute intervals (this can be changed in the configuration, if required). We can check that Puppet has enforced this configuration by looking to see whether the custom-file.txt file has appeared and by checking the Puppet agent log located at /var/log/puppet/puppet-agent.log:
root@agent:~# ls -la /custom-file.txt
-rw------- 1 root root 15 May 22 21:50 /custom-file.txt
root@agent:~# cat /custom-file.txt
Hello World
root@agent:~# tail /var/log/puppet/puppet-agent.log
....
2014-05-22 21:50:17 +0000 /Stage[main]/Main/File[/custom-file.txt]/ensure (notice): created
2014-05-22 21:50:17 +0000 Puppet (notice): Finished catalog run in 0.21 seconds
Listing 20
As shown in Listing 20, the configuration enforcement succeeded and the file was created on the agent system.
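If the default polling interval doesn't suit your environment, the runinterval option documented in puppet.conf(5) controls it. Assuming that option is mapped through the Service Management Facility stencil like the other configuration properties shown in this article, it could be adjusted as follows (the 10m value is just an example):

root@agent:~# svccfg -s puppet:agent setprop config/runinterval=10m
root@agent:~# svcadm refresh puppet:agent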
Using the Facter Utility
Puppet uses a utility called Facter to gather information about a particular node and send it to the Puppet master. This information is used to determine what system configuration should be applied to a node. To see some "facts" about a node, we can use the facter command, as shown in Listing 21:
root@master:~# facter osfamily
Solaris
root@master:~# facter operatingsystem
Solaris
root@master:~# facter ipaddress
10.0.2.15
root@master:~# facter hostname
solaris
Listing 21
To list all facts for a node, we can use the -p option:
root@master:~# facter -p
architecture => i86pc
facterversion => 1.6.18
hardwareisa => i386
hardwaremodel => i86pc
hostname => solaris
id => root
interfaces => lo0,net0
ipaddress => 10.0.2.15
ipaddress6 => ::
ipaddress_lo0 => 127.0.0.1
ipaddress_net0 => 10.0.2.15
ipaddress_net1 => 10.1.1.5
...
uptime => 0:22 hours
uptime_days => 0
uptime_hours => 0
uptime_seconds => 1320
virtual => virtualbox
Listing 22
As shown in Listing 22, a variety of facts can be queried on a given system. These facts can be exposed within resources as global variables that help programmatically decide what configuration should be enforced.
As an example, let's see how we could declare a file resource to populate a file with different content on different platforms. We will use the osfamily fact to detect our platform:
$file_contents = $osfamily ? {
'redhat' => "Hello RHEL",
'solaris' => "Hello Oracle Solaris",
}
file { '/custom-file.txt':
ensure => 'present',
content => $file_contents,
}
Listing 23
In Listing 23, we create a new variable called $file_contents and provide a conditional check using the osfamily fact. Then, depending on the platform type, we assign different contents to our file.
Matching Configuration to Specific Nodes
When managing configuration across a variety of systems, administrators might want to provide some conditional logic to control how a node gets matched to an appropriate configuration. For this task, we can use the node keyword within our manifests. For example, if we wanted to match the configuration for a specific host called agent1.oracle.com, we could use what's shown in Listing 24:
node agent1.oracle.com {
# Include resources here
}
Listing 24
Or, to match against agent1.oracle.com and agent2.oracle.com but provide a separate resource definition for agent3.oracle.com, we could use what's shown in Listing 25:
node agent1.oracle.com, agent2.oracle.com {
# Include resources here
}
node agent3.oracle.com {
# Include other resources here
}
Listing 25
The use of default as a node name is special, allowing for a fallback configuration for nodes that don't match other node definitions. You can define a fallback as shown in Listing 26:
node default {
# Insert other resources here
}
Listing 26
Using Puppet for Oracle Solaris 11 System Configuration
Now that we've covered the basics of Puppet, let's look at the specific resource types and providers that have been added to enable the administration of Oracle Solaris 11.2 systems. These resource types provide administrators with the ability to manage a wide range of Oracle Solaris technologies, including packaging, services, ZFS, Oracle Solaris Zones, and a diverse set of network configurations.
Managing Oracle Solaris Zones
We'll start by taking a look at the zone resource type. To get a better understanding of what attributes can be set, we'll use the puppet describe command again to look at the documentation:
root@master:~# puppet describe zone
zone
====
Manages Solaris zones.
Parameters
----------
- **archive**
The archive file containing an archived zone.
- **archived_zonename**
The archived zone to configure and install
- **brand**
The zone's brand type
- **clone**
Instead of installing the zone, clone it from another zone.
If the zone root resides on a zfs file system, a snapshot will be
used to create the clone; if it resides on a ufs filesystem, a copy
of the zone will be used. The zone from which you clone must not be running.
- **config_profile**
Path to the config_profile to use to configure a solaris zone.
This is set when providing a sysconfig profile instead of running the
sysconfig SCI tool on first boot of the zone.
- **ensure**
The running state of the zone. The valid states directly reflect
the states that `zoneadm` provides. The states are linear,
in that a zone must be `configured`, then `installed`, and
only then can be `running`. Note also that `halt` is currently
used to stop zones.
Valid values are `absent`, `configured`, `installed`, `running`.
- **id**
The numerical ID of the zone. This number is autogenerated
and cannot be changed.
- **install_args**
Arguments to the `zoneadm` install command. This can be used to create
branded zones.
- **iptype**
Displays exclusive or shared instance of IP.
- **name**
The name of the zone.
- **sysidcfg**
The text to go into the `sysidcfg` file when the zone is first
booted. The best way is to use a template:
# $confdir/modules/site/templates/sysidcfg.erb
system_locale=en_US
timezone=GMT
terminal=xterms
security_policy=NONE
root_password=<%= password %>
timeserver=localhost
name_service=DNS {domain_name=<%= domain %> name_server=<%= nameserver %>}
network_interface=primary {hostname=<%= realhostname %>
ip_address=<%= ip %>
netmask=<%= netmask %>
protocol_ipv6=no
default_route=<%= defaultroute %>}
nfs4_domain=dynamic
And then call that:
zone { myzone:
ip => "bge0:192.168.0.23",
sysidcfg => template("site/sysidcfg.erb"),
path => "/opt/zones/myzone",
realhostname => "fully.qualified.domain.name"
}
The `sysidcfg` only matters on the first booting of the zone,
so Puppet only checks for it at that time.
- **zonecfg_export**
Contains the zone configuration information. This can be passed in
in the form of a file generated by the zonecfg command, in the form
of a template, or a string.
- **zonepath**
The path to zone's file system.
Providers
---------
solaris
Listing 27
In Listing 27, we can see that one of the attributes is called zonecfg_export, and it gives us the ability to provide a zone configuration file. Let's quickly create one using the zonecfg command, as shown in Listing 28. We'll call the zone testzone for now, but this will be configurable when we use the zone resource type.
root@master:~# zonecfg -z testzone
Use 'create' to begin configuring a new zone.
zonecfg:testzone> create
create: Using system default template 'SYSdefault'
zonecfg:testzone> export -f /tmp/zone.cfg
zonecfg:testzone> exit
root@master:~# cat /tmp/zone.cfg
create -b
set zonepath=/system/zones/%{zonename}
set autoboot=false
set autoshutdown=shutdown
set ip-type=exclusive
add anet
set linkname=net0
set lower-link=auto
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=auto
end
Listing 28
We can now define our zone within our manifest as follows:
zone { 'myzone':
zonecfg_export => '/tmp/zone.cfg',
ensure => 'installed',
}
Listing 29
In Listing 29, you'll notice that we provided a value of /tmp/zone.cfg for our zone configuration file, and we set the ensure attribute to installed. The value of ensure matches the zone states configured, installed, and running. In this case, we'll be creating a zone called myzone on the agent node. Once we have applied the configuration, we'll wait to see what happens on the node:
root@agent:~# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
- myzone installed /system/zones/myzone solaris excl
Listing 30
As we can see in Listing 30, our zone is now configured and installed, and it is ready to be booted.
Adding Software Packages
Now we'll use the package resource type to add a new software package using the Image Packaging System.
Let's first confirm that the package is not currently installed on the agent node, as shown in Listing 31:
root@agent:~# pkg info nmap
pkg: info: no packages matching the following patterns you specified are
installed on the system. Try specifying -r to query remotely:
nmap
root@agent:~# pkg info -r nmap
Name: diagnostic/nmap
Summary: Network exploration tool and security / port scanner.
Description: Nmap is useful for inventorying the network, managing service
upgrade schedules, and monitoring host or service uptime.
Category: System/Administration and Configuration
State: Not installed
Publisher: solaris
Version: 6.25
Build Release: 5.11
Branch: 0.175.2.0.0.37.1
Packaging Date: April 14, 2014 06:21:31 PM
Size: 18.47 MB
FMRI: pkg://solaris/diagnostic/nmap@6.25,5.11-0.175.2.0.0.37.1:20140414T182131Z
Listing 31
The resource definition for this is simple. We set the title to nmap and ensure that the package is present, as shown in Listing 32:
package { 'nmap':
ensure => 'present',
}
Listing 32
After a short while, we can check our agent node to ensure the package has been correctly installed, as shown in Listing 33:
root@agent:~# pkg info nmap
Name: diagnostic/nmap
Summary: Network exploration tool and security / port scanner.
Description: Nmap is useful for inventorying the network, managing service
upgrade schedules, and monitoring host or service uptime.
Category: System/Administration and Configuration
State: Installed
Publisher: solaris
Version: 6.25
Build Release: 5.11
Branch: 0.175.2.0.0.37.1
Packaging Date: April 14, 2014 06:21:31 PM
Size: 18.47 MB
FMRI: pkg://solaris/diagnostic/nmap@6.25,5.11-0.175.2.0.0.37.1:20140414T182131Z
Listing 33
Earlier, we mentioned that Puppet enforces configuration. So let's try to remove the package from the agent node:
root@agent:~# pkg uninstall nmap
Packages to remove: 1
Services to change: 1
Create boot environment: No
Create backup boot environment: No
Planning linked: 0/1 done; 1 working: zone:myzone
Planning linked: 1/1 done
Downloading linked: 0/1 done; 1 working: zone:myzone
Downloading linked: 1/1 done
PHASE ITEMS
Removing old actions 913/913
Updating package state database Done
Updating package cache 1/1
Updating image state Done
Creating fast lookup database Done
Executing linked: 0/1 done; 1 working: zone:myzone
Executing linked: 1/1 done
Updating package cache 1/1
root@agent:~# which nmap
no nmap in /usr/bin /usr/sbin
Listing 34
As Listing 34 shows, the package is no longer installed. But, sure enough, when the agent node contacts the master after a short period, the package is reinstalled, as shown in Listing 35:
root@agent:~# pkg info nmap
Name: diagnostic/nmap
Summary: Network exploration tool and security / port scanner.
Description: Nmap is useful for inventorying the network, managing service
upgrade schedules, and monitoring host or service uptime.
Category: System/Administration and Configuration
State: Installed
Publisher: solaris
Version: 6.25
Build Release: 5.11
Branch: 0.175.2.0.0.37.1
Packaging Date: April 14, 2014 06:21:31 PM
Size: 18.47 MB
FMRI: pkg://solaris/diagnostic/nmap@6.25,5.11-0.175.2.0.0.37.1:20140414T182131Z
Listing 35
Creating ZFS Datasets
We'll take the simplest example of enforcing the existence of a ZFS dataset by using the following resource definition with the zfs resource type. We'll also set an additional attribute called readonly to on, as shown in Listing 36:
zfs { 'rpool/test':
ensure => 'present',
readonly => 'on',
}
Listing 36
We can quickly confirm that a new ZFS dataset has been created and that the readonly dataset property has been set, as shown in Listing 37:
root@agent:~# zfs list rpool/test
NAME USED AVAIL REFER MOUNTPOINT
rpool/test 31K 31.8G 31K /rpool/test
root@agent:~# zfs get readonly rpool/test
NAME PROPERTY VALUE SOURCE
rpool/test readonly on local
Listing 37
Performing Additional Configuration Using Stencils
As mentioned previously, all of Puppet's configuration is managed through Service Management Facility stencils. You should avoid directly editing /etc/puppet/puppet.conf, because such edits will be lost when the Puppet Service Management Facility services are restarted. Instead, configuration can be achieved by using svccfg and the configuration properties defined in puppet.conf(5), as shown in Listing 38:
root@master:~# svccfg -s puppet:master setprop config/<option> = "<value>"
root@master:~# svccfg -s puppet:master refresh
Listing 38
Using the Service Management Facility to store Puppet configuration also makes it very easy to configure Puppet environments, which are a useful way of dividing up your data center into an arbitrary number of environments.
For example, you could have the Puppet master manage configurations for development and production environments. This is easily achieved by creating new Service Management Facility instances of the Puppet service, as shown in Listing 39:
root@master:~# svccfg -s puppet
svc:/application/puppet> add production
svc:/application/puppet> add dev
svc:/application/puppet> exit
root@master:~# svccfg -s puppet:production setprop config/modulepath = \
"$confdir/environments/$environment/modules:$confdir/modules"
root@master:~# svccfg -s puppet:dev setprop config/manifest = "$confdir/manifests/site-dev.pp"
Listing 39
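An agent could then be pointed at one of these environments. Assuming the environment option from puppet.conf(5) is mapped through the stencil like the other configuration properties, something like the following would select the dev environment:

root@agent:~# svccfg -s puppet:agent setprop config/environment=dev
root@agent:~# svcadm refresh puppet:agent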
Summary
Puppet is an excellent tool for administrators who want to enforce configuration management across a wide range of platforms in their data centers. This article briefly touched on a small fraction of the capabilities of Puppet. With support for a new set of Oracle Solaris 11–based resource types—including packaging, service configuration, networking, virtualization, and data management—administrators can now benefit from the type of automation they have achieved on Linux-based platforms previously.
See Also
Here are some Puppet resources:
- The Puppet 3 Reference Manual
And here are some Oracle Solaris 11 resources:
- Download Oracle Solaris 11
- Access Oracle Solaris 11 product documentation
- Access all Oracle Solaris 11 how-to articles
- Learn more with Oracle Solaris 11 training and support
- See the official Oracle Solaris blog
- Check out The Observatory blogs for Oracle Solaris tips and tricks
- Follow Oracle Solaris on Facebook and Twitter
About the Author
Glynn Foster is a principal product manager for Oracle Solaris. He is responsible for a number of technology areas including OpenStack, the Oracle Solaris Image Packaging System, installation, and configuration management.
Revision 1.0, 05/30/2014