Newsletter > July 2006 > Solaris Spotlight - Solaris Zones
Solaris Zones - Slicing and Dicing Servers
by Bill Calkins

Since distributed computing was introduced in the 1970s, applications that were once centrally hosted on large mainframe-style servers have been distributed across arrays of networked servers. Over the years, these servers have become smaller and more powerful, but managing and utilizing them has become more complex, making server consolidation the latest buzz.

Containment and virtualization technologies have been introduced to facilitate consolidation and reduce server sprawl. They allow multiple applications to run on the same server hardware while providing each application with an isolated, protected environment. This article explains containment and virtualization, and describes how Sun Microsystems has implemented this technology in the Solaris 10 operating environment through the use of Zones.

Containers and Partitions
Utilization and efficiency of early mainframe computers were critical, so systems were designed to run many tasks simultaneously, using more than one operating system, at very high utilization rates. To provide multiple independent and contained execution environments, mainframes used hardware configuration managers (HCMs) or virtual machine monitors (VMMs), also called hypervisors. These software programs interact directly with the system hardware, and sometimes run on a dedicated component known as a service processor. HCM and VMM software allows the system hardware to be partitioned into multiple containers. When running on specialized hardware, the containers can be fully isolated, independent environments capable of being powered, configured, booted, and administered separately.

Containers constructed in this manner are called hardware domains or partitions, and may support different operating systems, or different releases of the same operating system, in each partition. Technologies such as Hewlett-Packard's vPars, IBM's LPARs, and Sun's Dynamic System Domains use this type of container technology. Operating systems and applications are fully contained within their respective partitions, and effectively run on separate hardware servers that happen to share the same physical enclosure. What happens within one partition - resource consumption, application misbehavior, security issues, and hardware faults - generally has no effect on other partitions. These hardware containment solutions originated in the 1960s and almost always require specialized equipment capable of hardware partitioning.

Virtualization
Virtualization abstracts access to physical hardware, hiding the implementation details from the software that uses it. An example of virtualization is a virtual computer: one physical computer running multiple operating systems simultaneously. Each instance of the operating system runs its own applications as if it were the only OS on the computer. To the user or application, the environment appears to be a dedicated computer with its own hostname, IP address, and process table, when in reality the underlying hardware is being shared.

Zones and Containers - Some refer to zones and containers interchangeably, as if they mean exactly the same thing. This is incorrect: containment is a concept, and a container is a specific implementation of that concept. For example, a bottle is a container for liquid; it protects the liquid from the surrounding environment and prevents the liquid from spreading into areas where it is unwanted. Solaris Containers is a technology comprising the resource management features, such as resource pools, together with Solaris Zones. Zones are thus a subset of Containers, and the two terms should not be used interchangeably.

In a computing environment, it may be important to contain applications, processes, users, and even the operating system. In Solaris, we refer to these categories as a service. A service is a long-lived set of software objects with well-defined states, error boundaries, start and stop mechanisms, and dependency relationships to other services. A service must be viewed and managed - in other words, contained - as a single entity. A container is therefore a bounded environment for a service; such environments can be implemented and managed using a wide variety of hardware and software technologies.

For more information on services in the Solaris 10 environment, refer to Chapter 3 of my Solaris 10 System Administration Exam Prep 2 guide, where SMF (the Service Management Facility) is described in detail.

Earlier, I described hardware partitioning, but in recent years, several commercial and open-source software-based containment solutions have emerged. These solutions do not require specialized hardware, and can run on a wide range of systems, ranging from laptops to enterprise-class servers. Now, server virtualization can be accomplished efficiently without the use of a separate VMM. This type of containment has been described as operating system virtualization, and is the approach taken with Solaris Containers. In the Solaris OS, virtual server environments are implemented using a type of container called a Solaris Zone. Other types of containers exist in the Solaris OS, such as projects and limit nodes. So, to be clear, a Zone is one type of container, one that encapsulates a server environment, limits the effects of that environment on other system activities (including other active zones), and protects the environment from outside influence. A container is defined as a bounded environment for a service, and a service is a group of processes managed as a whole. A zone is a container for the service or group of processes.

Ideally, a container solution should provide the following:

• Resource containment
The ability to allocate and manage resources assigned to the container, such as CPUs, memory, network and I/O bandwidth

• Security containment
The bounding of user, namespace, and process visibility, hiding activity in each container from other containers, limiting unwanted process interaction, and limiting access to other containers

• Fault containment
Hardware errors (failed components) and software errors (such as memory leaks) in one container should not affect other containers

• Scalability
The ability to exploit enterprise class systems by creating and managing a potentially large number of containers without significant performance overhead

• Flexible resource allocation
The ability to share resources from a common pool or to dedicate resources to specific containers

• Workload visibility
The ability to view system activity both from within the container and from a global system perspective

• Management framework
Tools and procedures to create, start, stop, re-create, restart, reboot, move, and monitor containers, as well as provisioning and versioning

• Hardware independence
Where possible, containment technologies should not require special hardware

• Native operating system support
Solutions should not require a custom or ported kernel, as this has an impact on ISV supportability

Solaris Zones
Solaris Zones are a major new feature of Solaris 10, providing, at no extra cost, facilities that were not available in previous releases of the Operating Environment. Zones allow multiple virtual environments - think of them as multiple instances of the operating system - to run on the same physical server. Previously, the only way of compartmentalizing an environment was to purchase a separate server, or to use an expensive high-end server capable of hardware partitioning, such as the E10K or E15K. Now you can create virtual environments on any machine capable of running the Solaris 10 Operating Environment. Zones provide a virtual operating system environment within a single running instance of Solaris 10, so that applications run in isolated, secure environments. This isolation prevents an application running in one zone from monitoring or affecting an application running in a different zone, even though both run on the same physical server; even a privileged user in one zone cannot monitor or access processes running in another. A further important aspect of zones is that a failing application, such as one that would traditionally have leaked all available memory or exhausted all CPU resources, can be limited so that it affects only the zone in which it is running. This is achieved by limiting the amount of physical resources on the system that the zone can use.

Types of Zones
There are two types of zones, global and non-global. Think of the global zone as the server itself - the traditional view of a Solaris system as we all know it, where you can log in as root and have full control of the entire system. The global zone is the default zone and is used for system-wide configuration and control. Every system contains a global zone, and there can be only one global zone on a physical Solaris server. A non-global zone is created from the global zone and is also managed by it. You can have up to 8192 non-global zones on a single physical system - the only real limitation is the capability of the server itself. Applications that run in a non-global zone are isolated from applications running in a separate non-global zone, allowing multiple versions of the same application to run on the same physical server.

Zone States
Non-global zones are referred to simply as zones and can be in a number of states, depending on the current state of configuration or readiness for operation. Note that zone states refer only to non-global zones, because the global zone is always running and represents the system itself; the only time the global zone is not running is when the server has been shut down. Table 1.1 describes the six states that a zone can be in:

Table 1.1 Zone States
State - Description
Configured - The zone's configuration has been completed and committed to stable storage. Additional configuration that must be done after the initial boot has yet to be done.
Incomplete - A zone is set to this state during an install or uninstall operation. Upon completion of the operation, it changes to the correct state.
Installed - A zone in this state has a confirmed configuration; the zoneadm command is used to verify that the zone will run on the designated Solaris system. Packages have been installed under the zone's root path. Even though the zone is installed, it still has no virtual platform associated with it.
Ready - The zone's virtual platform is established. The kernel creates the zsched process, the network interfaces are plumbed, and file systems are mounted. The system also assigns a zone ID at this stage, but there are as yet no processes associated with the zone.
Running - A zone enters this state when the first user process is created. This is the normal state for an operational zone.
Shutting Down / Down - Transitional states that are only visible while a zone is being halted. If a zone is unable to shut down for any reason, it will remain in one of these states.

Zone Features
It’s important to note the features of both the global zone and non-global zones.

The global zone has the following features:

  • The global zone is assigned zone ID 0 by the system.
  • It provides the single bootable instance of the Solaris Operating Environment that runs on the system.
  • It contains a full installation of Solaris system packages.
  • It can contain additional software, files, or data that were not installed through the package mechanism.
  • It contains a complete product database of all installed software components.
  • It holds configuration information specific to the global zone, such as the global zone hostname and the file system table.
  • It is the only zone that is aware of all file systems and devices on the system.
  • It is the only zone that is aware of non-global zones and their configuration.
  • It is the only zone from which a non-global zone can be configured, installed, managed, and uninstalled.

Non-global zones have the following features:

  • The non-global zone is assigned a zone ID by the system when it is booted.
  • It shares the Solaris kernel that is booted from the global zone.
  • It contains a subset of the installed Solaris system packages.
  • It can contain additional software packages, shared from the global zone.
  • It can contain additional software packages that are not shared from the global zone.
  • It can contain additional software, files, or data that was not installed using the package mechanism, or shared from the global zone.
  • It contains a complete product database of all software components that are installed in the zone. This includes software that was installed independently of the global zone as well as software shared from the global zone.
  • It is not aware of the existence of other zones.
  • It cannot install, manage, or uninstall other zones, including itself.
  • It contains configuration information specific to itself, the non-global zone, such as the non-global zone hostname and file system table.

Non-Global Zone Root File System Models
A non-global zone contains its own root (/) file system. The size and contents of this file system depend on how you configure the global zone and the amount of configuration flexibility that is required.

There is no limit on how much disk space a zone can use, but the zone administrator, normally the system administrator, must ensure that sufficient local storage exists to accommodate the requirements of all non-global zones being created on the system.

The system administrator can restrict the overall size of the non-global zone file system by using any of the following:
  • Standard disk partitions can be used to provide a separate file system for each non-global zone
  • Soft partitions can be used to divide disk slices or logical volumes into a number of partitions. Soft partitions were covered in Chapter 9, "Virtual File Systems, Swap Space, and Core Dumps."
  • A lofi-mounted file system can be used to hold the zone. For further information on the loopback file driver, see the manual pages for lofi and lofiadm.
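As a sketch, the lofi approach might look like this on a Solaris 10 system (the backing-file path, size, and lofi device number are illustrative; lofiadm prints the device it actually assigns):

```shell
# Create a backing file to hold the zone's file system
mkfile 1g /export/zone-disks/testzone.img
# Attach it to the loopback file driver; lofiadm prints the device name, e.g. /dev/lofi/1
lofiadm -a /export/zone-disks/testzone.img
# Build a UFS file system on the lofi device, then mount it as the zonepath
newfs /dev/rlofi/1
mkdir -p /export/zones/testzone
mount /dev/lofi/1 /export/zones/testzone
chmod 700 /export/zones/testzone
```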

Sparse Root Zones
When you create a non-global zone, you must decide how much of the global zone's file system the new zone should inherit. A sparse root zone optimizes sharing by mounting read-only loopback file systems from the global zone and installing only a subset of the system root packages locally; the majority of the root file system is inherited (shared) from the global zone. This model generally requires about 100 Megabytes of disk space when the global zone has all of the standard Solaris packages installed. A sparse root zone uses the inherit-pkg-dir resource, in which a list of directories to inherit from the global zone is specified.

Whole Root Zones
This model provides the greatest configuration flexibility because all of the required (and any other selected) Solaris packages are copied to the zone's private file system, unlike the sparse root model where loopback file systems are used. The disk space requirement for this model is considerably greater and is determined by evaluating the space used by the packages currently installed in the global zone.

Networking in a Zone Environment
On a system supporting zones, the zones can communicate with each other over the network, but even though the zones reside on the same physical system, network traffic is restricted so that applications running in one zone cannot interfere with applications running in a different zone.

Each zone has its own set of bindings and zones can all run their own network daemons. As an example, consider three zones all providing web server facilities using the apache package. Using zones, all three zones can host websites on port 80, the default port for http traffic, without any interference between them. This is because the IP stack on a system supporting zones implements the separation of network traffic between zones.

The only interaction allowed is for ICMP traffic to resolve problems, so that commands such as ping can be used to check connectivity.

Of course, when a zone is running it behaves like any other Solaris system on the network: you can telnet or ftp to the zone as if it were any other system, assuming the zone has these network services configured for use.

When a zone is created, a dedicated IP address is configured that identifies the host associated with the zone. In reality though, the zone's IP address is configured as a logical interface on the network interface specified in the zone's configuration parameters. Only the global zone has visibility of all zones on the system and can also inspect network traffic, using for example, snoop.
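From the global zone, you can see this logical interface with ifconfig; the output sketched below is illustrative, assuming a zone named testzone with address 192.168.0.43 configured on the hme0 interface used later in this article:

```shell
# Run from the global zone: a running zone's address appears as a
# logical interface (e.g. hme0:1) on the physical interface named
# in the zone's configuration
ifconfig -a
# ...
# hme0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
#         zone testzone
#         inet 192.168.0.43 netmask ffffff00 broadcast 192.168.0.255
```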

Zone Daemons
The zone management service is managed through the Service Management Facility (SMF); its service identifier is svc:/system/zones:default.
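You can check the state of this service from the global zone with the svcs command:

```shell
# Show detailed status of the zones service
svcs -l svc:/system/zones:default
```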

For more information on the Service Management Facility (SMF) in the Solaris 10 environment, my Solaris 10 System Administration Exam Prep 2 guide describes SMF in detail.

There are two daemon processes associated with zones, zoneadmd and zsched.

The zoneadmd daemon starts when a zone needs to be managed. An instance of zoneadmd will be started for each zone, so it is not uncommon to have multiple instances of this daemon running on a single server. It is started automatically by SMF and is also shut down automatically when no longer required.

The zoneadmd daemon carries out the following actions:

  • Allocates the zone ID and starts the zsched process
  • Sets system-wide resource controls
  • Prepares the zone's devices if any are specified in the zone configuration
  • Plumbs the virtual network interface
  • Mounts any loopback or conventional file systems

The zsched process is started by zoneadmd and exists for each active zone (a zone is said to be active when it is in the ready, running, or shutting down state). The job of zsched is to keep track of kernel threads running within the zone; it is also known as the zone scheduler.
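From the global zone you can observe both daemons; this sketch assumes an active zone named testzone (on Solaris 10, ps accepts a -z option to restrict the listing to a given zone):

```shell
# One zoneadmd instance runs for each zone being managed
ps -ef | grep zoneadmd
# zsched appears in the process listing of each active zone
ps -fz testzone | grep zsched
```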

Configuring a Zone
Before a zone can be installed and booted it has to be created and configured. This section deals with the initial configuration of a zone and describes the zone components.

A zone is configured using the zonecfg command. The zonecfg command is also used to verify that the resources and properties that are specified during configuration are valid for use on a Solaris system. zonecfg checks that a zone path has been specified and that for each resource, all of the required properties have been specified.

The zonecfg Command
The zonecfg command is used to configure a zone. It can be run interactively, from the command line, or using a command file. A command file can be created using the export subcommand of zonecfg. zonecfg carries out the following operations:

  • Create, or delete, a zone configuration
  • Add, or remove, resources in a configuration
  • Set the properties for a resource in the configuration
  • Query and verify a configuration
  • Commit (save) a configuration
  • Revert to a previous configuration
  • Exit from a zonecfg session

When you enter zonecfg in interactive mode, the prompt changes to show that you are in a zonecfg session. If you are configuring a zone called apps, then the prompt changes as follows:

# zonecfg -z apps
zonecfg:apps>

This is known as the global scope of zonecfg. When you configure a specific resource, the prompt changes to include the resource being configured. The command scope also changes so that you are limited to entering commands relevant to the current scope. You have to enter an end command to return to the global scope.

Viewing the Zone Configuration
The zone configuration data can be viewed in two ways:

  • Viewing a file
  • Using the export option of zonecfg

Both of these are described here.
The zone configuration file is held in the /etc/zones directory and is stored as an XML file. To view the configuration for a zone named testzone, you would enter:

# cat /etc/zones/testzone.xml

The alternative method of viewing the configuration is to use the zonecfg command with the export option. The following example shows how to export the configuration data for zone testzone:

# zonecfg -z testzone export

By default, the output goes to stdout, but this can be changed by entering a filename instead. If you save the configuration to a file, then it can be used at a later date, if required, as a command file input to the zonecfg command. This option is useful if you have to recreate the zone for any reason.
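As a sketch, you might save the configuration to a file and later feed that file back to zonecfg to recreate the zone (the file name is illustrative):

```shell
# Save the configuration of testzone to a command file
zonecfg -z testzone export -f /var/tmp/testzone.cfg
# Later, recreate the zone configuration from the saved command file
zonecfg -z testzone -f /var/tmp/testzone.cfg
```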

Installing a Zone
When a zone has been configured, the next step in its creation is to install it. This has the effect of copying the necessary files from the global zone and populating the product database for the zone. You should verify a configuration before it is installed to ensure that everything is set up correctly.

To verify the zone configuration for a zone named testzone enter the following command:

zoneadm -z testzone verify

If, for example, the zonepath does not exist, or it has not had the correct permissions set, then the verify operation will generate a suitable error message.

When the zone has been successfully verified it can be installed, as follows:

zoneadm -z testzone install

A number of status and progress messages are displayed on the screen as the files are copied and the package database is updated.

Notice that whilst the zone is installing, its state will change from configured to incomplete. The state will change to installed when the install operation has completed.

Booting a Zone
Before issuing the boot command, a zone needs to be transitioned to the ready state. This can be done using the zoneadm command as follows:

zoneadm -z testzone ready

The effect of the ready command is to establish the virtual platform, plumb the network interface and mount any file systems. At this point though, there are no processes running.

To boot the zone testzone, issue the following command:

zoneadm -z testzone boot

Confirm that the zone has booted successfully by listing the zone using the zoneadm command as follows:

zoneadm -z testzone list -v

The state of the zone will have changed to running if the boot operation was successful.

No Need to Ready - If you want to boot a zone, then there is no need to transition to the ready state. The boot operation does this automatically prior to booting the zone.

Halting a Zone
To shut down a zone, issue the halt option of the zoneadm command as shown in the following:

zoneadm -z testzone halt

The zone state changes from running to installed when a zone is halted.

Rebooting a Zone
A zone can be rebooted at any time without affecting any other zone on the system. The reboot option of the zoneadm command is used to reboot a zone as shown here to reboot the zone testzone:
zoneadm -z testzone reboot

The state of the zone should be running when the reboot operation has completed.

Uninstalling a Zone
When a zone is no longer required, it should be uninstalled before it is deleted. In order to uninstall a zone, it must first be halted. When this has been done, issue the uninstall command as shown here to uninstall the zone testzone:
zoneadm -z testzone uninstall -F

The -F option forces the command to execute without confirmation. If you omit this option, then you will be asked to confirm that you wish to uninstall the zone.

Deleting a Zone
When a zone has been successfully uninstalled, its configuration can be deleted from the system. Enter the zonecfg command as shown here to delete the zone testzone from the system:

zonecfg -z testzone delete -F

The -F option forces the command to execute without confirmation. If you omit this option, then you will be asked to confirm that you wish to delete the zone configuration.

Zone Login
When a zone is operational and running, the normal network access commands, such as telnet, rlogin, and ssh, can be used to access it, but a non-global zone can also be accessed from the global zone using the zlogin command. This is necessary for administration purposes and to access the console session for a zone. Only the superuser (root), or a role with the RBAC profile "Zone Management", can use the zlogin command from the global zone.

For more information on RBAC (Role-Based Access Control), my Solaris 10 System Administration Exam Prep 2 guide describes configuring RBAC in detail.

The syntax for the zlogin command is as follows:

zlogin [-CE] [-e c] [-l username] zonename
zlogin [-ES] [-e c] [-l username] zonename utility [argument...]

zlogin works in three modes:
  • Interactive - where a login session is established from the global zone.
  • Non-interactive - where a single command or utility can be executed. Upon completion of the command (or utility), the session is automatically closed.
  • Console - where a console session is established for administration purposes.

Initial Zone Login
When a zone has been installed and is booted for the first time, it is still not fully operational because the internal zone configuration needs to be completed. This includes setting the following:

  • Language
  • Terminal Type
  • Host name
  • Security Policy
  • Name Service
  • Time Zone
  • Root Password

These settings are configured interactively the first time you use zlogin to connect to the zone console, similar to when you first install the Solaris 10 Operating Environment. The zone then reboots to implement the changes. When this reboot completes, the zone is fully operational.

Initial Console Login - You must complete the configuration by establishing a console connection. If this is not completed, the zone will not be operational and users will be unable to connect to the zone across the network.

Instead of completing the zone configuration interactively, you can pre-configure the required options in a sysidcfg file. This enables the zone configuration to be completed without intervention, and is described in detail in my Solaris 10 System Administration Exam Prep 2 guide.
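A minimal sysidcfg file might look like the following; the values are illustrative, and root_password takes an encrypted password string. For a zone, the file is placed under the zone's root at <zonepath>/root/etc/sysidcfg before the first boot:

```
system_locale=C
terminal=vt100
network_interface=primary { hostname=testzone }
security_policy=NONE
name_service=NONE
timezone=US/Eastern
root_password=m4QeRaup
```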

Logging in to the Zone Console
You can access the console of a zone by using the zlogin -C command. If you are completing a hands-off configuration, connect to the console before the initial boot and you will see the boot messages appear in the console as well as the reboot after the sysidcfg file has been referenced.

The following session shows what happens when the zone testzone is booted for the first time, using a sysidcfg file:

# zlogin -C testzone

[NOTICE: Zone readied]
[NOTICE: Zone booting up]

SunOS Release 5.10 Version Generic 64-bit
Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: testzone
Loading smf(5) service descriptions: 100/100
Creating new rsa public/private host key pair
Creating new dsa public/private host key pair


rebooting system due to change(s) in /etc/default/init

[NOTICE: Zone rebooting]

SunOS Release 5.10 Version Generic 64-bit
Copyright 1983-2005 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Hostname: testzone

testzone console login:

Logging in to a Zone
The superuser (root), or a role with the RBAC profile "Zone Management", can log in directly to a zone from the global zone without having to supply a password. The system administrator uses the zlogin command; the following example shows a login to the testzone zone, where the zonename command is run and then the connection is closed:

# zlogin testzone
[Connected to zone 'testzone' pts/6]
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
# zonename
testzone
# exit

[Connection to zone 'testzone' pts/6 closed]

Running a Command in a Zone
In the previous section an interactive login to a zone was shown. Here, a non-interactive login is used to execute a single command; the connection is disconnected automatically as soon as the command completes. The following example shows how this works. First, the hostname command is run, demonstrating that we are on the host called global; then a non-interactive login to the testzone zone runs the zonename command and exits automatically. Finally, the same hostname command shows we are back on the host called global:

# hostname
global
# zlogin testzone zonename
testzone
# hostname
global

Creating a Zone
Now that we have seen the technicalities of configuring a zone, let's put it all together and create a zone. Step by Step 1.1 configures the zone named testzone, installs it and boots it. Finally, we will list the zone configuration data.

STEP BY STEP
1.1 Creating a zone
1. Perform the initial configuration on a zone named testzone. The zonepath will be /export/zones/testzone and the IP address will be 192.168.0.43. This zone will be a sparse root zone with no additional file systems mounted from the global zone. Create the zonepath and assign the correct permission (700) to the directory:

# mkdir -p /export/zones/testzone
# chmod 700 /export/zones/testzone

2. Enter the zonecfg command to configure the new zone.

# zonecfg -z testzone
testzone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:testzone> create
zonecfg:testzone> set zonepath=/export/zones/testzone
zonecfg:testzone> set autoboot=true
zonecfg:testzone> add net
zonecfg:testzone:net> set physical=hme0
zonecfg:testzone:net> set address=192.168.0.43
zonecfg:testzone:net> end
zonecfg:testzone> add rctl
zonecfg:testzone:rctl> set name=zone.cpu-shares
zonecfg:testzone:rctl> add value (priv=privileged,limit=20,action=none)
zonecfg:testzone:rctl> end
zonecfg:testzone> add attr
zonecfg:testzone:attr> set name=comment
zonecfg:testzone:attr> set type=string
zonecfg:testzone:attr> set value="First zone - Testzone"
zonecfg:testzone:attr> end

3. Having entered the initial configuration information, use a separate login session to check to see if the zone exists using the zoneadm command.

# zoneadm -z testzone list -v
zoneadm: testzone: No such zone configured

At this point the zone configuration has not been committed and saved to disk, so it only exists in memory.

4. Verify and save the zone configuration. Exit zonecfg and then check to see if the zone exists using the zoneadm command.

zonecfg:testzone> verify
zonecfg:testzone> commit
zonecfg:testzone> exit
# zoneadm -z testzone list -v

ID NAME             STATUS         PATH                          
   - testzone       configured     /export/zones/testzone
Notice that the zone now exists and that it has been placed in the configured state.

5. Use the zoneadm command to verify that the zone is correctly configured and ready to be installed:

# zoneadm -z testzone verify

6. Install the zone:

# zoneadm -z testzone install
Preparing to install zone <testzone>.
Creating list of files to copy from the global zone.
Copying <77108> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1141> packages on the zone.
Initialized <1141> packages on zone.
Zone <testzone> is initialized.
The file </export/zones/testzone/root/var/sadm/system/logs/install_log> contains a log of the zone installation.

7. The zone is now ready to be used operationally. Change the state to ready and verify that it has changed, then boot the zone and check that the state has changed to running.

# zoneadm -z testzone ready
# zoneadm -z testzone list -v

ID NAME             STATUS         PATH                          
   7 testzone       ready          /export/zones/testzone
# zoneadm -z testzone boot
# zoneadm -z testzone list -v
ID NAME             STATUS         PATH                          
   7 testzone       running        /export/zones/testzone

8. View the configuration data by exporting the configuration to stdout.

# zonecfg -z testzone export
create -b
set zonepath=/export/zones/testzone
set autoboot=true
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr
end
add net
set address=192.168.0.43
set physical=hme0
end
add rctl
set name=zone.cpu-shares
add value (priv=privileged,limit=20,action=none)
end
add attr
set name=comment
set type=string
set value="First zone - Testzone"
end
Notice the four default inherit-pkg-dir entries showing that this is a sparse root zone.

Summary
The Solaris Zones facility is a major step forward in the Solaris Operating Environment. It allows virtualization of operating system services so that applications can run in an isolated and secure environment. Previously, this functionality was only available on high-end, extremely expensive servers. One of the advantages of zones is that multiple versions of the same application can be run on the same physical system, independently of each other. Solaris Zones also protect the system as a whole from a single failing application exhausting all CPU or memory resources.
This article has described the concepts of Solaris zones and the zone components as well as the types of zone that can be configured.

For more detailed information on configuring and managing zones, please refer to the Zones chapter in my Solaris 10 System Administration Exam Prep 2 guide (ISBN 0789734613).

---------------------------------------------------

About the author:
Bill Calkins is the owner of Pyramid Consulting, a computer training and consulting firm specializing in the implementation and administration of open systems. He works as a consultant with Sun Microsystems and has contributed extensively to the Solaris certification program. Bill also provides online and classroom-style UNIX training programs. He has more than 20 years of experience in UNIX system administration, consulting, and training at more than 200 different companies, and has authored several books on Solaris. If you would like to contact Bill, please email him at billcalkins@stsolutions.com.