Cloning a Solaris Zone

I tried out cloning a Solaris Zone today and it was a breeze: much easier (and far quicker) than creating another zone from scratch and re-installing the same users, packages, port lock-downs and so on. Here are my notes from the exercise:

Existing System Setup

SunFire T1000 with a single sparse root zone (zone1) installed in /export/zones/zone1. The objective is to create a clone of zone1 called zone2 but using a different IP address and physical network port. I am not using any ZFS datasets (yet).


1. Export the configuration of the zone you want to clone/copy

# zonecfg -z zone1 export > zone2.cfg
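
For reference, the exported file is just a script of zonecfg commands. For a sparse root zone it looks something like this (the zonepath, address and interface name below are illustrative placeholders, not taken from the actual system):

```
create -b
set zonepath=/export/zones/zone1
set autoboot=true
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/usr
end
add net
set address=192.168.1.10
set physical=bge0
end
```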

2. Change any details of the new zone that differ from the existing one (IP address, dataset names, network interface, and so on)

# vi zone2.cfg
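
If the changes are mechanical, the hand edit can be replaced with a sed rewrite of the exported file. This is just a sketch under assumed values (the zonepath, address and interface names are examples, and the export is written to zone1.cfg first rather than straight to zone2.cfg; note that Solaris sed has no -i flag, so the result goes to a new file):

```shell
# Sample exported configuration standing in for the output of
# `zonecfg -z zone1 export`; on the real system you would use the
# actual exported file.
cat > zone1.cfg <<'EOF'
create -b
set zonepath=/export/zones/zone1
add net
set address=192.168.1.10
set physical=bge0
end
EOF

# Rewrite the zonepath, IP address and physical port for the new zone
sed -e 's|/export/zones/zone1|/export/zones/zone2|' \
    -e 's|address=192.168.1.10|address=192.168.1.11|' \
    -e 's|physical=bge0|physical=bge1|' \
    zone1.cfg > zone2.cfg

# Confirm the substitutions took effect
grep -E 'zonepath|address|physical' zone2.cfg
# → set zonepath=/export/zones/zone2
# → set address=192.168.1.11
# → set physical=bge1
```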

3. Create the new zone (empty, not yet installed) in the usual manner from this configuration file

# zonecfg -z zone2 -f zone2.cfg

4. Ensure that the zone you intend to clone/copy is not running

# zoneadm -z zone1 halt

5. Clone the existing zone

# zoneadm -z zone2 clone zone1
Cloning zonepath /export/zones/zone1...
This took around 5 minutes to clone a 1GB zone (see notes below)

6. Verify both zones are correctly installed

# zoneadm list -vi
0 global running /
- zone1 installed /export/zones/zone1
- zone2 installed /export/zones/zone2

7. Boot the zones again (and reverify correct status)

# zoneadm -z zone1 boot
# zoneadm -z zone2 boot
# zoneadm list -vi
0 global running /
5 zone1 running /export/zones/zone1
6 zone2 running /export/zones/zone2

8. Configure the new zone via its console (very important)

# zlogin -C zone2

The above step is required to configure the locale, language and IP settings of the new zone. It also creates the system-wide RSA key pairs for the new zone, without which you cannot SSH into it. If this step is not done, many of the services on the new zone will not start and you may observe /etc/.UNCONFIGURED errors in certain log files.
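
As an alternative to answering the questions interactively, Solaris lets you pre-seed the system identification by placing a sysidcfg file at <zonepath>/root/etc/sysidcfg (here that would be /export/zones/zone2/root/etc/sysidcfg) before the zone's first boot. A minimal sketch, in which every value is a placeholder to be replaced with your own:

```
system_locale=C
terminal=vt100
network_interface=primary {
    hostname=zone2
}
security_policy=NONE
name_service=NONE
timezone=GMT
root_password=<encrypted hash, e.g. copied from /etc/shadow>
```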


You should now be able to log into the new zone, either from the global zone using zlogin or directly via ssh (if configured). All of the software that was installed in the existing zone was present and accounted for in the new one, including SMF services, user configuration, security settings etc.


If you are using ZFS datasets in your zones, then you may see the following error when trying to execute the clone command for the newly created zone:

Could not verify zfs dataset tank/xxxxx: mountpoint cannot be inherited
zoneadm: zone xxxxx failed to verify

To resolve this, you need to ensure that the mountpoint of the dataset (i.e. the ZFS filesystem) being used has been explicitly set to none. This has happened to me a number of times, even when the output of a zfs list command in the global zone suggested the dataset had no mount point; in each case, the following command did the trick for me:

# zfs set mountpoint=none tank/xxxxx


15 thoughts on “Cloning a Solaris Zone”

  1. will the two zones not end up having the same ip address in this scenario? (assuming you’re on static ip addresses)

  2. Yes, they will. However, this can be changed either before booting the new zone (using the zonecfg command) or manually just after the new zone is booted (though it is best to shut down the copied zone first).

    This is not the only potential side-effect of cloning as other configuration files (e.g. MySQL) may have fixed references that need to be updated by hand.

    However, if circumstances permit it, cloning is still an excellent feature and can save a lot of time.

  3. Regarding ip address, surely just easier to modify it in the zone2.cfg file you created in step 1?

  4. Steve,

    You are correct, and I did allude to this in Step 2 (maybe it could have been clearer though). However, from memory, I think you still need to carry out the “zlogin -C” step to configure some of the other system-wide settings correctly.

  5. Will the clone be able to clone data on raw devices presented to zone1? In particular, a Sybase server with raw devices presented via VxVM?

  6. I would like to export the zone config from one host, and read it into another host. Then, I’d like to mount the zone on the new host, using SRDF SAN luns (i.e. EMC R2 luns that were split off), in case of a disaster. Will this work? Obviously other things need to be done, including changing IPs,..etc.

    I don’t want to clone the zone per se because that would require that the cloned zone have its own disk resources. I want to use the R2 luns, including the OS lun.

  7. Steve,
    Apologies for the late reply but I very much doubt that you can clone data on raw devices at the same time as cloning your zone. Of course I don’t know this for sure but am just surmising based on other knowledge about cloning and migrating zones.

    If you consider how zones are actually managed, they’re just a bunch of files in a certain directory, carefully managed by the global zone, so cloning a zone is really just a matter of making a copy of those files. Ask yourself whether you can do that with raw data in the same way.

    Also, I had problems recently when I tried to migrate a zone with a dataset configured from one system to another. I found that I had to unmount any datasets used by the zone before detaching it from the source system. Otherwise, it would look for (and possibly try to mount) a dataset of the same name on the target system.
