Renaming a Solaris zone

I needed to rename a zone on a Solaris 10 system earlier this week and here are some notes on how I did it.

Renaming a zone essentially boils down to renaming a few files and replacing strings in a series of (mostly XML) configuration files. All of the steps below were carried out from the global zone on the system in question.

1. Shut down the zone to be renamed

# zoneadm -z <oldname> halt
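
If you are unsure of the zone's current name or state, listing all configured zones from the global zone first is a harmless sanity check:

# zoneadm list -cv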

2. Modify the configuration files that store the relevant zone configuration

# vi /etc/zones/index
Change all occurrences of <oldname> to <newname> as appropriate
# cd /etc/zones
# mv <oldname>.xml <newname>.xml
# vi <newname>.xml

Change all occurrences of <oldname> to <newname> as appropriate
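
If you would rather not make these edits by hand, a non-interactive sketch using sed should achieve the same substitutions (this assumes <oldname> does not occur as a substring of any other entry in these files, so review the results before moving on):

# cd /etc/zones
# sed 's/<oldname>/<newname>/g' index > index.new && mv index.new index
# mv <oldname>.xml <newname>.xml
# sed 's/<oldname>/<newname>/g' <newname>.xml > <newname>.xml.new && mv <newname>.xml.new <newname>.xml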

3. Rename the main zone path for the zone

# cd /export/zones
# mv <oldname> <newname>

Your zone path may be different from the one shown above.

4. Modify (network) configuration files of new zone

Depending on the applications installed in your zone, there may be several files you need to update. The essential networking files are:

# cd /export/zones/<newname>/root
# vi etc/hosts
# vi etc/nodename

Other files containing your old host/zone name can be found using this command:

# cd /export/zones/<newname>/root/etc
# find . -type f | xargs grep <oldname>
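
For illustration, the two essential files typically end up looking something like this after the edits (the address and the loghost alias below are only placeholders; use your zone's real entries):

# cat etc/nodename
<newname>
# grep <newname> etc/hosts
192.168.1.10   <newname>   loghost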

5. Boot the new zone again
# zoneadm -z <newname> boot
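
To confirm the rename took effect, a quick listing from the global zone should now show the zone running under its new name:

# zoneadm list -v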

Renaming a Solaris system

I’ve had to change the hostname of a number of Solaris systems over the past few weeks, some of which were (sparse) Solaris zones. Here are some notes for my/your future reference:

File                   Global Zones   Non-Global Zones
/etc/nodename          Yes            Yes
/etc/hostname.ifname   Yes            No
/etc/hosts             Yes            Yes
/etc/inet/hosts        Yes            Yes
/etc/inet/ipnodes      Yes            Yes

Notes

  1. You will need to replace the ifname above with the name of the appropriate interface on your system.
  2. Depending on your system configuration, I may of course be missing some files. However, the above worked for me!
  3. None of the systems in question were running IPv6.
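
As a rough sketch, the edits on a global zone amount to something like the following (<newname> and bge0 are placeholders; check your /etc/hostname.* files for extra settings such as netmasks before overwriting anything, and note that /etc/hosts is normally a symlink to /etc/inet/hosts):

# echo <newname> > /etc/nodename
# echo <newname> > /etc/hostname.bge0
# vi /etc/inet/hosts
# vi /etc/inet/ipnodes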

Cloning a Solaris Zone

I tried out cloning a Solaris Zone today and it was a breeze: so much easier (and far, far quicker) than creating another zone from scratch and re-installing the same users, packages, port lock-downs, etc. Here are my notes from the exercise:

Existing System Setup

SunFire T1000 with a single sparse root zone (zone1) installed in /export/zones/zone1. The objective is to create a clone of zone1 called zone2 but using a different IP address and physical network port. I am not using any ZFS datasets (yet).

Procedure

1. Export the configuration of the zone you want to clone/copy

# zonecfg -z zone1 export > zone2.cfg

2. Change the details of the new zone that differ from the existing one (e.g. IP address, data set names, network interface etc.)

# vi zone2.cfg
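
For reference, the exported file is just a list of zonecfg commands, so these are plain-text edits. The lines you will typically change are the zone path plus the address and interface inside the existing add net ... end block; the values below are purely illustrative:

set zonepath=/export/zones/zone2
set address=192.168.1.20
set physical=bge1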

3. Create a new (empty, unconfigured) zone in the usual manner based on this configuration file

# zonecfg -z zone2 -f zone2.cfg
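
Before moving on, the new zone's configuration can be sanity-checked with zonecfg's info subcommand, and it should appear as configured in a zone listing:

# zonecfg -z zone2 info
# zoneadm list -cv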

4. Ensure that the zone you intend to clone/copy is not running

# zoneadm -z zone1 halt

5. Clone the existing zone

# zoneadm -z zone2 clone zone1
Cloning zonepath /export/zones/zone1...

This took around 5 minutes to clone a 1GB zone (see notes below).

6. Verify both zones are correctly installed

# zoneadm list -vi
  ID NAME     STATUS      PATH
   0 global   running     /
   - zone1    installed   /export/zones/zone1
   - zone2    installed   /export/zones/zone2

7. Boot the zones again (and reverify correct status)

# zoneadm -z zone1 boot
# zoneadm -z zone2 boot
# zoneadm list -vi
  ID NAME     STATUS      PATH
   0 global   running     /
   5 zone1    running     /export/zones/zone1
   6 zone2    running     /export/zones/zone2

8. Configure the new zone via its console (very important)

# zlogin -C zone2

The above step is required to configure the locale, language and IP settings of the new zone. It also creates the system-wide RSA key pairs for the new zone, without which you cannot SSH into it. If this step is not done, many of the services on the new zone will not start and you may observe /etc/.UNCONFIGURED errors in certain log files.
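
If you want to avoid the interactive prompts altogether, Solaris's sysidcfg mechanism can pre-answer them: drop a sysidcfg file into the cloned zone's /etc before its first boot (i.e. /export/zones/zone2/root/etc/sysidcfg). The sketch below is untested boilerplate for a shared-IP zone with placeholder values; the exact keywords required can vary between Solaris 10 updates:

system_locale=C
terminal=vt100
network_interface=NONE {
    hostname=zone2
}
security_policy=NONE
name_service=NONE
timezone=US/Eastern
root_password=<encrypted-password-hash>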

Summary

You should now be able to log into the new zone, either from the global zone using zlogin or directly via ssh (if configured). All of the software that was installed in the existing zone was present and accounted for in the new zone, including SMF services, user configuration, security settings, etc.

Notes

If you are using ZFS datasets in your zones, then you may see the following error when trying to execute the clone command for the newly created zone:

Could not verify zfs dataset tank/xxxxx: mountpoint cannot be inherited
zoneadm: zone xxxxx failed to verify

To resolve this, you need to ensure that the mountpoint for the dataset (i.e. the ZFS partition) being used has been explicitly set to none. This has happened to me a number of times, even though the output of zfs list in the global zone suggested the dataset did not have a mount point, and in each case the following command did the trick:

# zfs set mountpoint=none tank/xxxxx
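
To double-check the property afterwards, zfs get can be used; the VALUE column should now read none, with a SOURCE of local:

# zfs get mountpoint tank/xxxxx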

Easy!