Introduction


Now that OpenSolaris 2009.06 is available on Amazon EC2, I have been interested in setting up zones within an OpenSolaris EC2 instance using the virtual networking features provided by Crossbow.

In this tutorial I provide a step-by-step guide to getting this environment up and running. We use Crossbow together with NAT to build a complete virtual network connecting multiple zones within an OpenSolaris EC2 host.

Much of the networking information used in this tutorial is taken directly from the excellent article Private virtual networks for Solaris xVM and Zones with Crossbow by Nicolas Droux.

This is Part 1 of the tutorial series. In Part 2 we will explain how to use ZFS and AWS snapshots to back up the zones. In Part 3 we will explain how to save a fully configured environment using an AMI and EBS snapshots, which can then be cloned and brought up and running in minutes.



Prerequisites

  • Basic understanding of AWS EC2, including managing AMIs, launching instances, managing EBS volumes and snapshots, and firewall management.


  • Basic understanding of OpenSolaris, including system setup, networking, and zone management.


Building the EC2 environment



For this tutorial, I used the OpenSolaris 2009.06 AMI ami-e56e8f8c. I also created three EBS volumes: one for shared software, one for zone storage, and one for zone backups. In Part 2 of this tutorial, I will explain the use of ZFS snapshots and EBS snapshots for the purposes of backing up the zones. The EC2 environment is displayed below.






A summary of the steps is as follows:

  • Create the EBS volumes and attach them to the instance.

  • Create ZFS pools, one for the shared software, one for zones, and one for zones backup (both steps are sketched below).
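
A rough sketch of these steps follows. The EC2 API tool invocations, volume and instance IDs, size, availability zone, and the cXdY device names are all illustrative assumptions; check the real device names (e.g. with format(1M)) after attaching the volumes.

$ ec2-create-volume --size 10 -z us-east-1b             # one per volume; size/zone hypothetical
$ ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdf

root:~# zpool create sharedsw c7d2                      # device names are examples only
root:~# zpool create zones c7d3
root:~# zpool create zones-backup c7d4
root:~# zfs create -o mountpoint=/opt sharedsw/opt      # shared software mounted at /opt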



When finished, I have an OpenSolaris 2009.06 EC2 instance running with three ZFS file systems on top of three EBS volumes, as shown below.


root:~# zfs list -r sharedsw
NAME           USED  AVAIL  REFER  MOUNTPOINT
sharedsw/opt  3.41G  4.12G  3.41G  /opt


root:~# zfs list -r zones
NAME    USED  AVAIL  REFER  MOUNTPOINT
zones    70K  7.81G    19K  /zones


root:~# zfs list -r zones-backup
NAME           USED  AVAIL  REFER  MOUNTPOINT
zones-backup    70K  7.81G    19K  /zones-backup


Building the private network

The next task is to create the virtual network as shown in the diagram below.





We create an etherstub and three VNICs for our virtual network.

root:~# dladm show-phys
LINK         MEDIA                STATE      SPEED  DUPLEX    DEVICE
xnf0         Ethernet             up         1000   full      xnf0
root:~#
root:~# dladm create-etherstub etherstub0
root:~# dladm create-vnic -l etherstub0 vnic0
root:~# dladm create-vnic -l etherstub0 vnic1
root:~# dladm create-vnic -l etherstub0 vnic2
root:~#
root:~# dladm show-etherstub
LINK
etherstub0
root:~# dladm show-vnic
LINK         OVER         SPEED  MACADDRESS        MACADDRTYPE         VID
vnic0        etherstub0   0      2:8:20:20:10:b8   random              0
vnic1        etherstub0   0      2:8:20:c2:70:f6   random              0
vnic2        etherstub0   0      2:8:20:15:35:ca   random              0


Assign a static IP address to vnic0 in the global zone:

root:~# ifconfig vnic0 plumb
root:~# ifconfig vnic0 inet 192.168.0.1 up
root:~# ifconfig vnic0
vnic0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 9000 index 3
        inet 192.168.0.1 netmask ffffff00 broadcast 192.168.0.255
        ether 2:8:20:20:10:b8


Note that the usual configuration files (e.g. /etc/hostname.<interface>) must be populated for the configuration to persist across reboots. We must also enable IPv4 forwarding in the global zone. Run routeadm(1M) to display the current configuration, and if "IPv4 forwarding" is disabled, enable it with the following command:

root:~# routeadm -u -e ipv4-forwarding
root:~# routeadm
              Configuration   Current              Current
                     Option   Configuration        System State
---------------------------------------------------------------
               IPv4 routing   disabled             disabled
               IPv6 routing   disabled             disabled
            IPv4 forwarding   enabled              enabled
            IPv6 forwarding   disabled             disabled

           Routing services   "route:default ripng:default"

Routing daemons:

                      STATE   FMRI
                   disabled   svc:/network/routing/route:default
                   disabled   svc:/network/routing/rdisc:default
                     online   svc:/network/routing/ndp:default
                   disabled   svc:/network/routing/legacy-routing:ipv4
                   disabled   svc:/network/routing/legacy-routing:ipv6
                   disabled   svc:/network/routing/ripng:default
root:~#
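
To make the vnic0 configuration persist across reboots (per the note above), the classic approach is to populate /etc/hostname.vnic0 and /etc/netmasks; a minimal sketch:

root:~# echo "192.168.0.1" > /etc/hostname.vnic0            # interface is plumbed and configured at boot
root:~# echo "192.168.0.0 255.255.255.0" >> /etc/netmasks   # netmask for the 192.168.0.0 subnet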




Next, we enable NAT on the xnf0 interface. We also want to be able to connect to the zones from the public internet, so we enable port forwarding. On EC2, make sure you also open these ports in the EC2 firewall, following best practices (a sketch follows the NAT configuration below).


root:~# cat /etc/ipf/ipnat.conf
map xnf0 192.168.0.0/24 -> 0/32 portmap tcp/udp auto
map xnf0 192.168.0.0/24 -> 0/32

rdr xnf0 0.0.0.0/0 port 22101 -> 192.168.0.101 port 22
rdr xnf0 0.0.0.0/0 port 22102 -> 192.168.0.102 port 22
rdr xnf0 0.0.0.0/0 port 8081 -> 192.168.0.101 port 80
rdr xnf0 0.0.0.0/0 port 8082 -> 192.168.0.102 port 80
rdr xnf0 0.0.0.0/0 port 40443 -> 192.168.0.102 port 443

root:~# svcadm enable network/ipfilter
root:~# ipnat -l
List of active MAP/Redirect filters:
map xnf0 192.168.0.0/24 -> 0.0.0.0/32 portmap tcp/udp auto
map xnf0 192.168.0.0/24 -> 0.0.0.0/32
rdr xnf0 0.0.0.0/0 port 22101 -> 192.168.0.101 port 22 tcp
rdr xnf0 0.0.0.0/0 port 22102 -> 192.168.0.102 port 22 tcp
rdr xnf0 0.0.0.0/0 port 8081 -> 192.168.0.101 port 80 tcp
rdr xnf0 0.0.0.0/0 port 8082 -> 192.168.0.102 port 80 tcp
rdr xnf0 0.0.0.0/0 port 40443 -> 192.168.0.102 port 443 tcp

List of active sessions:
root:~#
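
As noted above, the redirected ports must also be opened in the EC2 firewall. With the EC2 API tools of that era this could be done roughly as follows; the "default" security group name is an assumption, and best practice is to restrict the source (-s) to trusted address ranges where possible:

$ ec2-authorize default -P tcp -p 22101 -s 0.0.0.0/0    # ssh to zone1
$ ec2-authorize default -P tcp -p 22102 -s 0.0.0.0/0    # ssh to zone2
$ ec2-authorize default -P tcp -p 8081 -s 0.0.0.0/0     # http to zone1
$ ec2-authorize default -P tcp -p 8082 -s 0.0.0.0/0     # http to zone2
$ ec2-authorize default -P tcp -p 40443 -s 0.0.0.0/0    # https to zone2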




Creating the zones



Create and install zone1



root:~# zonecfg -z zone1
zone1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> set ip-type=exclusive
zonecfg:zone1> add net
zonecfg:zone1:net> set physical=vnic1
zonecfg:zone1:net> end
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/opt
zonecfg:zone1:fs> set special=/opt
zonecfg:zone1:fs> set type=lofs
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
root:~#
root:~# zoneadm -z zone1 install
A ZFS file system has been created for this zone.
Publisher: Using opensolaris.org (http://pkg.opensolaris.org/release/).
Image: Preparing at /zones/zone1/root.
Cache: Using /var/pkg/download.
Sanity Check: Looking for 'entire' incorporation.
Installing: Core System (output follows)
Postinstall: Copying SMF seed repository ... done.
Postinstall: Applying workarounds.
Done: Installation completed in 428.065 seconds.

Next Steps: Boot the zone, then log into the zone console
(zlogin -C) to complete the configuration process


root:~#





Create and install zone2

root:~#
root:~# zonecfg -z zone2
zone2: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone2> create
zonecfg:zone2> set zonepath=/zones/zone2
zonecfg:zone2> set ip-type=exclusive
zonecfg:zone2> add net
zonecfg:zone2:net> set physical=vnic2
zonecfg:zone2:net> end
zonecfg:zone2> add fs
zonecfg:zone2:fs> set dir=/opt
zonecfg:zone2:fs> set special=/opt
zonecfg:zone2:fs> set type=lofs
zonecfg:zone2:fs> end
zonecfg:zone2> verify
zonecfg:zone2> commit
zonecfg:zone2> exit
root:~#
root:~# zoneadm -z zone2 install
A ZFS file system has been created for this zone.
Publisher: Using opensolaris.org (http://pkg.opensolaris.org/release/).
Image: Preparing at /zones/zone2/root.
Cache: Using /var/pkg/download.
Sanity Check: Looking for 'entire' incorporation.
Installing: Core System (output follows)
Postinstall: Copying SMF seed repository ... done.
Postinstall: Applying workarounds.
Done: Installation completed in 125.975 seconds.

Next Steps: Boot the zone, then log into the zone console
(zlogin -C) to complete the configuration process
root:~#

Zone configuration

Now that the zones are installed, we are ready to boot them and perform system configuration. First, we boot each zone:


root:~# zoneadm -z zone1 boot
root:~# zoneadm -z zone2 boot

The next step is to connect to the console of each zone and complete system configuration. Connect to the console with "zlogin -C zone_name", for example: zlogin -C zone1. The configuration parameters that I used are listed below.


zone1
=====
Host name for vnic1 : zone1
IP address for vnic1 : 192.168.0.101
System part of a subnet : Yes
Netmask for vnic1 : 255.255.255.0
Enable IPv6 for vnic1 : No
Default Route for vnic1 : Specify one
Router IP Address for vnic1: 192.168.0.1
Name service : DNS
DNS Domain name : compute-1.internal
DNS Server's IP address : 172.16.0.23
NFSv4 Domain Name : >

zone2
=====
Host name for vnic2 : zone2
IP address for vnic2 : 192.168.0.102
System part of a subnet : Yes
Netmask for vnic2 : 255.255.255.0
Enable IPv6 for vnic2 : No
Default Route for vnic2 : Specify one
Router IP Address for vnic2: 192.168.0.1
Name service : DNS
DNS Domain name : compute-1.internal
DNS Server's IP address : 172.16.0.23
NFSv4 Domain Name : >
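
With configuration complete, a quick sanity check from the global zone should show both zones answering over the etherstub (expected output sketched):

root:~# ping 192.168.0.101
192.168.0.101 is alive
root:~# ping 192.168.0.102
192.168.0.102 is alive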

Test connection to the zones

Once the zones are running and configured, we should be able to connect to them from the "outside". This test depends on the EC2 firewall being set up correctly. In the example below, we connect to zone1 via port 22101.


-bash-3.2$ ssh ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com -p 22101

login as: username
Using keyboard-interactive authentication.
Password:
Last login: Sun Sep 6 22:15:38 2009 from c-24-7-37-94.hs
Sun Microsystems Inc. SunOS 5.11 snv_111b November 2008


-bash-3.2$ hostname
zone1
-bash-3.2$


References

  • Private virtual networks for Solaris xVM and Zones with Crossbow, by Nicolas Droux.