Solaris 10 10/09 (u8) is now available for download. DVD ISO images (full, and in segments that can be reassembled after download) are available for both SPARC and x86.

Here are a few of the new features in this release that caught my attention.

Packaging and Patching

Improved performance of SVR4 package commands: Improvements have been made to the SVR4 package commands (pkgadd, pkgrm, pkginfo, et al.). The impact of these can be seen in drastically reduced zone installation times.

Zones parallel patching: Before Solaris 10 10/09 the patching process was single threaded, which could lead to prolonged patching times on a system with several nonglobal zones. Starting with this update you can specify the number of threads to be used to patch a system with zones. Enable this feature by assigning a value to num_proc in /etc/patch/pdo.conf. The number of threads actually used is capped at 1.5 times the number of online CPUs, even if num_proc is set higher.
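With the feature enabled, /etc/patch/pdo.conf might contain something like the following (the value 8 is just an illustration; the effective thread count is still capped at 1.5 times the number of online CPUs):

```
# /etc/patch/pdo.conf
# Patch up to 8 non-global zones in parallel.
num_proc=8
```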

This feature is also available by applying Solaris patches 119254-66 (SPARC) or 119255-66 (x86).

ZFS Enhancements

Flash archive install into a ZFS root filesystem: ZFS support for the root file system was introduced in Solaris 10 10/08, but the install tools did not work with flash archives. Solaris 10 10/09 provides the ability to install a flash archive created from an existing ZFS root system. This capability is also provided by patches 119534-15 + 124630-26 (SPARC) or 119535-15 + 124631-27 (x86) that can be applied to a Solaris 10 10/08 or later system. There are still a few limitations, such as that the flash source must be from a ZFS root system and you cannot use differential archives. More information can be found in Installing a ZFS Root File System (Flash Archive Installation).
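A minimal sketch of creating the archive, assuming a source system already running a ZFS root (the archive name and NFS path below are made up for illustration):

```
# On the ZFS-root source system, create the flash archive
# (-n sets the name recorded inside the archive)
flarcreate -n zfs10u8 /net/installserver/export/flar/zfs10u8.flar
```

The resulting archive can then be used as the install source for a new ZFS-root system; remember that differential archives are not supported.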

Set ZFS properties on the initial zpool file system: Prior to Solaris 10 10/09, ZFS file system properties could only be set once the initial file system was created. This made it impossible, for example, to avoid a mountpoint collision with an existing mounted file system, or to have replication or compression in effect from the moment the pool is created. In Solaris 10 10/09 you can specify any ZFS file system property at pool creation time using zpool create -O (one -O per property).

# zpool create -O mountpoint=/data -O copies=3 -O compression=on datapool c1t1d0 c1t2d0

ZFS Read Cache (L2ARC): You now have the ability to add second-level read caches to a ZFS zpool. This can improve the read performance of ZFS as well as reduce the ZFS memory footprint.

L2ARC devices are added as cache vdevs to a pool. In the following example we will create a pool of 2 mirrored devices, 2 cache devices and a spare.

# zpool create datapool mirror c1t1d0 c1t2d0 cache c1t3d0 c1t4d0 spare c1t5d0
# zpool status datapool
  pool: datapool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        datapool    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
        cache
          c1t3d0    ONLINE       0     0     0
          c1t4d0    ONLINE       0     0     0
        spares
          c1t5d0    AVAIL

errors: No known data errors

So what do ZFS cache devices do? Rather than go into a lengthy explanation of the L2ARC, I would rather refer you to Fishworks developer Brendan Gregg's excellent treatment of the subject.

Unlike the intent log (ZIL), L2ARC cache devices can be added and removed dynamically.

# zpool remove datapool c1t3d0
# zpool remove datapool c1t4d0
# zpool status datapool
  pool: datapool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        datapool    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
        spares
          c1t5d0    AVAIL

errors: No known data errors
# zpool add datapool cache c1t3d0
# zpool status datapool
  pool: datapool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        datapool    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
        cache
          c1t3d0    ONLINE       0     0     0
        spares
          c1t5d0    AVAIL

errors: No known data errors

New cache control properties: Two new ZFS properties are introduced with Solaris 10 10/09. These control what is stored (nothing, data + metadata, or metadata only) in the ARC (memory) and L2ARC (external) caches. These new properties are
  • primarycache - controls what is stored in the memory-resident ARC
  • secondarycache - controls what is stored in the L2ARC
and they can take the values
  • none - the cache is not used
  • metadata - only file system metadata is cached
  • all - both file system data and metadata are stored in the associated cache
# zpool create -O primarycache=metadata -O secondarycache=all datapool c1t1d0 c1t2d0 cache c1t3d0

Some workloads, such as databases, perform better or make more efficient use of memory when ZFS is not competing with the caches that the applications maintain themselves.

User and group quotas: ZFS has always had quotas and reservations, but they were applied at the file system level. Achieving user or group quotas required creating additional file systems, which could make administration more complex. Starting with Solaris 10 10/09 you can apply both user and group quotas to a file system, much like you would with UFS. The zpool must be at version 15 or later and the ZFS file system at version 4 or later.

Let's create a file system and see if we are at the proper versions to set quotas.

# zfs create rpool/newdata
# chown bobn:local /rpool/newdata
# zpool get version rpool
NAME   PROPERTY  VALUE  SOURCE
rpool  version   18     default
# zpool upgrade -v
This system is currently running ZFS pool version 18.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds

For more information on a particular version, including supported releases, see:
'N' is the version number.
# zfs get version rpool/newdata
NAME           PROPERTY  VALUE  SOURCE
rpool/newdata  version   4      -
# zfs upgrade -v
The following filesystem versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS filesystem version
 2   Enhanced directory entries
 3   Case insensitive and File system unique identifier (FUID)
 4   userquota, groupquota properties

For more information on a particular version, including supported releases, see:
'N' is the version number.

Excellent. Now let's set a user and a group quota and see what happens. We'll set a group quota of 1GB and a user quota of 2GB.

# zfs set groupquota@local=1g rpool/newdata
# zfs set userquota@bobn=2g rpool/newdata
# su - bobn
% mkfile 500M /rpool/newdata/file1
% mkfile 500M /rpool/newdata/file2
% mkfile 500M /rpool/newdata/file3
file3: initialized 40370176 of 524288000 bytes: Disc quota exceeded

As expected, we have exceeded our group quota.
Let's change the group of the existing files and see if we can proceed to our user quota.

% rm /rpool/newdata/file3
% chgrp sales /rpool/newdata/file1 /rpool/newdata/file2
% mkfile 500M /rpool/newdata/file3
Could not open /rpool/newdata/file3: Disc quota exceeded

Whoa! What's going on here? Relax - ZFS does things asynchronously unless told otherwise, and it wasn't quite caught up with the current usage. We should have noticed this when the mkfile for file3 actually started. A good sync should do the trick.

% sync
% mkfile 500M /rpool/newdata/file3
% mkfile 500M /rpool/newdata/file4
% mkfile 500M /rpool/newdata/file5
/rpool/newdata/file5: initialized 140247040 of 524288000 bytes: Disc quota exceeded

Great. We now have user and group quotas. How can I find out what I have used against my quota? There are two new ZFS properties, userused and groupused, that show what a user or group is currently consuming.

% zfs get userquota@bobn,userused@bobn rpool/newdata
NAME           PROPERTY        VALUE  SOURCE
rpool/newdata  userquota@bobn  2G     local
rpool/newdata  userused@bobn   1.95G  local
% zfs get groupquota@local,groupused@local rpool/newdata
NAME           PROPERTY          VALUE  SOURCE
rpool/newdata  groupquota@local  1G     local
rpool/newdata  groupused@local   1000M  local
% zfs get groupquota@sales,groupused@sales rpool/newdata
NAME           PROPERTY          VALUE  SOURCE
rpool/newdata  groupquota@sales  none   local
rpool/newdata  groupused@sales   1000M  local
% zfs get groupquota@scooby,groupused@scooby rpool/newdata
NAME           PROPERTY           VALUE  SOURCE
rpool/newdata  groupquota@scooby  -      -
rpool/newdata  groupused@scooby   -      -

New space usage properties: Four new usage properties have been added to ZFS file systems.
  • usedbychildren (usedchild) - the amount of space used by all of the children of the specified dataset
  • usedbydataset (usedds) - the amount of space used by the dataset itself, which would be freed if the dataset were destroyed (after first destroying its snapshots and removing its refreservations)
  • usedbyrefreservation (usedrefreserv) - the amount of space that would be freed if the dataset's refreservation were removed
  • usedbysnapshots (usedsnap) - the total amount of space that would be freed if all of the snapshots of this dataset were destroyed
# zfs get all datapool | grep used
datapool  used                  5.39G  -
datapool  usedbysnapshots       19K    -
datapool  usedbydataset         26K    -
datapool  usedbychildren        5.39G  -
datapool  usedbyrefreservation  0      -

These new properties can also be viewed in a nice tabular form using zfs list -o space.

# zfs list -r -o space datapool
NAME           AVAIL  USED   USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
datapool        480M  5.39G       19K     26K              0      5.39G
datapool@now       -    19K         -       -              -          -
datapool/fs1    480M   400M         0    400M              0          0
datapool/fs2   1.47G  1.00G         0   1.00G              0          0
datapool/fs3    480M    21K         0     21K              0          0
datapool/fs4   2.47G      0         0       0              0          0
datapool/vol1  1.47G     1G         0     16K          1024M          0

Miscellaneous

Support for 2TB boot disks: Solaris 10 10/09 supports a disk Volume Table of Contents (VTOC) of up to 2TB in size. The previous maximum VTOC size was 1TB. On x86 systems you must be running Solaris with a 64-bit kernel and have at least 1GB of memory to use a VTOC larger than 1TB.

pcitool: A new command for Solaris that can assign interrupts to specific threads or display the current interrupt routing. This command is available for both SPARC and x86.

New iSCSI initiator SMF service: svc:/network/iscsi/initiator:default is a new Service Management Facility (SMF) service that controls discovery and enumeration of iSCSI devices early in the boot process. Other boot services that require iSCSI devices can add dependencies on it to ensure that the devices are available before they are needed.

Device Drivers
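As a sketch, a hypothetical service (svc:/application/mydb below is a made-up FMRI for illustration) could declare such a dependency with svccfg:

```
# Make a hypothetical service wait for iSCSI device enumeration at boot
svccfg -s svc:/application/mydb <<'EOF'
addpg iscsi-devices dependency
setprop iscsi-devices/grouping = astring: require_all
setprop iscsi-devices/restart_on = astring: none
setprop iscsi-devices/type = astring: service
setprop iscsi-devices/entities = fmri: svc:/network/iscsi/initiator:default
EOF
svcadm refresh svc:/application/mydb
```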

The following device drivers are either new to Solaris or have had some new features or chipsets added.
  • MPxIO support for the LSI 6180 Controller
  • LSI MPT 2.0 SAS 2.0 controllers (mpt_sas)
  • Broadcom NetXTreme II gigabit Ethernet (bcm5716c and bcm5716s) controllers
  • Interrupt remapping for Intel VT-x enabled processors
  • Support for SATA AHCI tape
  • Sun StorageTek 6Gb/s SAS RAID controller and LSI MegaRAID 92xx (mr_sas)
  • Intel 82598 and 82599 10Gb/s PCIe Ethernet controllers

Open Source Software Updates

The following open source packages have been updated for Solaris 10 10/09.
  • NTP 4.2.5
  • PostgreSQL versions 8.1.17, 8.2.13 and 8.3.7
  • Samba 3.0.35

For more information

A complete list of new features and changes can be found in the Solaris 10 10/09 Release Notes and the What's New in Solaris 10 10/09 documentation at

Technorati Tags: Sun, Solaris, patching, zfs