900% difference between backup and restore rates using ufsdump and ufsrestore and LTO3 drives


  1. 900% difference between backup and restore rates using ufsdump and ufsrestore and LTO3 drives

    Hi Gurus,

    Can someone please tell me if this is normal, or else give me clues as
    to what the possible problem is.

    I have a v240 that is backing up to an HP LTO3 tape drive via its
    onboard SCSI port, and I am getting transfer rates of between 9.5 MB/sec
    (for root) and 27 MB/sec for a database file system.

    Backup Command is

    /usr/sbin/ufsdump 0uf /dev/rmt/0cbn /

    Backup Tape Drive

    mt -f /dev/rmt/0cbn status

    HP Ultrium LTO 3 tape drive:
    sense key(0x0)= No Additional Sense residual= 0 retries= 0
    file no= 0 block no= 0

    I then take that tape to another site with a v240 with an equivalent CPU
    and memory config and a Sun-badged HP LTO3 connected via its internal
    SCSI card, boot from CD-ROM (latest Solaris 9 HW release), and get
    transfer rates of approx 1.2 MB/sec for root and 3.2 MB/sec for the
    database file system.

    Restore command is

    ufsrestore rf /dev/rmt/0cbn

    Recovery Tape Drive

    mt -f /dev/rmt/0cbn status

    HP Ultrium LTO 3 tape drive:
    sense key(0x6)= Unit Attention residual= 0 retries= 0
    file no= 0 block no= 0

    To me this looks like roughly a 900% speed difference between the
    disk-to-tape rate and the tape-to-disk rate, and that does not seem
    right.

    Comments please, as I need to reduce the recovery window if at all
    possible.


  2. Re: 900% difference between backup and restore rates using ufsdump and ufsrestore and LTO3 drives

    KiwiSpud wrote:
    > I then take that tape to another site with a v240 with an equivalent CPU
    > and memory config and a Sun-badged HP LTO3 connected via its internal
    > SCSI card, boot from CD-ROM (latest Solaris 9 HW release), and get
    > transfer rates of approx 1.2 MB/sec for root and 3.2 MB/sec for the
    > database file system.
    > [...]
    > To me this looks like roughly a 900% speed difference between the
    > disk-to-tape rate and the tape-to-disk rate, and that does not seem
    > right.


    Just a guess: the tape is too fast for your disks to keep up, and there
    is no intermediate buffer. As a consequence, the tape has to stop and
    rewind a little bit, which degrades performance dramatically.

    Try using mbuffer (http://www.maier-komor.de/mbuffer.html) and give your
    tape a buffer it can fill up at full speed. The buffer then can deliver
    data to ufsrestore at any rate. This way you will get much better
    performance, because disk speed can vary from very slow to high speed
    depending on whether a seek is necessary. Using mbuffer will also
    lengthen the life of your tape drive.

    HTH,
    Thomas

  3. Re: 900% difference between backup and restore rates using ufsdump and ufsrestore and LTO3 drives

    Thomas Maier-Komor wrote:
    > KiwiSpud wrote:
    >> I then take that tape to another site with a v240 with an equivalent CPU
    >> and memory config and a Sun-badged HP LTO3 connected via its internal
    >> SCSI card, boot from CD-ROM (latest Solaris 9 HW release), and get
    >> transfer rates of approx 1.2 MB/sec for root and 3.2 MB/sec for the
    >> database file system.


    > Just a guess: the tape is too fast for your disks to keep up, and there
    > is no intermediate buffer. As a consequence, the tape has to stop and
    > rewind a little bit, which degrades performance dramatically.


    That seems a wee bit unlikely, as I've yet to see a system where the
    disks weren't considerably faster than the tapes. Maybe if this second
    system has very old disks, or DMA is disabled for some reason? It's
    also possible there's something odd about the tape controller or tape
    drive on that system and the problem is there. It's probably worth
    doing some disk and tape I/O speed tests on the second system to see if
    the bottleneck is apparent.
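
    For instance, something like this gives a rough idea (block sizes and
    counts are arbitrary, and /a is just an assumed mount point for the
    target file system on the recovery host):

    # raw tape read speed: rewind, then pull a few GB off the tape
    mt -f /dev/rmt/0cbn rewind
    time dd if=/dev/rmt/0cbn of=/dev/null bs=64k count=50000

    # raw disk write speed: write a scratch file to the target file system
    time dd if=/dev/zero of=/a/ddtest.tmp bs=64k count=50000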

    > Try using mbuffer (http://www.maier-komor.de/mbuffer.html) and give your
    > tape a buffer it can fill up at full speed.


    I didn't have much success with mbuffer when I needed it to buffer a
    slow network transfer to a very fast tape system. It could hold more
    than "buffer", but not enough to eliminate the shoe-shining. In that
    case I ended up writing mbin/mbout, which round-robin buffer through
    disk files that can be very large, and which kept the tape either
    completely streaming or waiting doing nothing (better than shoe-shining
    the heads). That's here:

    http://saf.bio.caltech.edu/pub/softw...m_tools.tar.gz

    However, mbin/mbout buffer through disk files, and if all disk writes
    are slow that's not going to help here unless there are spare disks in
    the system. Note that mbuffer was also originally written for the "fast
    disk to slow tape" problem, so some of its modes also use
    disk buffering, which you probably want to avoid in this application.

    > The buffer then can deliver
    > data to ufsrestore at any rate.


    Does your data consist of very many small files per directory? That
    sort of restore can be slow on some systems (I'm not sure about
    ufsrestore), because if there is insufficient memory buffering the
    heads have to jump back and forth between the directory being modified
    and the files being written. On an OS that is set to make sure all data
    hits the disks right away (VMS, for instance), that sort of restore can
    be very, very slow. I'm not sure whether Solaris can be set to be this
    strict about committing data or not. If it can, and this is a
    commercial system, then this effect may be in play.
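
    One way to check (just a suggestion; the 30-second interval is
    arbitrary) is to watch the target disks from another shell while the
    restore runs:

    # sample extended disk statistics every 30 seconds during the restore
    iostat -xn 30
    # a target disk showing high %b and large asvc_t but low kw/s suggests
    # the restore is seek-bound (lots of small synchronous writes) rather
    # than bandwidth-bound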

    Regards,

    David Mathog

  4. Re: 900% difference between backup and restore rates using ufsdump and ufsrestore and LTO3 drives

    On May 4, 10:57 am, David Mathog wrote:
    > On an OS that is set to make sure all data hits the disks right away
    > (VMS, for instance), that sort of restore can be very, very slow. I'm
    > not sure whether Solaris can be set to be this strict about committing
    > data or not. If it can, and this is a commercial system, then this
    > effect may be in play.




    It may not be true on a Solaris 10 based system, but under older
    versions of Solaris the OS essentially did a sync after each write to
    the disk. This causes a very large slowdown in a restore operation.
    To improve restore times there was a utility called fastfs that
    allowed you to shut off the verify after write. fastfs sped up restores
    for me to nearly the same speed as a dump. I haven't explored what to
    do under Solaris 10 yet.
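
    If you can lay your hands on fastfs, the usual pattern (from memory, so
    check the notes that come with your copy; the file system is not
    crash-safe while it is in fast mode, and /a is just an assumed mount
    point for the target file system) is roughly:

    fastfs /a fast              # put /a into fast (unsafe) mode for the restore
    ufsrestore rf /dev/rmt/0cbn
    fastfs /a slow              # back to normal, safe behaviour afterwards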

